diff mbox series

[ovs-dev,v6,05/13] northd: Refactor lflow management into a separate module.

Message ID 20240130212153.1483219-1-numans@ovn.org
State Accepted
Series northd lflow incremental processing

Checks

Context Check Description
ovsrobot/apply-robot warning apply and check: warning
ovsrobot/github-robot-_Build_and_Test fail github build: failed
ovsrobot/github-robot-_ovn-kubernetes success github build: passed
ovsrobot/github-robot-_Build_and_Test success github build: passed
ovsrobot/github-robot-_ovn-kubernetes success github build: passed
ovsrobot/github-robot-_Build_and_Test success github build: passed
ovsrobot/github-robot-_ovn-kubernetes success github build: passed

Commit Message

Numan Siddique Jan. 30, 2024, 9:21 p.m. UTC
From: Numan Siddique <numans@ovn.org>

ovn_lflow_add() and other related functions/macros are now moved
into a separate module - lflow-mgr.c.  This module maintains a
table 'struct lflow_table' for the logical flows.  The lflow table
maintains an hmap to store the logical flows.

It also maintains the logical switch and router dp groups.
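
For reference, the new table structure (as defined in northd/lflow-mgr.c
in this patch) is:

struct lflow_table {
    struct hmap entries; /* hmap of lflows. */
    struct hmap ls_dp_groups; /* hmap of logical switch dp groups. */
    struct hmap lr_dp_groups; /* hmap of logical router dp groups. */
    ssize_t max_seen_lflow_size;
};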

Previous commits, which added lflow incremental processing for
the VIF logical ports, stored the references to the logical ports'
lflows using 'struct lflow_ref_list'.  This struct is renamed to
'struct lflow_ref' and is now part of lflow-mgr.c.  It is modified
a bit to store the resource-to-lflow references.

Example usage of 'struct lflow_ref'.

'struct ovn_port' maintains two instances of lflow_ref, i.e.

struct ovn_port {
   ...
   ...
   struct lflow_ref *lflow_ref;
   struct lflow_ref *stateful_lflow_ref;
};

All the logical flows generated by
build_lswitch_and_lrouter_iterate_by_lsp() use ovn_port->lflow_ref.

All the logical flows generated by build_lsp_lflows_for_lbnats()
use ovn_port->stateful_lflow_ref.

When handling ovn_port changes incrementally, the lflows referenced
in 'struct ovn_port' are cleared, regenerated, and synced to the
SB logical flows.

E.g.

lflow_ref_clear_lflows(op->lflow_ref);
build_lswitch_and_lrouter_iterate_by_lsp(op, ...);
lflow_ref_sync_lflows_to_sb(op->lflow_ref, ...);
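
For reference, below is a condensed sketch of that sequence using the
function names as they appear in lflow-mgr.h in this patch.  The handler
name and the surrounding plumbing are illustrative only, not the exact
northd.c code:

/* Illustrative sketch: incremental lflow handling for one LSP. */
static bool
lsp_lflows_resync(struct ovn_port *op, struct lflow_table *lflows,
                  struct ovsdb_idl_txn *ovnsb_txn,
                  const struct ovn_datapaths *ls_datapaths,
                  const struct ovn_datapaths *lr_datapaths,
                  const struct sbrec_logical_flow_table *sb_lflows,
                  const struct sbrec_logical_dp_group_table *sb_dpgrps)
{
    /* 1. Drop this port's datapath from every lflow it referenced. */
    lflow_ref_unlink_lflows(op->lflow_ref);

    /* 2. Regenerate the port's lflows; each lflow_table_add_lflow()
     *    call is passed op->lflow_ref so the references are recorded
     *    again. */
    build_lswitch_and_lrouter_iterate_by_lsp(op, ...);

    /* 3. Sync only the affected lflows to the SB Logical_Flow table:
     *    lflows left with no datapaths are deleted, new ones are
     *    inserted and existing ones get their datapath (group)
     *    updated. */
    return lflow_ref_sync_lflows(op->lflow_ref, lflows, ovnsb_txn,
                                 ls_datapaths, lr_datapaths,
                                 false, sb_lflows, sb_dpgrps);
}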

This patch makes a few more changes:
  -  Logical flows are now hashed without the logical
     datapaths.  If a logical flow is referenced by just one
     datapath, we don't rehash it.

  -  The synthetic 'hash' column of sbrec_logical_flow now
     doesn't use the logical datapath.  This means that
     when ovn-northd is updated/upgraded to include this commit,
     all the logical flows with the 'logical_datapath' column
     set will get deleted and re-added, causing some disruption.

  -  With commit [1], which added I-P support for logical
     port changes, multiple logical flows with the same match 'M'
     and actions 'A' are generated and stored without
     dp groups, which was not the case prior to
     that patch.
     One example to generate these lflows is:
             ovn-nbctl lsp-set-addresses sw0p1 "MAC1 IP1"
             ovn-nbctl lsp-set-addresses sw1p1 "MAC1 IP1"
             ovn-nbctl lsp-set-addresses sw2p1 "MAC1 IP1"

     With this patch we go back to the earlier way, i.e. one
     logical flow with 'logical_dp_group' set (see the illustration
     after this list of changes).

  -  With this patch, any update to a logical port that
     doesn't result in new logical flows will no longer cause
     deletion and re-addition of the same logical flows.
     E.g.
     ovn-nbctl set logical_switch_port sw0p1 external_ids:foo=bar
     will be a no-op for the SB logical flow table.
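
To illustrate the dp group change above: with only commit [1] applied,
the three lsp-set-addresses commands result in three SB Logical_Flow
rows with the same match and actions, each with its own
'logical_datapath' set.  With this patch they collapse back into a
single row, roughly:

    Before (commit [1] only)          After (this patch)
    3 Logical_Flow rows, each with    1 Logical_Flow row with
    logical_datapath = sw0|sw1|sw2    logical_dp_group = {sw0, sw1, sw2}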

[1] - 8bbd678("northd: Incremental processing of VIF additions in 'lflow' node.")

Signed-off-by: Numan Siddique <numans@ovn.org>
---
 lib/ovn-util.c           |   18 +-
 lib/ovn-util.h           |    2 -
 northd/automake.mk       |    4 +-
 northd/en-lflow.c        |   24 +-
 northd/en-lflow.h        |    6 +
 northd/inc-proc-northd.c |    4 +-
 northd/lflow-mgr.c       | 1420 ++++++++++++++++++++++++++++
 northd/lflow-mgr.h       |  186 ++++
 northd/northd.c          | 1911 ++++++++++----------------------------
 northd/northd.h          |  221 ++++-
 northd/ovn-northd.c      |    4 +
 tests/ovn-northd.at      |  216 +++++
 12 files changed, 2553 insertions(+), 1463 deletions(-)
 create mode 100644 northd/lflow-mgr.c
 create mode 100644 northd/lflow-mgr.h

Comments

Numan Siddique Feb. 1, 2024, 3:11 p.m. UTC | #1
On Tue, Jan 30, 2024 at 4:22 PM <numans@ovn.org> wrote:
>
> From: Numan Siddique <numans@ovn.org>
>
> ovn_lflow_add() and other related functions/macros are now moved
> into a separate module - lflow-mgr.c.  This module maintains a
> table 'struct lflow_table' for the logical flows.  lflow table
> maintains a hmap to store the logical flows.
>
> It also maintains the logical switch and router dp groups.
>
> Previous commits which added lflow incremental processing for
> the VIF logical ports, stored the references to
> the logical ports' lflows using 'struct lflow_ref_list'.  This
> struct is renamed to 'struct lflow_ref' and is part of lflow-mgr.c.
> It is  modified a bit to store the resource to lflow references.
>
> Example usage of 'struct lflow_ref'.
>
> 'struct ovn_port' maintains 2 instances of lflow_ref.  i,e
>
> struct ovn_port {
>    ...
>    ...
>    struct lflow_ref *lflow_ref;
>    struct lflow_ref *stateful_lflow_ref;
> };
>
> All the logical flows generated by
> build_lswitch_and_lrouter_iterate_by_lsp() uses the ovn_port->lflow_ref.
>
> All the logical flows generated by build_lsp_lflows_for_lbnats()
> uses the ovn_port->stateful_lflow_ref.
>
> When handling the ovn_port changes incrementally, the lflows referenced
> in 'struct ovn_port' are cleared and regenerated and synced to the
> SB logical flows.
>
> eg.
>
> lflow_ref_clear_lflows(op->lflow_ref);
> build_lswitch_and_lrouter_iterate_by_lsp(op, ...);
> lflow_ref_sync_lflows_to_sb(op->lflow_ref, ...);
>
> This patch does few more changes:
>   -  Logical flows are now hashed without the logical
>      datapaths.  If a logical flow is referenced by just one
>      datapath, we don't rehash it.
>
>   -  The synthetic 'hash' column of sbrec_logical_flow now
>      doesn't use the logical datapath.  This means that
>      when ovn-northd is updated/upgraded and has this commit,
>      all the logical flows with 'logical_datapath' column
>      set will get deleted and re-added causing some disruptions.
>
>   -  With the commit [1] which added I-P support for logical
>      port changes, multiple logical flows with same match 'M'
>      and actions 'A' are generated and stored without the
>      dp groups, which was not the case prior to
>      that patch.
>      One example to generate these lflows is:
>              ovn-nbctl lsp-set-addresses sw0p1 "MAC1 IP1"
>              ovn-nbctl lsp-set-addresses sw1p1 "MAC1 IP1"
>              ovn-nbctl lsp-set-addresses sw2p1 "MAC1 IP1"
>
>      Now with this patch we go back to the earlier way.  i.e
>      one logical flow with logical_dp_groups set.
>
>   -  With this patch any updates to a logical port which
>      doesn't result in new logical flows will not result in
>      deletion and addition of same logical flows.
>      Eg.
>      ovn-nbctl set logical_switch_port sw0p1 external_ids:foo=bar
>      will be a no-op to the SB logical flow table.
>
> [1] - 8bbd678("northd: Incremental processing of VIF additions in 'lflow' node.")
>
> Signed-off-by: Numan Siddique <numans@ovn.org>

Recheck-request: github-robot-_Build_and_Test

> ---
>  lib/ovn-util.c           |   18 +-
>  lib/ovn-util.h           |    2 -
>  northd/automake.mk       |    4 +-
>  northd/en-lflow.c        |   24 +-
>  northd/en-lflow.h        |    6 +
>  northd/inc-proc-northd.c |    4 +-
>  northd/lflow-mgr.c       | 1420 ++++++++++++++++++++++++++++
>  northd/lflow-mgr.h       |  186 ++++
>  northd/northd.c          | 1911 ++++++++++----------------------------
>  northd/northd.h          |  221 ++++-
>  northd/ovn-northd.c      |    4 +
>  tests/ovn-northd.at      |  216 +++++
>  12 files changed, 2553 insertions(+), 1463 deletions(-)
>  create mode 100644 northd/lflow-mgr.c
>  create mode 100644 northd/lflow-mgr.h
>
> diff --git a/lib/ovn-util.c b/lib/ovn-util.c
> index 3e69a25347..ee5cbcdc3c 100644
> --- a/lib/ovn-util.c
> +++ b/lib/ovn-util.c
> @@ -622,13 +622,10 @@ ovn_pipeline_from_name(const char *pipeline)
>  uint32_t
>  sbrec_logical_flow_hash(const struct sbrec_logical_flow *lf)
>  {
> -    const struct sbrec_datapath_binding *ld = lf->logical_datapath;
> -    uint32_t hash = ovn_logical_flow_hash(lf->table_id,
> -                                          ovn_pipeline_from_name(lf->pipeline),
> -                                          lf->priority, lf->match,
> -                                          lf->actions);
> -
> -    return ld ? ovn_logical_flow_hash_datapath(&ld->header_.uuid, hash) : hash;
> +    return ovn_logical_flow_hash(lf->table_id,
> +                                 ovn_pipeline_from_name(lf->pipeline),
> +                                 lf->priority, lf->match,
> +                                 lf->actions);
>  }
>
>  uint32_t
> @@ -641,13 +638,6 @@ ovn_logical_flow_hash(uint8_t table_id, enum ovn_pipeline pipeline,
>      return hash_string(actions, hash);
>  }
>
> -uint32_t
> -ovn_logical_flow_hash_datapath(const struct uuid *logical_datapath,
> -                               uint32_t hash)
> -{
> -    return hash_add(hash, uuid_hash(logical_datapath));
> -}
> -
>
>  struct tnlid_node {
>      struct hmap_node hmap_node;
> diff --git a/lib/ovn-util.h b/lib/ovn-util.h
> index 16e054812c..042e6bf82c 100644
> --- a/lib/ovn-util.h
> +++ b/lib/ovn-util.h
> @@ -146,8 +146,6 @@ uint32_t sbrec_logical_flow_hash(const struct sbrec_logical_flow *);
>  uint32_t ovn_logical_flow_hash(uint8_t table_id, enum ovn_pipeline pipeline,
>                                 uint16_t priority,
>                                 const char *match, const char *actions);
> -uint32_t ovn_logical_flow_hash_datapath(const struct uuid *logical_datapath,
> -                                        uint32_t hash);
>  void ovn_conn_show(struct unixctl_conn *conn, int argc OVS_UNUSED,
>                     const char *argv[] OVS_UNUSED, void *idl_);
>
> diff --git a/northd/automake.mk b/northd/automake.mk
> index a178541759..7c6d56a4ff 100644
> --- a/northd/automake.mk
> +++ b/northd/automake.mk
> @@ -33,7 +33,9 @@ northd_ovn_northd_SOURCES = \
>         northd/inc-proc-northd.c \
>         northd/inc-proc-northd.h \
>         northd/ipam.c \
> -       northd/ipam.h
> +       northd/ipam.h \
> +       northd/lflow-mgr.c \
> +       northd/lflow-mgr.h
>  northd_ovn_northd_LDADD = \
>         lib/libovn.la \
>         $(OVSDB_LIBDIR)/libovsdb.la \
> diff --git a/northd/en-lflow.c b/northd/en-lflow.c
> index b0161b98d9..fafdc24465 100644
> --- a/northd/en-lflow.c
> +++ b/northd/en-lflow.c
> @@ -24,6 +24,7 @@
>  #include "en-ls-stateful.h"
>  #include "en-northd.h"
>  #include "en-meters.h"
> +#include "lflow-mgr.h"
>
>  #include "lib/inc-proc-eng.h"
>  #include "northd.h"
> @@ -58,6 +59,8 @@ lflow_get_input_data(struct engine_node *node,
>          EN_OVSDB_GET(engine_get_input("SB_multicast_group", node));
>      lflow_input->sbrec_igmp_group_table =
>          EN_OVSDB_GET(engine_get_input("SB_igmp_group", node));
> +    lflow_input->sbrec_logical_dp_group_table =
> +        EN_OVSDB_GET(engine_get_input("SB_logical_dp_group", node));
>
>      lflow_input->sbrec_mcast_group_by_name_dp =
>             engine_ovsdb_node_get_index(
> @@ -90,17 +93,19 @@ void en_lflow_run(struct engine_node *node, void *data)
>      struct hmap bfd_connections = HMAP_INITIALIZER(&bfd_connections);
>      lflow_input.bfd_connections = &bfd_connections;
>
> +    stopwatch_start(BUILD_LFLOWS_STOPWATCH_NAME, time_msec());
> +
>      struct lflow_data *lflow_data = data;
> -    lflow_data_destroy(lflow_data);
> -    lflow_data_init(lflow_data);
> +    lflow_table_clear(lflow_data->lflow_table);
> +    lflow_reset_northd_refs(&lflow_input);
>
> -    stopwatch_start(BUILD_LFLOWS_STOPWATCH_NAME, time_msec());
>      build_bfd_table(eng_ctx->ovnsb_idl_txn,
>                      lflow_input.nbrec_bfd_table,
>                      lflow_input.sbrec_bfd_table,
>                      lflow_input.lr_ports,
>                      &bfd_connections);
> -    build_lflows(eng_ctx->ovnsb_idl_txn, &lflow_input, &lflow_data->lflows);
> +    build_lflows(eng_ctx->ovnsb_idl_txn, &lflow_input,
> +                 lflow_data->lflow_table);
>      bfd_cleanup_connections(lflow_input.nbrec_bfd_table,
>                              &bfd_connections);
>      hmap_destroy(&bfd_connections);
> @@ -131,7 +136,8 @@ lflow_northd_handler(struct engine_node *node,
>
>      if (!lflow_handle_northd_port_changes(eng_ctx->ovnsb_idl_txn,
>                                            &northd_data->trk_data.trk_lsps,
> -                                          &lflow_input, &lflow_data->lflows)) {
> +                                          &lflow_input,
> +                                          lflow_data->lflow_table)) {
>          return false;
>      }
>
> @@ -160,11 +166,13 @@ void *en_lflow_init(struct engine_node *node OVS_UNUSED,
>                       struct engine_arg *arg OVS_UNUSED)
>  {
>      struct lflow_data *data = xmalloc(sizeof *data);
> -    lflow_data_init(data);
> +    data->lflow_table = lflow_table_alloc();
> +    lflow_table_init(data->lflow_table);
>      return data;
>  }
>
> -void en_lflow_cleanup(void *data)
> +void en_lflow_cleanup(void *data_)
>  {
> -    lflow_data_destroy(data);
> +    struct lflow_data *data = data_;
> +    lflow_table_destroy(data->lflow_table);
>  }
> diff --git a/northd/en-lflow.h b/northd/en-lflow.h
> index 5417b2faff..f7325c56b1 100644
> --- a/northd/en-lflow.h
> +++ b/northd/en-lflow.h
> @@ -9,6 +9,12 @@
>
>  #include "lib/inc-proc-eng.h"
>
> +struct lflow_table;
> +
> +struct lflow_data {
> +    struct lflow_table *lflow_table;
> +};
> +
>  void en_lflow_run(struct engine_node *node, void *data);
>  void *en_lflow_init(struct engine_node *node, struct engine_arg *arg);
>  void en_lflow_cleanup(void *data);
> diff --git a/northd/inc-proc-northd.c b/northd/inc-proc-northd.c
> index 9ce4279ee8..0e17bfe2e6 100644
> --- a/northd/inc-proc-northd.c
> +++ b/northd/inc-proc-northd.c
> @@ -99,7 +99,8 @@ static unixctl_cb_func chassis_features_list;
>      SB_NODE(bfd, "bfd") \
>      SB_NODE(fdb, "fdb") \
>      SB_NODE(static_mac_binding, "static_mac_binding") \
> -    SB_NODE(chassis_template_var, "chassis_template_var")
> +    SB_NODE(chassis_template_var, "chassis_template_var") \
> +    SB_NODE(logical_dp_group, "logical_dp_group")
>
>  enum sb_engine_node {
>  #define SB_NODE(NAME, NAME_STR) SB_##NAME,
> @@ -229,6 +230,7 @@ void inc_proc_northd_init(struct ovsdb_idl_loop *nb,
>      engine_add_input(&en_lflow, &en_sb_igmp_group, NULL);
>      engine_add_input(&en_lflow, &en_lr_stateful, NULL);
>      engine_add_input(&en_lflow, &en_ls_stateful, NULL);
> +    engine_add_input(&en_lflow, &en_sb_logical_dp_group, NULL);
>      engine_add_input(&en_lflow, &en_northd, lflow_northd_handler);
>      engine_add_input(&en_lflow, &en_port_group, lflow_port_group_handler);
>
> diff --git a/northd/lflow-mgr.c b/northd/lflow-mgr.c
> new file mode 100644
> index 0000000000..3b423192bb
> --- /dev/null
> +++ b/northd/lflow-mgr.c
> @@ -0,0 +1,1420 @@
> +/*
> + * Copyright (c) 2024, Red Hat, Inc.
> + *
> + * Licensed under the Apache License, Version 2.0 (the "License");
> + * you may not use this file except in compliance with the License.
> + * You may obtain a copy of the License at:
> + *
> + *     http://www.apache.org/licenses/LICENSE-2.0
> + *
> + * Unless required by applicable law or agreed to in writing, software
> + * distributed under the License is distributed on an "AS IS" BASIS,
> + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> + * See the License for the specific language governing permissions and
> + * limitations under the License.
> + */
> +
> +#include <config.h>
> +
> +/* OVS includes */
> +#include "include/openvswitch/thread.h"
> +#include "lib/bitmap.h"
> +#include "openvswitch/vlog.h"
> +
> +/* OVN includes */
> +#include "debug.h"
> +#include "lflow-mgr.h"
> +#include "lib/ovn-parallel-hmap.h"
> +
> +VLOG_DEFINE_THIS_MODULE(lflow_mgr);
> +
> +/* Static function declarations. */
> +struct ovn_lflow;
> +
> +static void ovn_lflow_init(struct ovn_lflow *, struct ovn_datapath *od,
> +                           size_t dp_bitmap_len, enum ovn_stage stage,
> +                           uint16_t priority, char *match,
> +                           char *actions, char *io_port,
> +                           char *ctrl_meter, char *stage_hint,
> +                           const char *where);
> +static struct ovn_lflow *ovn_lflow_find(const struct hmap *lflows,
> +                                        enum ovn_stage stage,
> +                                        uint16_t priority, const char *match,
> +                                        const char *actions,
> +                                        const char *ctrl_meter, uint32_t hash);
> +static void ovn_lflow_destroy(struct lflow_table *lflow_table,
> +                              struct ovn_lflow *lflow);
> +static char *ovn_lflow_hint(const struct ovsdb_idl_row *row);
> +
> +static struct ovn_lflow *do_ovn_lflow_add(
> +    struct lflow_table *, const struct ovn_datapath *,
> +    const unsigned long *dp_bitmap, size_t dp_bitmap_len, uint32_t hash,
> +    enum ovn_stage stage, uint16_t priority, const char *match,
> +    const char *actions, const char *io_port,
> +    const char *ctrl_meter,
> +    const struct ovsdb_idl_row *stage_hint,
> +    const char *where);
> +
> +
> +static struct ovs_mutex *lflow_hash_lock(const struct hmap *lflow_table,
> +                                         uint32_t hash);
> +static void lflow_hash_unlock(struct ovs_mutex *hash_lock);
> +
> +static struct ovn_dp_group *ovn_dp_group_get(
> +    struct hmap *dp_groups, size_t desired_n,
> +    const unsigned long *desired_bitmap,
> +    size_t bitmap_len);
> +static struct ovn_dp_group *ovn_dp_group_create(
> +    struct ovsdb_idl_txn *ovnsb_txn, struct hmap *dp_groups,
> +    struct sbrec_logical_dp_group *, size_t desired_n,
> +    const unsigned long *desired_bitmap,
> +    size_t bitmap_len, bool is_switch,
> +    const struct ovn_datapaths *ls_datapaths,
> +    const struct ovn_datapaths *lr_datapaths);
> +static struct ovn_dp_group *ovn_dp_group_get(
> +    struct hmap *dp_groups, size_t desired_n,
> +    const unsigned long *desired_bitmap,
> +    size_t bitmap_len);
> +static struct sbrec_logical_dp_group *ovn_sb_insert_or_update_logical_dp_group(
> +    struct ovsdb_idl_txn *ovnsb_txn,
> +    struct sbrec_logical_dp_group *,
> +    const unsigned long *dpg_bitmap,
> +    const struct ovn_datapaths *);
> +static struct ovn_dp_group *ovn_dp_group_find(const struct hmap *dp_groups,
> +                                              const unsigned long *dpg_bitmap,
> +                                              size_t bitmap_len,
> +                                              uint32_t hash);
> +static void ovn_dp_group_use(struct ovn_dp_group *);
> +static void ovn_dp_group_release(struct hmap *dp_groups,
> +                                 struct ovn_dp_group *);
> +static void ovn_dp_group_destroy(struct ovn_dp_group *dpg);
> +static void ovn_dp_group_add_with_reference(struct ovn_lflow *,
> +                                            const struct ovn_datapath *od,
> +                                            const unsigned long *dp_bitmap,
> +                                            size_t bitmap_len);
> +
> +static bool lflow_ref_sync_lflows__(
> +    struct lflow_ref  *, struct lflow_table *,
> +    struct ovsdb_idl_txn *ovnsb_txn,
> +    const struct ovn_datapaths *ls_datapaths,
> +    const struct ovn_datapaths *lr_datapaths,
> +    bool ovn_internal_version_changed,
> +    const struct sbrec_logical_flow_table *,
> +    const struct sbrec_logical_dp_group_table *);
> +static bool sync_lflow_to_sb(struct ovn_lflow *,
> +                             struct ovsdb_idl_txn *ovnsb_txn,
> +                             struct lflow_table *,
> +                             const struct ovn_datapaths *ls_datapaths,
> +                             const struct ovn_datapaths *lr_datapaths,
> +                             bool ovn_internal_version_changed,
> +                             const struct sbrec_logical_flow *sbflow,
> +                             const struct sbrec_logical_dp_group_table *);
> +
> +extern int parallelization_state;
> +extern thread_local size_t thread_lflow_counter;
> +
> +struct dp_refcnt;
> +static struct dp_refcnt *dp_refcnt_find(struct hmap *dp_refcnts_map,
> +                                        size_t dp_index);
> +static void dp_refcnt_use(struct hmap *dp_refcnts_map, size_t dp_index);
> +static bool dp_refcnt_release(struct hmap *dp_refcnts_map, size_t dp_index);
> +static void ovn_lflow_clear_dp_refcnts_map(struct ovn_lflow *);
> +static struct lflow_ref_node *lflow_ref_node_find(struct hmap *lflow_ref_nodes,
> +                                                  struct ovn_lflow *lflow,
> +                                                  uint32_t lflow_hash);
> +static void lflow_ref_node_destroy(struct lflow_ref_node *);
> +
> +static bool lflow_hash_lock_initialized = false;
> +/* The lflow_hash_lock is a mutex array that protects updates to the shared
> + * lflow table across threads when parallel lflow build and dp-group are both
> + * enabled. To avoid high contention between threads, a big array of mutexes
> + * is used instead of just one. This is possible because when parallel build
> + * is used we only use hmap_insert_fast() to update the hmap, which would not
> + * touch the bucket array but only the list in a single bucket. We only need to
> + * make sure that when adding lflows to the same hash bucket, the same lock is
> + * used, so that no two threads can add to the bucket at the same time.  It is
> + * ok that the same lock is used to protect multiple buckets, so a fixed sized
> + * mutex array is used instead of 1-1 mapping to the hash buckets. This
> + * simplifies the implementation while effectively reducing lock contention,
> + * because the chance of different threads contending for the same lock among
> + * the large number of locks is very low. */
> +#define LFLOW_HASH_LOCK_MASK 0xFFFF
> +static struct ovs_mutex lflow_hash_locks[LFLOW_HASH_LOCK_MASK + 1];
> +
> +/* Full thread safety analysis is not possible with hash locks, because
> + * they are taken conditionally based on the 'parallelization_state' and
> + * a flow hash.  Also, the order in which two hash locks are taken is not
> + * predictable during the static analysis.
> + *
> + * Since the order of taking two locks depends on a random hash, to avoid
> + * ABBA deadlocks, no two hash locks can be nested.  In that sense an array
> + * of hash locks is similar to a single mutex.
> + *
> + * Using a fake mutex to partially simulate thread safety restrictions, as
> + * if it were actually a single mutex.
> + *
> + * OVS_NO_THREAD_SAFETY_ANALYSIS below allows us to ignore conditional
> + * nature of the lock.  Unlike other attributes, it applies to the
> + * implementation and not to the interface.  So, we can define a function
> + * that acquires the lock without analysing the way it does that.
> + */
> +extern struct ovs_mutex fake_hash_mutex;
> +
> +/* Represents a logical ovn flow (lflow).
> + *
> + * A logical flow with match 'M' and actions 'A' - L(M, A) is created
> + * when the lflow engine node (northd.c) calls lflow_table_add_lflow
> + * (or one of the helper macros ovn_lflow_add_*).
> + *
> + * Each lflow is stored in the lflow_table (see 'struct lflow_table' below)
> + * and possibly referenced by zero or more lflow_refs
> + * (see 'struct lflow_ref' and 'struct lflow_ref_node' below).
> + *
> + */
> +struct ovn_lflow {
> +    struct hmap_node hmap_node;
> +
> +    struct ovn_datapath *od;     /* 'logical_datapath' in SB schema.  */
> +    unsigned long *dpg_bitmap;   /* Bitmap of all datapaths by their 'index'.*/
> +    enum ovn_stage stage;
> +    uint16_t priority;
> +    char *match;
> +    char *actions;
> +    char *io_port;
> +    char *stage_hint;
> +    char *ctrl_meter;
> +    size_t n_ods;                /* Number of datapaths referenced by 'od' and
> +                                  * 'dpg_bitmap'. */
> +    struct ovn_dp_group *dpg;    /* Link to unique Sb datapath group. */
> +    const char *where;
> +
> +    struct uuid sb_uuid;         /* SB DB row uuid, specified by northd. */
> +    struct ovs_list referenced_by;  /* List of struct lflow_ref_node. */
> +    struct hmap dp_refcnts_map; /* Maintains the number of times this ovn_lflow
> +                                 * is referenced by a given datapath.
> +                                 * Contains 'struct dp_refcnt' in the map. */
> +};
> +
> +/* Logical flow table. */
> +struct lflow_table {
> +    struct hmap entries; /* hmap of lflows. */
> +    struct hmap ls_dp_groups; /* hmap of logical switch dp groups. */
> +    struct hmap lr_dp_groups; /* hmap of logical router dp groups. */
> +    ssize_t max_seen_lflow_size;
> +};
> +
> +struct lflow_table *
> +lflow_table_alloc(void)
> +{
> +    struct lflow_table *lflow_table = xzalloc(sizeof *lflow_table);
> +    lflow_table->max_seen_lflow_size = 128;
> +
> +    return lflow_table;
> +}
> +
> +void
> +lflow_table_init(struct lflow_table *lflow_table)
> +{
> +    fast_hmap_size_for(&lflow_table->entries,
> +                       lflow_table->max_seen_lflow_size);
> +    ovn_dp_groups_init(&lflow_table->ls_dp_groups);
> +    ovn_dp_groups_init(&lflow_table->lr_dp_groups);
> +}
> +
> +void
> +lflow_table_clear(struct lflow_table *lflow_table)
> +{
> +    struct ovn_lflow *lflow;
> +    HMAP_FOR_EACH_SAFE (lflow, hmap_node, &lflow_table->entries) {
> +        ovn_lflow_destroy(lflow_table, lflow);
> +    }
> +
> +    ovn_dp_groups_clear(&lflow_table->ls_dp_groups);
> +    ovn_dp_groups_clear(&lflow_table->lr_dp_groups);
> +}
> +
> +void
> +lflow_table_destroy(struct lflow_table *lflow_table)
> +{
> +    lflow_table_clear(lflow_table);
> +    hmap_destroy(&lflow_table->entries);
> +    ovn_dp_groups_destroy(&lflow_table->ls_dp_groups);
> +    ovn_dp_groups_destroy(&lflow_table->lr_dp_groups);
> +    free(lflow_table);
> +}
> +
> +void
> +lflow_table_expand(struct lflow_table *lflow_table)
> +{
> +    hmap_expand(&lflow_table->entries);
> +
> +    if (hmap_count(&lflow_table->entries) >
> +            lflow_table->max_seen_lflow_size) {
> +        lflow_table->max_seen_lflow_size = hmap_count(&lflow_table->entries);
> +    }
> +}
> +
> +void
> +lflow_table_set_size(struct lflow_table *lflow_table, size_t size)
> +{
> +    lflow_table->entries.n = size;
> +}
> +
> +void
> +lflow_table_sync_to_sb(struct lflow_table *lflow_table,
> +                       struct ovsdb_idl_txn *ovnsb_txn,
> +                       const struct ovn_datapaths *ls_datapaths,
> +                       const struct ovn_datapaths *lr_datapaths,
> +                       bool ovn_internal_version_changed,
> +                       const struct sbrec_logical_flow_table *sb_flow_table,
> +                       const struct sbrec_logical_dp_group_table *dpgrp_table)
> +{
> +    struct hmap lflows_temp = HMAP_INITIALIZER(&lflows_temp);
> +    struct hmap *lflows = &lflow_table->entries;
> +    struct ovn_lflow *lflow;
> +
> +    /* Push changes to the Logical_Flow table to database. */
> +    const struct sbrec_logical_flow *sbflow;
> +    SBREC_LOGICAL_FLOW_TABLE_FOR_EACH_SAFE (sbflow, sb_flow_table) {
> +        struct sbrec_logical_dp_group *dp_group = sbflow->logical_dp_group;
> +        struct ovn_datapath *logical_datapath_od = NULL;
> +        size_t i;
> +
> +        /* Find one valid datapath to get the datapath type. */
> +        struct sbrec_datapath_binding *dp = sbflow->logical_datapath;
> +        if (dp) {
> +            logical_datapath_od = ovn_datapath_from_sbrec(
> +                &ls_datapaths->datapaths, &lr_datapaths->datapaths, dp);
> +            if (logical_datapath_od
> +                && ovn_datapath_is_stale(logical_datapath_od)) {
> +                logical_datapath_od = NULL;
> +            }
> +        }
> +        for (i = 0; dp_group && i < dp_group->n_datapaths; i++) {
> +            logical_datapath_od = ovn_datapath_from_sbrec(
> +                &ls_datapaths->datapaths, &lr_datapaths->datapaths,
> +                dp_group->datapaths[i]);
> +            if (logical_datapath_od
> +                && !ovn_datapath_is_stale(logical_datapath_od)) {
> +                break;
> +            }
> +            logical_datapath_od = NULL;
> +        }
> +
> +        if (!logical_datapath_od) {
> +            /* This lflow has no valid logical datapaths. */
> +            sbrec_logical_flow_delete(sbflow);
> +            continue;
> +        }
> +
> +        enum ovn_pipeline pipeline
> +            = !strcmp(sbflow->pipeline, "ingress") ? P_IN : P_OUT;
> +
> +        lflow = ovn_lflow_find(
> +            lflows,
> +            ovn_stage_build(ovn_datapath_get_type(logical_datapath_od),
> +                            pipeline, sbflow->table_id),
> +            sbflow->priority, sbflow->match, sbflow->actions,
> +            sbflow->controller_meter, sbflow->hash);
> +        if (lflow) {
> +            sync_lflow_to_sb(lflow, ovnsb_txn, lflow_table, ls_datapaths,
> +                             lr_datapaths, ovn_internal_version_changed,
> +                             sbflow, dpgrp_table);
> +
> +            hmap_remove(lflows, &lflow->hmap_node);
> +            hmap_insert(&lflows_temp, &lflow->hmap_node,
> +                        hmap_node_hash(&lflow->hmap_node));
> +        } else {
> +            sbrec_logical_flow_delete(sbflow);
> +        }
> +    }
> +
> +    HMAP_FOR_EACH_SAFE (lflow, hmap_node, lflows) {
> +        sync_lflow_to_sb(lflow, ovnsb_txn, lflow_table, ls_datapaths,
> +                         lr_datapaths, ovn_internal_version_changed,
> +                         NULL, dpgrp_table);
> +
> +        hmap_remove(lflows, &lflow->hmap_node);
> +        hmap_insert(&lflows_temp, &lflow->hmap_node,
> +                    hmap_node_hash(&lflow->hmap_node));
> +    }
> +    hmap_swap(lflows, &lflows_temp);
> +    hmap_destroy(&lflows_temp);
> +}
> +
> +/* Logical flow sync using 'struct lflow_ref'
> + * ==========================================
> + * The 'struct lflow_ref' represents a collection of (or references to)
> + * logical flows (struct ovn_lflow) which belong to a logical entity 'E'.
> + * This entity 'E' is external to the lflow manager (see northd.h and northd.c),
> + * e.g. a logical datapath (struct ovn_datapath), logical switch and router ports
> + * (struct ovn_port), load balancer (struct lb_datapath) etc.
> + *
> + * General guidelines on using 'struct lflow_ref'.
> + *   - For an entity 'E', create an instance of lflow_ref
> + *           E->lflow_ref = lflow_ref_create();
> + *
> + *   - For each logical flow L(M, A) generated for the entity 'E'
> + *     pass E->lflow_ref when adding L(M, A) to the lflow table.
> + *     Eg. lflow_table_add_lflow(lflow_table, od_of_E, M, A, .., E->lflow_ref);
> + *
> + * If lflows L1, L2 and L3 are generated for 'E', then
> + * E->lflow_ref stores these in its hmap.
> + * i.e. E->lflow_ref->lflow_ref_nodes = hmap[LRN(L1, E1), LRN(L2, E1),
> + *                                          LRN(L3, E1)]
> + *
> + * LRN is an instance of 'struct lflow_ref_node'.
> + * 'struct lflow_ref_node' is used to store a logical lflow L(M, A) as a
> + * reference in the lflow_ref.  It is possible that an lflow L(M,A) can be
> + * referenced by one or more lflow_ref's.  For each reference, an instance of
> + * this struct 'lflow_ref_node' is created.
> + *
> + * For example, if entity E1 generates lflows L1, L2 and L3
> + * and entity E2 generates lflows L1, L3, and L4 then
> + * an instance of this struct is created for each entity.
> + * For example LRN(L1, E1).
> + *
> + * Each logical flow's L also maintains a list of its references in the
> + * ovn_lflow->referenced_by list.
> + *
> + *
> + *
> + *                L1            L2             L3             L4
> + *                |             |  (list)      |              |
> + *   (lflow_ref)  v             v              v              v
> + *  ----------------------------------------------------------------------
> + * | E1 (hmap) => LRN(L1,E1) => LRN(L2, E1) => LRN(L3, E1)    |           |
> + * |              |                            |              |           |
> + * |              v                            v              v           |
> + * | E2 (hmap) => LRN(L1,E2) ================> LRN(L3, E2) => LRN(L4, E2) |
> + *  ----------------------------------------------------------------------
> + *
> + *
> + * Life cycle of 'struct lflow_ref_node'
> + * =====================================
> + * For a given logical flow L1 and entity E1's lflow_ref,
> + *  1. LRN(L1, E1) is created in lflow_table_add_lflow() and its 'linked' flag
> + *     is set to true.
> + *  2. LRN(L1, E1) is stored in the hmap - E1->lflow_ref->lflow_ref_nodes.
> + *  3. LRN(L1, E1) is also stored in the linked list L1->referenced_by.
> + *  4. LRN(L1, E1)->linked is set to false when the client calls
> + *     lflow_ref_unlink_lflows(E1->lflow_ref).
> + *  5. LRN(L1, E1)->linked is set to true again when the client calls
> + *     lflow_table_add_lflow(L1, ..., E1->lflow_ref) and LRN(L1, E1)
> + *     is already present.
> + *  6. LRN(L1, E1) is destroyed if LRN(L1, E1)->linked is false
> + *     when the client calls lflow_ref_sync_lflows().
> + *  7. LRN(L1, E1) is also destroyed in lflow_ref_clear(E1->lflow_ref).
> + *
> + *
> + * Incremental lflow generation for a logical entity
> + * =================================================
> + * Let's take the above example again.
> + *
> + *
> + *                L1            L2             L3             L4
> + *                |             |  (list)      |              |
> + *   (lflow_ref)  v             v              v              v
> + *  ----------------------------------------------------------------------
> + * | E1 (hmap) => LRN(L1,E1) => LRN(L2, E1) => LRN(L3, E1)    |           |
> + * |              |                            |              |           |
> + * |              v                            v              v           |
> + * | E2 (hmap) => LRN(L1,E2) ================> LRN(L3, E2) => LRN(L4, E2) |
> + *  ----------------------------------------------------------------------
> + *
> + *
> + * L1 is referenced by E1 and E2
> + * L2 is referenced by just E1
> + * L3 is referenced by E1 and E2
> + * L4 is referenced by just E2
> + *
> + * L1->dpg_bitmap = [E1->od->index, E2->od->index]
> + * L2->dpg_bitmap = [E1->od->index]
> + * L3->dpg_bitmap = [E1->od->index, E2->od->index]
> + * L4->dpg_bitmap = [E2->od->index]
> + *
> + *
> + * When 'E' gets updated,
> + *   1.  the client should first call
> + *       lflow_ref_unlink_lflows(E1->lflow_ref);
> + *
> + *       This function sets the 'linked' flag to false and clears the dp bitmap
> + *       of linked lflows.
> + *
> + *       LRN(L1,E1)->linked = false;
> + *       LRN(L2,E1)->linked = false;
> + *       LRN(L3,E1)->linked = false;
> + *
> + *       bitmap status of all lflows in the lflows table
> + *       -----------------------------------------------
> + *       L1->dpg_bitmap = [E2->od->index]
> + *       L2->dpg_bitmap = []
> + *       L3->dpg_bitmap = [E2->od->index]
> + *       L4->dpg_bitmap = [E2->od->index]
> + *
> + *   2.  In step (2), the client should generate the logical flows again
> + *       for 'E1'.  Let's say it calls:
> + *       lflow_table_add_lflow(lflow_table, L3, E1->lflow_ref)
> + *       lflow_table_add_lflow(lflow_table, L5, E1->lflow_ref)
> + *
> + *       So, E1 generates the flows L3 and L5 and discards L1 and L2.
> + *
> + *       Below is the state of LRNs of E1
> + *       LRN(L1,E1)->linked = false;
> + *       LRN(L2,E1)->linked = false;
> + *       LRN(L3,E1)->linked = true;
> + *       LRN(L5,E1)->linked = true;
> + *
> + *       bitmap status of all lflows in the lflow table after end of step (2)
> + *       --------------------------------------------------------------------
> + *       L1->dpg_bitmap = [E2->od->index]
> + *       L2->dpg_bitmap = []
> + *       L3->dpg_bitmap = [E1->od->index, E2->od->index]
> + *       L4->dpg_bitmap = [E2->od->index]
> + *       L5->dpg_bitmap = [E1->od->index]
> + *
> + *   3.  In step (3), the client should sync E1's lflows by calling
> + *       lflow_ref_sync_lflows(E1->lflow_ref,....);
> + *
> + *       Below is how the logical flows in the SB DB get updated:
> + *       lflow L1:
> + *              SB:L1->logical_dp_group = NULL;
> + *              SB:L1->logical_datapath = E2->od;
> + *
> + *       lflow L2: L2 is deleted since no datapath is using it.
> + *
> + *       lflow L3: No changes
> + *
> + *       lflow L5: New row is created for this.
> + *
> + * After step (3)
> + *
> + *                L1            L5             L3             L4
> + *                |             |  (list)      |              |
> + *   (lflow_ref)  v             v              v              v
> + *  ----------------------------------------------------------------------
> + * | E1 (hmap) ===============> LRN(L5, E1) => LRN(L3, E1)    |           |
> + * |              |                            |              |           |
> + * |              v                            v              v           |
> + * | E2 (hmap) => LRN(L1,E2) ================> LRN(L3, E2) => LRN(L4, E2) |
> + *  ----------------------------------------------------------------------
> + *
> + * Thread safety in lflow_ref
> + * ==========================
> + * The function lflow_table_add_lflow() is not thread safe for lflow_ref.
> + * The client should ensure that the same lflow_ref instance is not used
> + * by multiple threads when calling lflow_table_add_lflow().
> + *
> + * One way to ensure thread safety is to maintain an array of hash locks
> + * in each lflow_ref, just like the static variable lflow_hash_locks of
> + * type ovs_mutex. This would mean that the client has to reconcile the
> + * lflow_ref hmap lflow_ref_nodes (by calling hmap_expand()) after the
> + * lflow generation is complete.  (See lflow_table_expand()).
> + *
> + * Presently the client of the lflow manager (northd.c) doesn't call
> + * lflow_table_add_lflow() in multiple threads for the same lflow_ref.
> + * But this may change in the future and we may need to add thread
> + * safety support.
> + *
> + * Until then care should be taken by the contributors to avoid this
> + * scenario.
> + */
> +struct lflow_ref {
> +    /* hmap of lflow ref nodes. hmap_node is 'struct lflow_ref_node *'. */
> +    struct hmap lflow_ref_nodes;
> +};
> +
> +struct lflow_ref_node {
> +    /* hmap node in the hmap - 'struct lflow_ref->lflow_ref_nodes' */
> +    struct hmap_node ref_node;
> +    struct lflow_ref *lflow_ref; /* pointer to 'lflow_ref' it is part of. */
> +
> +    /* This list follows different objects that reference the same lflow. List
> +     * head is ovn_lflow->referenced_by. */
> +    struct ovs_list ref_list_node;
> +    /* The lflow. */
> +    struct ovn_lflow *lflow;
> +
> +    /* Index id of the datapath this lflow_ref_node belongs to. */
> +    size_t dp_index;
> +
> +    /* Indicates if the lflow_ref_node for an lflow - L(M, A) is linked
> +     * to datapath(s) or not.
> +     * It is set to true when an lflow L(M, A) is referenced by an lflow ref
> +     * in lflow_table_add_lflow().  It is set to false when it is unlinked
> +     * from the datapath when lflow_ref_unlink_lflows() is called. */
> +    bool linked;
> +};
> +
> +struct lflow_ref *
> +lflow_ref_create(void)
> +{
> +    struct lflow_ref *lflow_ref = xzalloc(sizeof *lflow_ref);
> +    hmap_init(&lflow_ref->lflow_ref_nodes);
> +    return lflow_ref;
> +}
> +
> +void
> +lflow_ref_clear(struct lflow_ref *lflow_ref)
> +{
> +    struct lflow_ref_node *lrn;
> +    HMAP_FOR_EACH_SAFE (lrn, ref_node, &lflow_ref->lflow_ref_nodes) {
> +        lflow_ref_node_destroy(lrn);
> +    }
> +}
> +
> +void
> +lflow_ref_destroy(struct lflow_ref *lflow_ref)
> +{
> +    lflow_ref_clear(lflow_ref);
> +    hmap_destroy(&lflow_ref->lflow_ref_nodes);
> +    free(lflow_ref);
> +}
> +
> +/* Unlinks the lflows referenced by the 'lflow_ref'.
> + * For each lflow_ref_node (lrn) in the lflow_ref, it basically clears
> + * the datapath id (lrn->dp_index) from the lrn->lflow's dpg bitmap.
> + */
> +void
> +lflow_ref_unlink_lflows(struct lflow_ref *lflow_ref)
> +{
> +    struct lflow_ref_node *lrn;
> +
> +    HMAP_FOR_EACH (lrn, ref_node, &lflow_ref->lflow_ref_nodes) {
> +        if (dp_refcnt_release(&lrn->lflow->dp_refcnts_map,
> +                              lrn->dp_index)) {
> +            bitmap_set0(lrn->lflow->dpg_bitmap, lrn->dp_index);
> +        }
> +
> +        lrn->linked = false;
> +    }
> +}
> +
> +bool
> +lflow_ref_resync_flows(struct lflow_ref *lflow_ref,
> +                       struct lflow_table *lflow_table,
> +                       struct ovsdb_idl_txn *ovnsb_txn,
> +                       const struct ovn_datapaths *ls_datapaths,
> +                       const struct ovn_datapaths *lr_datapaths,
> +                       bool ovn_internal_version_changed,
> +                       const struct sbrec_logical_flow_table *sbflow_table,
> +                       const struct sbrec_logical_dp_group_table *dpgrp_table)
> +{
> +    lflow_ref_unlink_lflows(lflow_ref);
> +    return lflow_ref_sync_lflows__(lflow_ref, lflow_table, ovnsb_txn,
> +                                   ls_datapaths, lr_datapaths,
> +                                   ovn_internal_version_changed, sbflow_table,
> +                                   dpgrp_table);
> +}
> +
> +bool
> +lflow_ref_sync_lflows(struct lflow_ref *lflow_ref,
> +                      struct lflow_table *lflow_table,
> +                      struct ovsdb_idl_txn *ovnsb_txn,
> +                      const struct ovn_datapaths *ls_datapaths,
> +                      const struct ovn_datapaths *lr_datapaths,
> +                      bool ovn_internal_version_changed,
> +                      const struct sbrec_logical_flow_table *sbflow_table,
> +                      const struct sbrec_logical_dp_group_table *dpgrp_table)
> +{
> +    return lflow_ref_sync_lflows__(lflow_ref, lflow_table, ovnsb_txn,
> +                                   ls_datapaths, lr_datapaths,
> +                                   ovn_internal_version_changed, sbflow_table,
> +                                   dpgrp_table);
> +}
> +
> +/* Adds a logical flow to the logical flow table for the match 'match'
> + * and actions 'actions'.
> + *
> + * If a logical flow L(M, A) for the 'match' and 'actions' already exists then
> + *   - It will be a no-op if L(M, A) was already added for the same datapath.
> + *   - if it's a different datapath, then the datapath index (od->index)
> + *     is set in the lflow dp group bitmap.
> + *
> + * If 'lflow_ref' is not NULL then
> + *    - it first checks if the lflow is present in the lflow_ref or not
> + *    - if present, then it does nothing
> + *    - if not present, then it creates an lflow_ref_node object for
> + *      the [L(M, A), dp index] and adds it to the lflow_ref hmap.
> + *
> + * Note that this function is not thread safe for 'lflow_ref'.
> + * If two or more threads call this function for the same 'lflow_ref',
> + * then it may corrupt the hmap.  The caller should ensure thread safety
> + * for such scenarios.
> + */
> +void
> +lflow_table_add_lflow(struct lflow_table *lflow_table,
> +                      const struct ovn_datapath *od,
> +                      const unsigned long *dp_bitmap, size_t dp_bitmap_len,
> +                      enum ovn_stage stage, uint16_t priority,
> +                      const char *match, const char *actions,
> +                      const char *io_port, const char *ctrl_meter,
> +                      const struct ovsdb_idl_row *stage_hint,
> +                      const char *where,
> +                      struct lflow_ref *lflow_ref)
> +    OVS_EXCLUDED(fake_hash_mutex)
> +{
> +    struct ovs_mutex *hash_lock;
> +    uint32_t hash;
> +
> +    ovs_assert(!od ||
> +               ovn_stage_to_datapath_type(stage) == ovn_datapath_get_type(od));
> +
> +    hash = ovn_logical_flow_hash(ovn_stage_get_table(stage),
> +                                 ovn_stage_get_pipeline(stage),
> +                                 priority, match,
> +                                 actions);
> +
> +    hash_lock = lflow_hash_lock(&lflow_table->entries, hash);
> +    struct ovn_lflow *lflow =
> +        do_ovn_lflow_add(lflow_table, od, dp_bitmap,
> +                         dp_bitmap_len, hash, stage,
> +                         priority, match, actions,
> +                         io_port, ctrl_meter, stage_hint, where);
> +
> +    if (lflow_ref) {
> +        /* lflow referencing is only supported if 'od' is not NULL. */
> +        ovs_assert(od);
> +
> +        struct lflow_ref_node *lrn =
> +            lflow_ref_node_find(&lflow_ref->lflow_ref_nodes, lflow, hash);
> +        if (!lrn) {
> +            lrn = xzalloc(sizeof *lrn);
> +            lrn->lflow = lflow;
> +            lrn->lflow_ref = lflow_ref;
> +            lrn->dp_index = od->index;
> +            dp_refcnt_use(&lflow->dp_refcnts_map, lrn->dp_index);
> +            ovs_list_insert(&lflow->referenced_by, &lrn->ref_list_node);
> +            hmap_insert(&lflow_ref->lflow_ref_nodes, &lrn->ref_node, hash);
> +        }
> +
> +        lrn->linked = true;
> +    }
> +
> +    lflow_hash_unlock(hash_lock);
> +
> +}
> +
> +void
> +lflow_table_add_lflow_default_drop(struct lflow_table *lflow_table,
> +                                   const struct ovn_datapath *od,
> +                                   enum ovn_stage stage,
> +                                   const char *where,
> +                                   struct lflow_ref *lflow_ref)
> +{
> +    lflow_table_add_lflow(lflow_table, od, NULL, 0, stage, 0, "1",
> +                          debug_drop_action(), NULL, NULL, NULL,
> +                          where, lflow_ref);
> +}
> +
> +/* Given a desired bitmap, finds a datapath group in 'dp_groups'.  If it
> + * doesn't exist, creates a new one and adds it to 'dp_groups'.
> + * If 'sb_group' is provided, the function will try to reuse this group by
> + * either taking it directly or by modifying it, if it's not already in use. */
> +struct ovn_dp_group *
> +ovn_dp_group_get_or_create(struct ovsdb_idl_txn *ovnsb_txn,
> +                           struct hmap *dp_groups,
> +                           struct sbrec_logical_dp_group *sb_group,
> +                           size_t desired_n,
> +                           const unsigned long *desired_bitmap,
> +                           size_t bitmap_len,
> +                           bool is_switch,
> +                           const struct ovn_datapaths *ls_datapaths,
> +                           const struct ovn_datapaths *lr_datapaths)
> +{
> +    struct ovn_dp_group *dpg;
> +
> +    dpg = ovn_dp_group_get(dp_groups, desired_n, desired_bitmap, bitmap_len);
> +    if (dpg) {
> +        return dpg;
> +    }
> +
> +    return ovn_dp_group_create(ovnsb_txn, dp_groups, sb_group, desired_n,
> +                               desired_bitmap, bitmap_len, is_switch,
> +                               ls_datapaths, lr_datapaths);
> +}
> +
> +void
> +ovn_dp_groups_clear(struct hmap *dp_groups)
> +{
> +    struct ovn_dp_group *dpg;
> +    HMAP_FOR_EACH_POP (dpg, node, dp_groups) {
> +        ovn_dp_group_destroy(dpg);
> +    }
> +}
> +
> +void
> +ovn_dp_groups_destroy(struct hmap *dp_groups)
> +{
> +    ovn_dp_groups_clear(dp_groups);
> +    hmap_destroy(dp_groups);
> +}
> +
> +void
> +lflow_hash_lock_init(void)
> +{
> +    if (!lflow_hash_lock_initialized) {
> +        for (size_t i = 0; i < LFLOW_HASH_LOCK_MASK + 1; i++) {
> +            ovs_mutex_init(&lflow_hash_locks[i]);
> +        }
> +        lflow_hash_lock_initialized = true;
> +    }
> +}
> +
> +void
> +lflow_hash_lock_destroy(void)
> +{
> +    if (lflow_hash_lock_initialized) {
> +        for (size_t i = 0; i < LFLOW_HASH_LOCK_MASK + 1; i++) {
> +            ovs_mutex_destroy(&lflow_hash_locks[i]);
> +        }
> +    }
> +    lflow_hash_lock_initialized = false;
> +}
> +
> +/* static functions. */
> +static void
> +ovn_lflow_init(struct ovn_lflow *lflow, struct ovn_datapath *od,
> +               size_t dp_bitmap_len, enum ovn_stage stage, uint16_t priority,
> +               char *match, char *actions, char *io_port, char *ctrl_meter,
> +               char *stage_hint, const char *where)
> +{
> +    lflow->dpg_bitmap = bitmap_allocate(dp_bitmap_len);
> +    lflow->od = od;
> +    lflow->stage = stage;
> +    lflow->priority = priority;
> +    lflow->match = match;
> +    lflow->actions = actions;
> +    lflow->io_port = io_port;
> +    lflow->stage_hint = stage_hint;
> +    lflow->ctrl_meter = ctrl_meter;
> +    lflow->dpg = NULL;
> +    lflow->where = where;
> +    lflow->sb_uuid = UUID_ZERO;
> +    hmap_init(&lflow->dp_refcnts_map);
> +    ovs_list_init(&lflow->referenced_by);
> +}
> +
> +static struct ovs_mutex *
> +lflow_hash_lock(const struct hmap *lflow_table, uint32_t hash)
> +    OVS_ACQUIRES(fake_hash_mutex)
> +    OVS_NO_THREAD_SAFETY_ANALYSIS
> +{
> +    struct ovs_mutex *hash_lock = NULL;
> +
> +    if (parallelization_state == STATE_USE_PARALLELIZATION) {
> +        hash_lock =
> +            &lflow_hash_locks[hash & lflow_table->mask & LFLOW_HASH_LOCK_MASK];
> +        ovs_mutex_lock(hash_lock);
> +    }
> +    return hash_lock;
> +}
> +
> +static void
> +lflow_hash_unlock(struct ovs_mutex *hash_lock)
> +    OVS_RELEASES(fake_hash_mutex)
> +    OVS_NO_THREAD_SAFETY_ANALYSIS
> +{
> +    if (hash_lock) {
> +        ovs_mutex_unlock(hash_lock);
> +    }
> +}
> +
> +static bool
> +ovn_lflow_equal(const struct ovn_lflow *a, enum ovn_stage stage,
> +                uint16_t priority, const char *match,
> +                const char *actions, const char *ctrl_meter)
> +{
> +    return (a->stage == stage
> +            && a->priority == priority
> +            && !strcmp(a->match, match)
> +            && !strcmp(a->actions, actions)
> +            && nullable_string_is_equal(a->ctrl_meter, ctrl_meter));
> +}
> +
> +static struct ovn_lflow *
> +ovn_lflow_find(const struct hmap *lflows,
> +               enum ovn_stage stage, uint16_t priority,
> +               const char *match, const char *actions,
> +               const char *ctrl_meter, uint32_t hash)
> +{
> +    struct ovn_lflow *lflow;
> +    HMAP_FOR_EACH_WITH_HASH (lflow, hmap_node, hash, lflows) {
> +        if (ovn_lflow_equal(lflow, stage, priority, match, actions,
> +                            ctrl_meter)) {
> +            return lflow;
> +        }
> +    }
> +    return NULL;
> +}
> +
> +static char *
> +ovn_lflow_hint(const struct ovsdb_idl_row *row)
> +{
> +    if (!row) {
> +        return NULL;
> +    }
> +    return xasprintf("%08x", row->uuid.parts[0]);
> +}
> +
> +static void
> +ovn_lflow_destroy(struct lflow_table *lflow_table, struct ovn_lflow *lflow)
> +{
> +    hmap_remove(&lflow_table->entries, &lflow->hmap_node);
> +    bitmap_free(lflow->dpg_bitmap);
> +    free(lflow->match);
> +    free(lflow->actions);
> +    free(lflow->io_port);
> +    free(lflow->stage_hint);
> +    free(lflow->ctrl_meter);
> +    ovn_lflow_clear_dp_refcnts_map(lflow);
> +    struct lflow_ref_node *lrn;
> +    LIST_FOR_EACH_SAFE (lrn, ref_list_node, &lflow->referenced_by) {
> +        lflow_ref_node_destroy(lrn);
> +    }
> +    free(lflow);
> +}
> +
> +static struct ovn_lflow *
> +do_ovn_lflow_add(struct lflow_table *lflow_table,
> +                 const struct ovn_datapath *od,
> +                 const unsigned long *dp_bitmap, size_t dp_bitmap_len,
> +                 uint32_t hash, enum ovn_stage stage, uint16_t priority,
> +                 const char *match, const char *actions,
> +                 const char *io_port, const char *ctrl_meter,
> +                 const struct ovsdb_idl_row *stage_hint,
> +                 const char *where)
> +    OVS_REQUIRES(fake_hash_mutex)
> +{
> +    struct ovn_lflow *old_lflow;
> +    struct ovn_lflow *lflow;
> +
> +    size_t bitmap_len = od ? ods_size(od->datapaths) : dp_bitmap_len;
> +    ovs_assert(bitmap_len);
> +
> +    old_lflow = ovn_lflow_find(&lflow_table->entries, stage,
> +                               priority, match, actions, ctrl_meter, hash);
> +    if (old_lflow) {
> +        ovn_dp_group_add_with_reference(old_lflow, od, dp_bitmap,
> +                                        bitmap_len);
> +        return old_lflow;
> +    }
> +
> +    lflow = xzalloc(sizeof *lflow);
> +    /* While adding new logical flows we're not setting a single datapath, but
> +     * collecting a group.  'od' will be updated later for all flows with only
> +     * one datapath in a group, so it could be hashed correctly. */
> +    ovn_lflow_init(lflow, NULL, bitmap_len, stage, priority,
> +                   xstrdup(match), xstrdup(actions),
> +                   io_port ? xstrdup(io_port) : NULL,
> +                   nullable_xstrdup(ctrl_meter),
> +                   ovn_lflow_hint(stage_hint), where);
> +
> +    ovn_dp_group_add_with_reference(lflow, od, dp_bitmap, bitmap_len);
> +
> +    if (parallelization_state != STATE_USE_PARALLELIZATION) {
> +        hmap_insert(&lflow_table->entries, &lflow->hmap_node, hash);
> +    } else {
> +        hmap_insert_fast(&lflow_table->entries, &lflow->hmap_node,
> +                         hash);
> +        thread_lflow_counter++;
> +    }
> +
> +    return lflow;
> +}
> +
> +static bool
> +sync_lflow_to_sb(struct ovn_lflow *lflow,
> +                 struct ovsdb_idl_txn *ovnsb_txn,
> +                 struct lflow_table *lflow_table,
> +                 const struct ovn_datapaths *ls_datapaths,
> +                 const struct ovn_datapaths *lr_datapaths,
> +                 bool ovn_internal_version_changed,
> +                 const struct sbrec_logical_flow *sbflow,
> +                 const struct sbrec_logical_dp_group_table *sb_dpgrp_table)
> +{
> +    struct sbrec_logical_dp_group *sbrec_dp_group = NULL;
> +    struct ovn_dp_group *pre_sync_dpg = lflow->dpg;
> +    struct ovn_datapath **datapaths_array;
> +    struct hmap *dp_groups;
> +    size_t n_datapaths;
> +    bool is_switch;
> +
> +    if (ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH) {
> +        n_datapaths = ods_size(ls_datapaths);
> +        datapaths_array = ls_datapaths->array;
> +        dp_groups = &lflow_table->ls_dp_groups;
> +        is_switch = true;
> +    } else {
> +        n_datapaths = ods_size(lr_datapaths);
> +        datapaths_array = lr_datapaths->array;
> +        dp_groups = &lflow_table->lr_dp_groups;
> +        is_switch = false;
> +    }
> +
> +    lflow->n_ods = bitmap_count1(lflow->dpg_bitmap, n_datapaths);
> +    ovs_assert(lflow->n_ods);
> +
> +    if (lflow->n_ods == 1) {
> +        /* There is only one datapath, so it should be moved out of the
> +         * group to a single 'od'. */
> +        size_t index = bitmap_scan(lflow->dpg_bitmap, true, 0,
> +                                    n_datapaths);
> +
> +        lflow->od = datapaths_array[index];
> +        lflow->dpg = NULL;
> +    } else {
> +        lflow->od = NULL;
> +    }
> +
> +    if (!sbflow) {
> +        lflow->sb_uuid = uuid_random();
> +        sbflow = sbrec_logical_flow_insert_persist_uuid(ovnsb_txn,
> +                                                        &lflow->sb_uuid);
> +        const char *pipeline = ovn_stage_get_pipeline_name(lflow->stage);
> +        uint8_t table = ovn_stage_get_table(lflow->stage);
> +        sbrec_logical_flow_set_pipeline(sbflow, pipeline);
> +        sbrec_logical_flow_set_table_id(sbflow, table);
> +        sbrec_logical_flow_set_priority(sbflow, lflow->priority);
> +        sbrec_logical_flow_set_match(sbflow, lflow->match);
> +        sbrec_logical_flow_set_actions(sbflow, lflow->actions);
> +        if (lflow->io_port) {
> +            struct smap tags = SMAP_INITIALIZER(&tags);
> +            smap_add(&tags, "in_out_port", lflow->io_port);
> +            sbrec_logical_flow_set_tags(sbflow, &tags);
> +            smap_destroy(&tags);
> +        }
> +        sbrec_logical_flow_set_controller_meter(sbflow, lflow->ctrl_meter);
> +
> +        /* Trim the source locator lflow->where, which looks something like
> +         * "ovn/northd/northd.c:1234", down to just the part following the
> +         * last slash, e.g. "northd.c:1234". */
> +        const char *slash = strrchr(lflow->where, '/');
> +#if _WIN32
> +        const char *backslash = strrchr(lflow->where, '\\');
> +        if (!slash || backslash > slash) {
> +            slash = backslash;
> +        }
> +#endif
> +        const char *where = slash ? slash + 1 : lflow->where;
> +
> +        struct smap ids = SMAP_INITIALIZER(&ids);
> +        smap_add(&ids, "stage-name", ovn_stage_to_str(lflow->stage));
> +        smap_add(&ids, "source", where);
> +        if (lflow->stage_hint) {
> +            smap_add(&ids, "stage-hint", lflow->stage_hint);
> +        }
> +        sbrec_logical_flow_set_external_ids(sbflow, &ids);
> +        smap_destroy(&ids);
> +
> +    } else {
> +        lflow->sb_uuid = sbflow->header_.uuid;
> +        sbrec_dp_group = sbflow->logical_dp_group;
> +
> +        if (ovn_internal_version_changed) {
> +            const char *stage_name = smap_get_def(&sbflow->external_ids,
> +                                                  "stage-name", "");
> +            const char *stage_hint = smap_get_def(&sbflow->external_ids,
> +                                                  "stage-hint", "");
> +            const char *source = smap_get_def(&sbflow->external_ids,
> +                                              "source", "");
> +
> +            if (strcmp(stage_name, ovn_stage_to_str(lflow->stage))) {
> +                sbrec_logical_flow_update_external_ids_setkey(
> +                    sbflow, "stage-name", ovn_stage_to_str(lflow->stage));
> +            }
> +            if (lflow->stage_hint) {
> +                if (strcmp(stage_hint, lflow->stage_hint)) {
> +                    sbrec_logical_flow_update_external_ids_setkey(
> +                        sbflow, "stage-hint", lflow->stage_hint);
> +                }
> +            }
> +            if (lflow->where) {
> +
> +                /* Trim the source locator lflow->where, which looks something
> +                 * like "ovn/northd/northd.c:1234", down to just the part
> +                 * following the last slash, e.g. "northd.c:1234". */
> +                const char *slash = strrchr(lflow->where, '/');
> +#if _WIN32
> +                const char *backslash = strrchr(lflow->where, '\\');
> +                if (!slash || backslash > slash) {
> +                    slash = backslash;
> +                }
> +#endif
> +                const char *where = slash ? slash + 1 : lflow->where;
> +
> +                if (strcmp(source, where)) {
> +                    sbrec_logical_flow_update_external_ids_setkey(
> +                        sbflow, "source", where);
> +                }
> +            }
> +        }
> +    }
> +
> +    if (lflow->od) {
> +        sbrec_logical_flow_set_logical_datapath(sbflow, lflow->od->sb);
> +        sbrec_logical_flow_set_logical_dp_group(sbflow, NULL);
> +    } else {
> +        sbrec_logical_flow_set_logical_datapath(sbflow, NULL);
> +        lflow->dpg = ovn_dp_group_get(dp_groups, lflow->n_ods,
> +                                      lflow->dpg_bitmap,
> +                                      n_datapaths);
> +        if (lflow->dpg) {
> +            /* Update the dpg's sb dp_group. */
> +            lflow->dpg->dp_group = sbrec_logical_dp_group_table_get_for_uuid(
> +                sb_dpgrp_table,
> +                &lflow->dpg->dpg_uuid);
> +
> +            if (!lflow->dpg->dp_group) {
> +                /* Ideally this should not happen.  But it can still happen
> +                 * for two reasons:
> +                 * 1. There is a bug in the dp_group management.  We should
> +                 *    perhaps assert here.
> +                 * 2. A user or CMS may delete the Logical_DP_Group rows in
> +                 *    the SB DB or clear the SB:Logical_flow.logical_dp_group
> +                 *    column (intentionally or accidentally).
> +                 *
> +                 * Because of (2) it is better to return false instead of
> +                 * asserting, so that we recover from the inconsistent SB DB.
> +                 */
> +                static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1);
> +                VLOG_WARN_RL(&rl, "SB Logical flow ["UUID_FMT"]'s "
> +                            "logical_dp_group column is not set "
> +                            "(which is unexpected).  It should have been "
> +                            "referencing the dp group ["UUID_FMT"]",
> +                            UUID_ARGS(&sbflow->header_.uuid),
> +                            UUID_ARGS(&lflow->dpg->dpg_uuid));
> +                return false;
> +            }
> +        } else {
> +            lflow->dpg = ovn_dp_group_create(
> +                                ovnsb_txn, dp_groups, sbrec_dp_group,
> +                                lflow->n_ods, lflow->dpg_bitmap,
> +                                n_datapaths, is_switch,
> +                                ls_datapaths,
> +                                lr_datapaths);
> +        }
> +        sbrec_logical_flow_set_logical_dp_group(sbflow,
> +                                                lflow->dpg->dp_group);
> +    }
> +
> +    if (pre_sync_dpg != lflow->dpg) {
> +        ovn_dp_group_use(lflow->dpg);
> +        ovn_dp_group_release(dp_groups, pre_sync_dpg);
> +    }
> +
> +    return true;
> +}
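
The "source" external_id above is built by trimming the compile-time file path
down to its basename plus line number.  A minimal self-contained sketch of that
trimming (plain C, not OVN code; trim_source_locator() is an illustrative name):

    #include <stdio.h>
    #include <string.h>

    /* Returns the portion of 'where' after the last path separator,
     * e.g. "ovn/northd/northd.c:1234" -> "northd.c:1234". */
    static const char *
    trim_source_locator(const char *where)
    {
        const char *slash = strrchr(where, '/');
    #ifdef _WIN32
        const char *backslash = strrchr(where, '\\');
        if (!slash || backslash > slash) {
            slash = backslash;
        }
    #endif
        return slash ? slash + 1 : where;
    }

    int
    main(void)
    {
        /* Prints "northd.c:1234". */
        printf("%s\n", trim_source_locator("ovn/northd/northd.c:1234"));
        return 0;
    }
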
> +
> +static struct ovn_dp_group *
> +ovn_dp_group_find(const struct hmap *dp_groups,
> +                  const unsigned long *dpg_bitmap, size_t bitmap_len,
> +                  uint32_t hash)
> +{
> +    struct ovn_dp_group *dpg;
> +
> +    HMAP_FOR_EACH_WITH_HASH (dpg, node, hash, dp_groups) {
> +        if (bitmap_equal(dpg->bitmap, dpg_bitmap, bitmap_len)) {
> +            return dpg;
> +        }
> +    }
> +    return NULL;
> +}
> +
> +static void
> +ovn_dp_group_use(struct ovn_dp_group *dpg)
> +{
> +    if (dpg) {
> +        dpg->refcnt++;
> +    }
> +}
> +
> +static void
> +ovn_dp_group_release(struct hmap *dp_groups, struct ovn_dp_group *dpg)
> +{
> +    if (dpg && !--dpg->refcnt) {
> +        hmap_remove(dp_groups, &dpg->node);
> +        ovn_dp_group_destroy(dpg);
> +    }
> +}
> +
> +/* Destroys the ovn_dp_group and frees the memory.
> + * Caller should remove the dpg->node from the hmap before
> + * calling this. */
> +static void
> +ovn_dp_group_destroy(struct ovn_dp_group *dpg)
> +{
> +    bitmap_free(dpg->bitmap);
> +    free(dpg);
> +}
> +
> +static struct sbrec_logical_dp_group *
> +ovn_sb_insert_or_update_logical_dp_group(
> +                            struct ovsdb_idl_txn *ovnsb_txn,
> +                            struct sbrec_logical_dp_group *dp_group,
> +                            const unsigned long *dpg_bitmap,
> +                            const struct ovn_datapaths *datapaths)
> +{
> +    const struct sbrec_datapath_binding **sb;
> +    size_t n = 0, index;
> +
> +    sb = xmalloc(bitmap_count1(dpg_bitmap, ods_size(datapaths)) * sizeof *sb);
> +    BITMAP_FOR_EACH_1 (index, ods_size(datapaths), dpg_bitmap) {
> +        sb[n++] = datapaths->array[index]->sb;
> +    }
> +    if (!dp_group) {
> +        struct uuid dpg_uuid = uuid_random();
> +        dp_group = sbrec_logical_dp_group_insert_persist_uuid(
> +            ovnsb_txn, &dpg_uuid);
> +    }
> +    sbrec_logical_dp_group_set_datapaths(
> +        dp_group, (struct sbrec_datapath_binding **) sb, n);
> +    free(sb);
> +
> +    return dp_group;
> +}
> +
> +static struct ovn_dp_group *
> +ovn_dp_group_get(struct hmap *dp_groups, size_t desired_n,
> +                 const unsigned long *desired_bitmap,
> +                 size_t bitmap_len)
> +{
> +    uint32_t hash;
> +
> +    hash = hash_int(desired_n, 0);
> +    return ovn_dp_group_find(dp_groups, desired_bitmap, bitmap_len, hash);
> +}
> +
> +/* Creates a new datapath group and adds it to 'dp_groups'.
> + * If 'sb_group' is provided, the function tries to reuse it, either by
> + * taking it as-is or by modifying it if it is not already in use.
> + * The caller should call ovn_dp_group_get() first to make sure an
> + * equivalent group does not already exist. */
> +static struct ovn_dp_group *
> +ovn_dp_group_create(struct ovsdb_idl_txn *ovnsb_txn,
> +                    struct hmap *dp_groups,
> +                    struct sbrec_logical_dp_group *sb_group,
> +                    size_t desired_n,
> +                    const unsigned long *desired_bitmap,
> +                    size_t bitmap_len,
> +                    bool is_switch,
> +                    const struct ovn_datapaths *ls_datapaths,
> +                    const struct ovn_datapaths *lr_datapaths)
> +{
> +    struct ovn_dp_group *dpg;
> +
> +    bool update_dp_group = false, can_modify = false;
> +    unsigned long *dpg_bitmap;
> +    size_t i, n = 0;
> +
> +    dpg_bitmap = sb_group ? bitmap_allocate(bitmap_len) : NULL;
> +    for (i = 0; sb_group && i < sb_group->n_datapaths; i++) {
> +        struct ovn_datapath *datapath_od;
> +
> +        datapath_od = ovn_datapath_from_sbrec(
> +                        ls_datapaths ? &ls_datapaths->datapaths : NULL,
> +                        lr_datapaths ? &lr_datapaths->datapaths : NULL,
> +                        sb_group->datapaths[i]);
> +        if (!datapath_od || ovn_datapath_is_stale(datapath_od)) {
> +            break;
> +        }
> +        bitmap_set1(dpg_bitmap, datapath_od->index);
> +        n++;
> +    }
> +    if (!sb_group || i != sb_group->n_datapaths) {
> +        /* No group or stale group.  Not going to be used. */
> +        update_dp_group = true;
> +        can_modify = true;
> +    } else if (!bitmap_equal(dpg_bitmap, desired_bitmap, bitmap_len)) {
> +        /* The group in Sb is different. */
> +        update_dp_group = true;
> +        /* We can modify existing group if it's not already in use. */
> +        can_modify = !ovn_dp_group_find(dp_groups, dpg_bitmap,
> +                                        bitmap_len, hash_int(n, 0));
> +    }
> +
> +    bitmap_free(dpg_bitmap);
> +
> +    dpg = xzalloc(sizeof *dpg);
> +    dpg->bitmap = bitmap_clone(desired_bitmap, bitmap_len);
> +    if (!update_dp_group) {
> +        dpg->dp_group = sb_group;
> +    } else {
> +        dpg->dp_group = ovn_sb_insert_or_update_logical_dp_group(
> +                            ovnsb_txn,
> +                            can_modify ? sb_group : NULL,
> +                            desired_bitmap,
> +                            is_switch ? ls_datapaths : lr_datapaths);
> +    }
> +    dpg->dpg_uuid = dpg->dp_group->header_.uuid;
> +    hmap_insert(dp_groups, &dpg->node, hash_int(desired_n, 0));
> +
> +    return dpg;
> +}
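
A condensed sketch of the lookup-then-create calling pattern for these two
helpers, mirroring how sync_lflow_to_sb() above uses them (not a standalone
program; it assumes the surrounding northd variables such as 'dp_groups',
'dpg_bitmap', 'sbrec_dp_group' and 'sbflow'):

    struct ovn_dp_group *dpg =
        ovn_dp_group_get(dp_groups, n_ods, dpg_bitmap, n_datapaths);
    if (!dpg) {
        /* No equivalent group in the lflow table yet: create one, possibly
         * reusing the existing SB row 'sbrec_dp_group' if it is not already
         * in use by another group. */
        dpg = ovn_dp_group_create(ovnsb_txn, dp_groups, sbrec_dp_group,
                                  n_ods, dpg_bitmap, n_datapaths, is_switch,
                                  ls_datapaths, lr_datapaths);
    }
    sbrec_logical_flow_set_logical_dp_group(sbflow, dpg->dp_group);
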
> +
> +/* Adds an OVN datapath to a datapath group of existing logical flow.
> + * Version to use when hash bucket locking is NOT required or the corresponding
> + * hash lock is already taken. */
> +static void
> +ovn_dp_group_add_with_reference(struct ovn_lflow *lflow_ref,
> +                                const struct ovn_datapath *od,
> +                                const unsigned long *dp_bitmap,
> +                                size_t bitmap_len)
> +    OVS_REQUIRES(fake_hash_mutex)
> +{
> +    if (od) {
> +        bitmap_set1(lflow_ref->dpg_bitmap, od->index);
> +    }
> +    if (dp_bitmap) {
> +        bitmap_or(lflow_ref->dpg_bitmap, dp_bitmap, bitmap_len);
> +    }
> +}
> +
> +static bool
> +lflow_ref_sync_lflows__(struct lflow_ref  *lflow_ref,
> +                        struct lflow_table *lflow_table,
> +                        struct ovsdb_idl_txn *ovnsb_txn,
> +                        const struct ovn_datapaths *ls_datapaths,
> +                        const struct ovn_datapaths *lr_datapaths,
> +                        bool ovn_internal_version_changed,
> +                        const struct sbrec_logical_flow_table *sbflow_table,
> +                        const struct sbrec_logical_dp_group_table *dpgrp_table)
> +{
> +    struct lflow_ref_node *lrn;
> +    struct ovn_lflow *lflow;
> +    HMAP_FOR_EACH_SAFE (lrn, ref_node, &lflow_ref->lflow_ref_nodes) {
> +        lflow = lrn->lflow;
> +        const struct sbrec_logical_flow *sblflow =
> +            sbrec_logical_flow_table_get_for_uuid(sbflow_table,
> +                                                  &lflow->sb_uuid);
> +
> +        struct hmap *dp_groups = NULL;
> +        size_t n_datapaths;
> +        if (ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH) {
> +            dp_groups = &lflow_table->ls_dp_groups;
> +            n_datapaths = ods_size(ls_datapaths);
> +        } else {
> +            dp_groups = &lflow_table->lr_dp_groups;
> +            n_datapaths = ods_size(lr_datapaths);
> +        }
> +
> +        size_t n_ods = bitmap_count1(lflow->dpg_bitmap, n_datapaths);
> +
> +        if (n_ods) {
> +            if (!sync_lflow_to_sb(lflow, ovnsb_txn, lflow_table, ls_datapaths,
> +                                  lr_datapaths, ovn_internal_version_changed,
> +                                  sblflow, dpgrp_table)) {
> +                return false;
> +            }
> +        }
> +
> +        if (!lrn->linked) {
> +            lflow_ref_node_destroy(lrn);
> +
> +            if (ovs_list_is_empty(&lflow->referenced_by)) {
> +                ovn_dp_group_release(dp_groups, lflow->dpg);
> +                ovn_lflow_destroy(lflow_table, lflow);
> +                if (sblflow) {
> +                    sbrec_logical_flow_delete(sblflow);
> +                }
> +            }
> +        }
> +    }
> +
> +    return true;
> +}
> +
> +/* Used for per-datapath reference counting of a given 'struct ovn_lflow'.
> + * See the hmap 'dp_refcnts_map' in 'struct ovn_lflow'.
> + * A given lflow L(M, A), with match 'M' and actions 'A', can be referenced
> + * by multiple lflow_refs for the same datapath.
> + * E.g. two lflow_refs of a datapath - op->lflow_ref and
> + * op->stateful_lflow_ref - can both reference the same lflow L(M, A).  In
> + * this case it is important to maintain this reference count so that the
> + * sync to the SB logical_flow table is correct. */
> +struct dp_refcnt {
> +    struct hmap_node key_node;
> +
> +    size_t dp_index; /* Datapath index.  Also used as the hmap key. */
> +    size_t refcnt;   /* Reference count. */
> +};
> +
> +static struct dp_refcnt *
> +dp_refcnt_find(struct hmap *dp_refcnts_map, size_t dp_index)
> +{
> +    struct dp_refcnt *dp_refcnt;
> +    HMAP_FOR_EACH_WITH_HASH (dp_refcnt, key_node, dp_index, dp_refcnts_map) {
> +        if (dp_refcnt->dp_index == dp_index) {
> +            return dp_refcnt;
> +        }
> +    }
> +
> +    return NULL;
> +}
> +
> +static void
> +dp_refcnt_use(struct hmap *dp_refcnts_map, size_t dp_index)
> +{
> +    struct dp_refcnt *dp_refcnt = dp_refcnt_find(dp_refcnts_map, dp_index);
> +
> +    if (!dp_refcnt) {
> +        dp_refcnt = xzalloc(sizeof *dp_refcnt);
> +        dp_refcnt->dp_index = dp_index;
> +
> +        hmap_insert(dp_refcnts_map, &dp_refcnt->key_node, dp_index);
> +    }
> +
> +    dp_refcnt->refcnt++;
> +}
> +
> +/* Decrements the refcnt of 'dp_index' in 'dp_refcnts_map' if it exists.
> + * Returns true if the refcnt drops to 0 or if no entry exists. */
> +static bool
> +dp_refcnt_release(struct hmap *dp_refcnts_map, size_t dp_index)
> +{
> +    struct dp_refcnt *dp_refcnt = dp_refcnt_find(dp_refcnts_map, dp_index);
> +    if (!dp_refcnt) {
> +        return true;
> +    }
> +
> +    if (!--dp_refcnt->refcnt) {
> +        hmap_remove(dp_refcnts_map, &dp_refcnt->key_node);
> +        free(dp_refcnt);
> +        return true;
> +    }
> +
> +    return false;
> +}
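
A short usage sketch of the two helpers above, assuming an lflow whose
'dp_refcnts_map' is already initialized and a datapath index 'dp_index'
(a fragment, not a standalone program).  The point is that only the last
release for a given datapath index reports that the lflow no longer needs
that datapath:

    /* Two lflow_refs of the same datapath reference the same lflow. */
    dp_refcnt_use(&lflow->dp_refcnts_map, dp_index);    /* refcnt -> 1 */
    dp_refcnt_use(&lflow->dp_refcnts_map, dp_index);    /* refcnt -> 2 */

    /* First release: the lflow is still needed for this datapath. */
    bool released = dp_refcnt_release(&lflow->dp_refcnts_map, dp_index);
    /* released == false */

    /* Second release: the last reference for this datapath is gone. */
    released = dp_refcnt_release(&lflow->dp_refcnts_map, dp_index);
    /* released == true */
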
> +
> +static void
> +ovn_lflow_clear_dp_refcnts_map(struct ovn_lflow *lflow)
> +{
> +    struct dp_refcnt *dp_refcnt;
> +
> +    HMAP_FOR_EACH_POP (dp_refcnt, key_node, &lflow->dp_refcnts_map) {
> +        free(dp_refcnt);
> +    }
> +
> +    hmap_destroy(&lflow->dp_refcnts_map);
> +}
> +
> +static struct lflow_ref_node *
> +lflow_ref_node_find(struct hmap *lflow_ref_nodes, struct ovn_lflow *lflow,
> +                    uint32_t lflow_hash)
> +{
> +    struct lflow_ref_node *lrn;
> +    HMAP_FOR_EACH_WITH_HASH (lrn, ref_node, lflow_hash, lflow_ref_nodes) {
> +        if (lrn->lflow == lflow) {
> +            return lrn;
> +        }
> +    }
> +
> +    return NULL;
> +}
> +
> +static void
> +lflow_ref_node_destroy(struct lflow_ref_node *lrn)
> +{
> +    hmap_remove(&lrn->lflow_ref->lflow_ref_nodes, &lrn->ref_node);
> +    ovs_list_remove(&lrn->ref_list_node);
> +    free(lrn);
> +}
> diff --git a/northd/lflow-mgr.h b/northd/lflow-mgr.h
> new file mode 100644
> index 0000000000..211d6d9d36
> --- /dev/null
> +++ b/northd/lflow-mgr.h
> @@ -0,0 +1,186 @@
> +/*
> + * Copyright (c) 2024, Red Hat, Inc.
> + *
> + * Licensed under the Apache License, Version 2.0 (the "License");
> + * you may not use this file except in compliance with the License.
> + * You may obtain a copy of the License at:
> + *
> + *     http://www.apache.org/licenses/LICENSE-2.0
> + *
> + * Unless required by applicable law or agreed to in writing, software
> + * distributed under the License is distributed on an "AS IS" BASIS,
> + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> + * See the License for the specific language governing permissions and
> + * limitations under the License.
> + */
> +#ifndef LFLOW_MGR_H
> +#define LFLOW_MGR_H 1
> +
> +#include "include/openvswitch/hmap.h"
> +#include "include/openvswitch/uuid.h"
> +
> +#include "northd.h"
> +
> +struct ovsdb_idl_txn;
> +struct ovn_datapath;
> +struct ovsdb_idl_row;
> +
> +/* The lflow table, which stores the logical flows. */
> +struct lflow_table;
> +struct lflow_table *lflow_table_alloc(void);
> +void lflow_table_init(struct lflow_table *);
> +void lflow_table_clear(struct lflow_table *);
> +void lflow_table_destroy(struct lflow_table *);
> +void lflow_table_expand(struct lflow_table *);
> +void lflow_table_set_size(struct lflow_table *, size_t);
> +void lflow_table_sync_to_sb(struct lflow_table *,
> +                            struct ovsdb_idl_txn *ovnsb_txn,
> +                            const struct ovn_datapaths *ls_datapaths,
> +                            const struct ovn_datapaths *lr_datapaths,
> +                            bool ovn_internal_version_changed,
> +                            const struct sbrec_logical_flow_table *,
> +                            const struct sbrec_logical_dp_group_table *);
> +
> +void lflow_hash_lock_init(void);
> +void lflow_hash_lock_destroy(void);
> +
> +/* An lflow_ref tracks the logical flows generated for a resource
> + * (such as a logical port or datapath). */
> +struct lflow_ref;
> +
> +struct lflow_ref *lflow_ref_create(void);
> +void lflow_ref_destroy(struct lflow_ref *);
> +void lflow_ref_clear(struct lflow_ref *lflow_ref);
> +void lflow_ref_unlink_lflows(struct lflow_ref *);
> +bool lflow_ref_resync_flows(struct lflow_ref *,
> +                            struct lflow_table *lflow_table,
> +                            struct ovsdb_idl_txn *ovnsb_txn,
> +                            const struct ovn_datapaths *ls_datapaths,
> +                            const struct ovn_datapaths *lr_datapaths,
> +                            bool ovn_internal_version_changed,
> +                            const struct sbrec_logical_flow_table *,
> +                            const struct sbrec_logical_dp_group_table *);
> +bool lflow_ref_sync_lflows(struct lflow_ref *,
> +                           struct lflow_table *lflow_table,
> +                           struct ovsdb_idl_txn *ovnsb_txn,
> +                           const struct ovn_datapaths *ls_datapaths,
> +                           const struct ovn_datapaths *lr_datapaths,
> +                           bool ovn_internal_version_changed,
> +                           const struct sbrec_logical_flow_table *,
> +                           const struct sbrec_logical_dp_group_table *);
> +
> +
> +void lflow_table_add_lflow(struct lflow_table *, const struct ovn_datapath *,
> +                           const unsigned long *dp_bitmap,
> +                           size_t dp_bitmap_len, enum ovn_stage stage,
> +                           uint16_t priority, const char *match,
> +                           const char *actions, const char *io_port,
> +                           const char *ctrl_meter,
> +                           const struct ovsdb_idl_row *stage_hint,
> +                           const char *where, struct lflow_ref *);
> +void lflow_table_add_lflow_default_drop(struct lflow_table *,
> +                                        const struct ovn_datapath *,
> +                                        enum ovn_stage stage,
> +                                        const char *where,
> +                                        struct lflow_ref *);
> +
> +/* Adds a logical flow with the specified contents to the lflow table. */
> +#define ovn_lflow_add_with_hint__(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, \
> +                                  ACTIONS, IN_OUT_PORT, CTRL_METER, \
> +                                  STAGE_HINT) \
> +    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
> +                          ACTIONS, IN_OUT_PORT, CTRL_METER, STAGE_HINT, \
> +                          OVS_SOURCE_LOCATOR, NULL)
> +
> +#define ovn_lflow_add_with_lflow_ref_hint__(LFLOW_TABLE, OD, STAGE, PRIORITY, \
> +                                            MATCH, ACTIONS, IN_OUT_PORT, \
> +                                            CTRL_METER, STAGE_HINT, LFLOW_REF)\
> +    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
> +                          ACTIONS, IN_OUT_PORT, CTRL_METER, STAGE_HINT, \
> +                          OVS_SOURCE_LOCATOR, LFLOW_REF)
> +
> +#define ovn_lflow_add_with_hint(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, \
> +                                ACTIONS, STAGE_HINT) \
> +    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
> +                          ACTIONS, NULL, NULL, STAGE_HINT,  \
> +                          OVS_SOURCE_LOCATOR, NULL)
> +
> +#define ovn_lflow_add_with_lflow_ref_hint(LFLOW_TABLE, OD, STAGE, PRIORITY, \
> +                                          MATCH, ACTIONS, STAGE_HINT, \
> +                                          LFLOW_REF) \
> +    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
> +                          ACTIONS, NULL, NULL, STAGE_HINT,  \
> +                          OVS_SOURCE_LOCATOR, LFLOW_REF)
> +
> +#define ovn_lflow_add_with_dp_group(LFLOW_TABLE, DP_BITMAP, DP_BITMAP_LEN, \
> +                                    STAGE, PRIORITY, MATCH, ACTIONS, \
> +                                    STAGE_HINT) \
> +    lflow_table_add_lflow(LFLOW_TABLE, NULL, DP_BITMAP, DP_BITMAP_LEN, STAGE, \
> +                          PRIORITY, MATCH, ACTIONS, NULL, NULL, STAGE_HINT, \
> +                          OVS_SOURCE_LOCATOR, NULL)
> +
> +#define ovn_lflow_add_default_drop(LFLOW_TABLE, OD, STAGE)                    \
> +    lflow_table_add_lflow_default_drop(LFLOW_TABLE, OD, STAGE, \
> +                                       OVS_SOURCE_LOCATOR, NULL)
> +
> +
> +/* This macro is similar to ovn_lflow_add_with_hint, except that it requires
> + * the IN_OUT_PORT argument, which specifies the name of the lport that
> + * appears in the MATCH.  This lets ovn-controller skip parsing the lflow
> + * when that lport is not local to the chassis.  The criteria for the lport
> + * to be passed via this argument:
> + *
> + * - For the ingress pipeline, the lport that is used to match "inport".
> + * - For the egress pipeline, the lport that is used to match "outport".
> + *
> + * For now, only LS pipelines should use this macro.  */
> +#define ovn_lflow_add_with_lport_and_hint(LFLOW_TABLE, OD, STAGE, PRIORITY, \
> +                                          MATCH, ACTIONS, IN_OUT_PORT, \
> +                                          STAGE_HINT, LFLOW_REF) \
> +    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
> +                          ACTIONS, IN_OUT_PORT, NULL, STAGE_HINT, \
> +                          OVS_SOURCE_LOCATOR, LFLOW_REF)
> +
> +#define ovn_lflow_add(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, ACTIONS) \
> +    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
> +                          ACTIONS, NULL, NULL, NULL, OVS_SOURCE_LOCATOR, NULL)
> +
> +#define ovn_lflow_add_with_lflow_ref(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, \
> +                                     ACTIONS, LFLOW_REF) \
> +    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
> +                          ACTIONS, NULL, NULL, NULL, OVS_SOURCE_LOCATOR, \
> +                          LFLOW_REF)
> +
> +#define ovn_lflow_metered(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, ACTIONS, \
> +                          CTRL_METER) \
> +    ovn_lflow_add_with_hint__(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, \
> +                              ACTIONS, NULL, CTRL_METER, NULL)
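
As a hypothetical usage example of the wrapper macros above (the lflow table
'lflows', datapath 'od', port 'op', stages, priorities, matches and actions
here are purely illustrative):

    /* Flow owned by a single datapath, not tracked by any lflow_ref. */
    ovn_lflow_add(lflows, od, S_SWITCH_IN_L2_LKUP, 0, "1",
                  "outport = get_fdb(eth.dst); next;");

    /* Flow tracked by the logical port's lflow_ref, so it can be cleared
     * and re-synced when the port is incrementally processed. */
    ovn_lflow_add_with_lflow_ref(lflows, op->od, S_SWITCH_IN_ARP_ND_RSP, 100,
                                 "arp.op == 1", "next;", op->lflow_ref);
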
> +
> +struct sbrec_logical_dp_group;
> +
> +struct ovn_dp_group {
> +    unsigned long *bitmap;
> +    const struct sbrec_logical_dp_group *dp_group;
> +    struct uuid dpg_uuid;
> +    struct hmap_node node;
> +    size_t refcnt;
> +};
> +
> +static inline void
> +ovn_dp_groups_init(struct hmap *dp_groups)
> +{
> +    hmap_init(dp_groups);
> +}
> +
> +void ovn_dp_groups_clear(struct hmap *dp_groups);
> +void ovn_dp_groups_destroy(struct hmap *dp_groups);
> +struct ovn_dp_group *ovn_dp_group_get_or_create(
> +    struct ovsdb_idl_txn *ovnsb_txn, struct hmap *dp_groups,
> +    struct sbrec_logical_dp_group *sb_group,
> +    size_t desired_n, const unsigned long *desired_bitmap,
> +    size_t bitmap_len, bool is_switch,
> +    const struct ovn_datapaths *ls_datapaths,
> +    const struct ovn_datapaths *lr_datapaths);
> +
> +#endif /* LFLOW_MGR_H */
> \ No newline at end of file
> diff --git a/northd/northd.c b/northd/northd.c
> index 467056053f..76004256f1 100644
> --- a/northd/northd.c
> +++ b/northd/northd.c
> @@ -41,6 +41,7 @@
>  #include "lib/ovn-sb-idl.h"
>  #include "lib/ovn-util.h"
>  #include "lib/lb.h"
> +#include "lflow-mgr.h"
>  #include "memory.h"
>  #include "northd.h"
>  #include "en-lb-data.h"
> @@ -68,7 +69,7 @@
>  VLOG_DEFINE_THIS_MODULE(northd);
>
>  static bool controller_event_en;
> -static bool lflow_hash_lock_initialized = false;
> +
>
>  static bool check_lsp_is_up;
>
> @@ -97,116 +98,6 @@ static bool default_acl_drop;
>
>  #define MAX_OVN_TAGS 4096
>
> -/* Pipeline stages. */
> -
> -/* The two purposes for which ovn-northd uses OVN logical datapaths. */
> -enum ovn_datapath_type {
> -    DP_SWITCH,                  /* OVN logical switch. */
> -    DP_ROUTER                   /* OVN logical router. */
> -};
> -
> -/* Returns an "enum ovn_stage" built from the arguments.
> - *
> - * (It's better to use ovn_stage_build() for type-safety reasons, but inline
> - * functions can't be used in enums or switch cases.) */
> -#define OVN_STAGE_BUILD(DP_TYPE, PIPELINE, TABLE) \
> -    (((DP_TYPE) << 9) | ((PIPELINE) << 8) | (TABLE))
> -
> -/* A stage within an OVN logical switch or router.
> - *
> - * An "enum ovn_stage" indicates whether the stage is part of a logical switch
> - * or router, whether the stage is part of the ingress or egress pipeline, and
> - * the table within that pipeline.  The first three components are combined to
> - * form the stage's full name, e.g. S_SWITCH_IN_PORT_SEC_L2,
> - * S_ROUTER_OUT_DELIVERY. */
> -enum ovn_stage {
> -#define PIPELINE_STAGES                                                   \
> -    /* Logical switch ingress stages. */                                  \
> -    PIPELINE_STAGE(SWITCH, IN,  CHECK_PORT_SEC, 0, "ls_in_check_port_sec")   \
> -    PIPELINE_STAGE(SWITCH, IN,  APPLY_PORT_SEC, 1, "ls_in_apply_port_sec")   \
> -    PIPELINE_STAGE(SWITCH, IN,  LOOKUP_FDB ,    2, "ls_in_lookup_fdb")    \
> -    PIPELINE_STAGE(SWITCH, IN,  PUT_FDB,        3, "ls_in_put_fdb")       \
> -    PIPELINE_STAGE(SWITCH, IN,  PRE_ACL,        4, "ls_in_pre_acl")       \
> -    PIPELINE_STAGE(SWITCH, IN,  PRE_LB,         5, "ls_in_pre_lb")        \
> -    PIPELINE_STAGE(SWITCH, IN,  PRE_STATEFUL,   6, "ls_in_pre_stateful")  \
> -    PIPELINE_STAGE(SWITCH, IN,  ACL_HINT,       7, "ls_in_acl_hint")      \
> -    PIPELINE_STAGE(SWITCH, IN,  ACL_EVAL,       8, "ls_in_acl_eval")      \
> -    PIPELINE_STAGE(SWITCH, IN,  ACL_ACTION,     9, "ls_in_acl_action")    \
> -    PIPELINE_STAGE(SWITCH, IN,  QOS_MARK,      10, "ls_in_qos_mark")      \
> -    PIPELINE_STAGE(SWITCH, IN,  QOS_METER,     11, "ls_in_qos_meter")     \
> -    PIPELINE_STAGE(SWITCH, IN,  LB_AFF_CHECK,  12, "ls_in_lb_aff_check")  \
> -    PIPELINE_STAGE(SWITCH, IN,  LB,            13, "ls_in_lb")            \
> -    PIPELINE_STAGE(SWITCH, IN,  LB_AFF_LEARN,  14, "ls_in_lb_aff_learn")  \
> -    PIPELINE_STAGE(SWITCH, IN,  PRE_HAIRPIN,   15, "ls_in_pre_hairpin")   \
> -    PIPELINE_STAGE(SWITCH, IN,  NAT_HAIRPIN,   16, "ls_in_nat_hairpin")   \
> -    PIPELINE_STAGE(SWITCH, IN,  HAIRPIN,       17, "ls_in_hairpin")       \
> -    PIPELINE_STAGE(SWITCH, IN,  ACL_AFTER_LB_EVAL,  18, \
> -                   "ls_in_acl_after_lb_eval")  \
> -    PIPELINE_STAGE(SWITCH, IN,  ACL_AFTER_LB_ACTION,  19, \
> -                   "ls_in_acl_after_lb_action")  \
> -    PIPELINE_STAGE(SWITCH, IN,  STATEFUL,      20, "ls_in_stateful")      \
> -    PIPELINE_STAGE(SWITCH, IN,  ARP_ND_RSP,    21, "ls_in_arp_rsp")       \
> -    PIPELINE_STAGE(SWITCH, IN,  DHCP_OPTIONS,  22, "ls_in_dhcp_options")  \
> -    PIPELINE_STAGE(SWITCH, IN,  DHCP_RESPONSE, 23, "ls_in_dhcp_response") \
> -    PIPELINE_STAGE(SWITCH, IN,  DNS_LOOKUP,    24, "ls_in_dns_lookup")    \
> -    PIPELINE_STAGE(SWITCH, IN,  DNS_RESPONSE,  25, "ls_in_dns_response")  \
> -    PIPELINE_STAGE(SWITCH, IN,  EXTERNAL_PORT, 26, "ls_in_external_port") \
> -    PIPELINE_STAGE(SWITCH, IN,  L2_LKUP,       27, "ls_in_l2_lkup")       \
> -    PIPELINE_STAGE(SWITCH, IN,  L2_UNKNOWN,    28, "ls_in_l2_unknown")    \
> -                                                                          \
> -    /* Logical switch egress stages. */                                   \
> -    PIPELINE_STAGE(SWITCH, OUT, PRE_ACL,      0, "ls_out_pre_acl")        \
> -    PIPELINE_STAGE(SWITCH, OUT, PRE_LB,       1, "ls_out_pre_lb")         \
> -    PIPELINE_STAGE(SWITCH, OUT, PRE_STATEFUL, 2, "ls_out_pre_stateful")   \
> -    PIPELINE_STAGE(SWITCH, OUT, ACL_HINT,     3, "ls_out_acl_hint")       \
> -    PIPELINE_STAGE(SWITCH, OUT, ACL_EVAL,     4, "ls_out_acl_eval")       \
> -    PIPELINE_STAGE(SWITCH, OUT, ACL_ACTION,   5, "ls_out_acl_action")     \
> -    PIPELINE_STAGE(SWITCH, OUT, QOS_MARK,     6, "ls_out_qos_mark")       \
> -    PIPELINE_STAGE(SWITCH, OUT, QOS_METER,    7, "ls_out_qos_meter")      \
> -    PIPELINE_STAGE(SWITCH, OUT, STATEFUL,     8, "ls_out_stateful")       \
> -    PIPELINE_STAGE(SWITCH, OUT, CHECK_PORT_SEC,  9, "ls_out_check_port_sec") \
> -    PIPELINE_STAGE(SWITCH, OUT, APPLY_PORT_SEC, 10, "ls_out_apply_port_sec") \
> -                                                                      \
> -    /* Logical router ingress stages. */                              \
> -    PIPELINE_STAGE(ROUTER, IN,  ADMISSION,       0, "lr_in_admission")    \
> -    PIPELINE_STAGE(ROUTER, IN,  LOOKUP_NEIGHBOR, 1, "lr_in_lookup_neighbor") \
> -    PIPELINE_STAGE(ROUTER, IN,  LEARN_NEIGHBOR,  2, "lr_in_learn_neighbor") \
> -    PIPELINE_STAGE(ROUTER, IN,  IP_INPUT,        3, "lr_in_ip_input")     \
> -    PIPELINE_STAGE(ROUTER, IN,  UNSNAT,          4, "lr_in_unsnat")       \
> -    PIPELINE_STAGE(ROUTER, IN,  DEFRAG,          5, "lr_in_defrag")       \
> -    PIPELINE_STAGE(ROUTER, IN,  LB_AFF_CHECK,    6, "lr_in_lb_aff_check") \
> -    PIPELINE_STAGE(ROUTER, IN,  DNAT,            7, "lr_in_dnat")         \
> -    PIPELINE_STAGE(ROUTER, IN,  LB_AFF_LEARN,    8, "lr_in_lb_aff_learn") \
> -    PIPELINE_STAGE(ROUTER, IN,  ECMP_STATEFUL,   9, "lr_in_ecmp_stateful") \
> -    PIPELINE_STAGE(ROUTER, IN,  ND_RA_OPTIONS,   10, "lr_in_nd_ra_options") \
> -    PIPELINE_STAGE(ROUTER, IN,  ND_RA_RESPONSE,  11, "lr_in_nd_ra_response") \
> -    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_PRE,  12, "lr_in_ip_routing_pre")  \
> -    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING,      13, "lr_in_ip_routing")      \
> -    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_ECMP, 14, "lr_in_ip_routing_ecmp") \
> -    PIPELINE_STAGE(ROUTER, IN,  POLICY,          15, "lr_in_policy")          \
> -    PIPELINE_STAGE(ROUTER, IN,  POLICY_ECMP,     16, "lr_in_policy_ecmp")     \
> -    PIPELINE_STAGE(ROUTER, IN,  ARP_RESOLVE,     17, "lr_in_arp_resolve")     \
> -    PIPELINE_STAGE(ROUTER, IN,  CHK_PKT_LEN,     18, "lr_in_chk_pkt_len")     \
> -    PIPELINE_STAGE(ROUTER, IN,  LARGER_PKTS,     19, "lr_in_larger_pkts")     \
> -    PIPELINE_STAGE(ROUTER, IN,  GW_REDIRECT,     20, "lr_in_gw_redirect")     \
> -    PIPELINE_STAGE(ROUTER, IN,  ARP_REQUEST,     21, "lr_in_arp_request")     \
> -                                                                      \
> -    /* Logical router egress stages. */                               \
> -    PIPELINE_STAGE(ROUTER, OUT, CHECK_DNAT_LOCAL,   0,                       \
> -                   "lr_out_chk_dnat_local")                                  \
> -    PIPELINE_STAGE(ROUTER, OUT, UNDNAT,             1, "lr_out_undnat")      \
> -    PIPELINE_STAGE(ROUTER, OUT, POST_UNDNAT,        2, "lr_out_post_undnat") \
> -    PIPELINE_STAGE(ROUTER, OUT, SNAT,               3, "lr_out_snat")        \
> -    PIPELINE_STAGE(ROUTER, OUT, POST_SNAT,          4, "lr_out_post_snat")   \
> -    PIPELINE_STAGE(ROUTER, OUT, EGR_LOOP,           5, "lr_out_egr_loop")    \
> -    PIPELINE_STAGE(ROUTER, OUT, DELIVERY,           6, "lr_out_delivery")
> -
> -#define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)   \
> -    S_##DP_TYPE##_##PIPELINE##_##STAGE                          \
> -        = OVN_STAGE_BUILD(DP_##DP_TYPE, P_##PIPELINE, TABLE),
> -    PIPELINE_STAGES
> -#undef PIPELINE_STAGE
> -};
>
>  /* Due to various hard-coded priorities need to implement ACLs, the
>   * northbound database supports a smaller range of ACL priorities than
> @@ -391,51 +282,9 @@ enum ovn_stage {
>  #define ROUTE_PRIO_OFFSET_STATIC 1
>  #define ROUTE_PRIO_OFFSET_CONNECTED 2
>
> -/* Returns an "enum ovn_stage" built from the arguments. */
> -static enum ovn_stage
> -ovn_stage_build(enum ovn_datapath_type dp_type, enum ovn_pipeline pipeline,
> -                uint8_t table)
> -{
> -    return OVN_STAGE_BUILD(dp_type, pipeline, table);
> -}
> -
> -/* Returns the pipeline to which 'stage' belongs. */
> -static enum ovn_pipeline
> -ovn_stage_get_pipeline(enum ovn_stage stage)
> -{
> -    return (stage >> 8) & 1;
> -}
> -
> -/* Returns the pipeline name to which 'stage' belongs. */
> -static const char *
> -ovn_stage_get_pipeline_name(enum ovn_stage stage)
> -{
> -    return ovn_stage_get_pipeline(stage) == P_IN ? "ingress" : "egress";
> -}
> -
> -/* Returns the table to which 'stage' belongs. */
> -static uint8_t
> -ovn_stage_get_table(enum ovn_stage stage)
> -{
> -    return stage & 0xff;
> -}
> -
> -/* Returns a string name for 'stage'. */
> -static const char *
> -ovn_stage_to_str(enum ovn_stage stage)
> -{
> -    switch (stage) {
> -#define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)       \
> -        case S_##DP_TYPE##_##PIPELINE##_##STAGE: return NAME;
> -    PIPELINE_STAGES
> -#undef PIPELINE_STAGE
> -        default: return "<unknown>";
> -    }
> -}
> -
>  /* Returns the type of the datapath to which a flow with the given 'stage' may
>   * be added. */
> -static enum ovn_datapath_type
> +enum ovn_datapath_type
>  ovn_stage_to_datapath_type(enum ovn_stage stage)
>  {
>      switch (stage) {
> @@ -680,13 +529,6 @@ ovn_datapath_destroy(struct hmap *datapaths, struct ovn_datapath *od)
>      }
>  }
>
> -/* Returns 'od''s datapath type. */
> -static enum ovn_datapath_type
> -ovn_datapath_get_type(const struct ovn_datapath *od)
> -{
> -    return od->nbs ? DP_SWITCH : DP_ROUTER;
> -}
> -
>  static struct ovn_datapath *
>  ovn_datapath_find_(const struct hmap *datapaths,
>                     const struct uuid *uuid)
> @@ -722,13 +564,7 @@ ovn_datapath_find_by_key(struct hmap *datapaths, uint32_t dp_key)
>      return NULL;
>  }
>
> -static bool
> -ovn_datapath_is_stale(const struct ovn_datapath *od)
> -{
> -    return !od->nbr && !od->nbs;
> -}
> -
> -static struct ovn_datapath *
> +struct ovn_datapath *
>  ovn_datapath_from_sbrec(const struct hmap *ls_datapaths,
>                          const struct hmap *lr_datapaths,
>                          const struct sbrec_datapath_binding *sb)
> @@ -1297,19 +1133,6 @@ struct ovn_port_routable_addresses {
>      size_t n_addrs;
>  };
>
> -/* A node that maintains link between an object (such as an ovn_port) and
> - * a lflow. */
> -struct lflow_ref_node {
> -    /* This list follows different lflows referenced by the same object. List
> -     * head is, for example, ovn_port->lflows.  */
> -    struct ovs_list lflow_list_node;
> -    /* This list follows different objects that reference the same lflow. List
> -     * head is ovn_lflow->referenced_by. */
> -    struct ovs_list ref_list_node;
> -    /* The lflow. */
> -    struct ovn_lflow *lflow;
> -};
> -
>  static bool lsp_can_be_inc_processed(const struct nbrec_logical_switch_port *);
>
>  static bool
> @@ -1389,6 +1212,8 @@ ovn_port_set_nb(struct ovn_port *op,
>      init_mcast_port_info(&op->mcast_info, op->nbsp, op->nbrp);
>  }
>
> +static bool lsp_is_router(const struct nbrec_logical_switch_port *nbsp);
> +
>  static struct ovn_port *
>  ovn_port_create(struct hmap *ports, const char *key,
>                  const struct nbrec_logical_switch_port *nbsp,
> @@ -1407,12 +1232,14 @@ ovn_port_create(struct hmap *ports, const char *key,
>      op->l3dgw_port = op->cr_port = NULL;
>      hmap_insert(ports, &op->key_node, hash_string(op->key, 0));
>
> -    ovs_list_init(&op->lflows);
> +    op->lflow_ref = lflow_ref_create();
> +    op->stateful_lflow_ref = lflow_ref_create();
> +
>      return op;
>  }
>
>  static void
> -ovn_port_destroy_orphan(struct ovn_port *port)
> +ovn_port_cleanup(struct ovn_port *port)
>  {
>      if (port->tunnel_key) {
>          ovs_assert(port->od);
> @@ -1422,6 +1249,8 @@ ovn_port_destroy_orphan(struct ovn_port *port)
>          destroy_lport_addresses(&port->lsp_addrs[i]);
>      }
>      free(port->lsp_addrs);
> +    port->n_lsp_addrs = 0;
> +    port->lsp_addrs = NULL;
>
>      if (port->peer) {
>          port->peer->peer = NULL;
> @@ -1431,18 +1260,22 @@ ovn_port_destroy_orphan(struct ovn_port *port)
>          destroy_lport_addresses(&port->ps_addrs[i]);
>      }
>      free(port->ps_addrs);
> +    port->ps_addrs = NULL;
> +    port->n_ps_addrs = 0;
>
>      destroy_lport_addresses(&port->lrp_networks);
>      destroy_lport_addresses(&port->proxy_arp_addrs);
> +}
> +
> +static void
> +ovn_port_destroy_orphan(struct ovn_port *port)
> +{
> +    ovn_port_cleanup(port);
>      free(port->json_key);
>      free(port->key);
> +    lflow_ref_destroy(port->lflow_ref);
> +    lflow_ref_destroy(port->stateful_lflow_ref);
>
> -    struct lflow_ref_node *l;
> -    LIST_FOR_EACH_SAFE (l, lflow_list_node, &port->lflows) {
> -        ovs_list_remove(&l->lflow_list_node);
> -        ovs_list_remove(&l->ref_list_node);
> -        free(l);
> -    }
>      free(port);
>  }
>
> @@ -3889,124 +3722,6 @@ build_lb_port_related_data(
>      build_lswitch_lbs_from_lrouter(lr_datapaths, lb_dps_map, lb_group_dps_map);
>  }
>
> -
> -struct ovn_dp_group {
> -    unsigned long *bitmap;
> -    struct sbrec_logical_dp_group *dp_group;
> -    struct hmap_node node;
> -};
> -
> -static struct ovn_dp_group *
> -ovn_dp_group_find(const struct hmap *dp_groups,
> -                  const unsigned long *dpg_bitmap, size_t bitmap_len,
> -                  uint32_t hash)
> -{
> -    struct ovn_dp_group *dpg;
> -
> -    HMAP_FOR_EACH_WITH_HASH (dpg, node, hash, dp_groups) {
> -        if (bitmap_equal(dpg->bitmap, dpg_bitmap, bitmap_len)) {
> -            return dpg;
> -        }
> -    }
> -    return NULL;
> -}
> -
> -static struct sbrec_logical_dp_group *
> -ovn_sb_insert_or_update_logical_dp_group(
> -                            struct ovsdb_idl_txn *ovnsb_txn,
> -                            struct sbrec_logical_dp_group *dp_group,
> -                            const unsigned long *dpg_bitmap,
> -                            const struct ovn_datapaths *datapaths)
> -{
> -    const struct sbrec_datapath_binding **sb;
> -    size_t n = 0, index;
> -
> -    sb = xmalloc(bitmap_count1(dpg_bitmap, ods_size(datapaths)) * sizeof *sb);
> -    BITMAP_FOR_EACH_1 (index, ods_size(datapaths), dpg_bitmap) {
> -        sb[n++] = datapaths->array[index]->sb;
> -    }
> -    if (!dp_group) {
> -        dp_group = sbrec_logical_dp_group_insert(ovnsb_txn);
> -    }
> -    sbrec_logical_dp_group_set_datapaths(
> -        dp_group, (struct sbrec_datapath_binding **) sb, n);
> -    free(sb);
> -
> -    return dp_group;
> -}
> -
> -/* Given a desired bitmap, finds a datapath group in 'dp_groups'.  If it
> - * doesn't exist, creates a new one and adds it to 'dp_groups'.
> - * If 'sb_group' is provided, function will try to re-use this group by
> - * either taking it directly, or by modifying, if it's not already in use. */
> -static struct ovn_dp_group *
> -ovn_dp_group_get_or_create(struct ovsdb_idl_txn *ovnsb_txn,
> -                           struct hmap *dp_groups,
> -                           struct sbrec_logical_dp_group *sb_group,
> -                           size_t desired_n,
> -                           const unsigned long *desired_bitmap,
> -                           size_t bitmap_len,
> -                           bool is_switch,
> -                           const struct ovn_datapaths *ls_datapaths,
> -                           const struct ovn_datapaths *lr_datapaths)
> -{
> -    struct ovn_dp_group *dpg;
> -    uint32_t hash;
> -
> -    hash = hash_int(desired_n, 0);
> -    dpg = ovn_dp_group_find(dp_groups, desired_bitmap, bitmap_len, hash);
> -    if (dpg) {
> -        return dpg;
> -    }
> -
> -    bool update_dp_group = false, can_modify = false;
> -    unsigned long *dpg_bitmap;
> -    size_t i, n = 0;
> -
> -    dpg_bitmap = sb_group ? bitmap_allocate(bitmap_len) : NULL;
> -    for (i = 0; sb_group && i < sb_group->n_datapaths; i++) {
> -        struct ovn_datapath *datapath_od;
> -
> -        datapath_od = ovn_datapath_from_sbrec(
> -                        ls_datapaths ? &ls_datapaths->datapaths : NULL,
> -                        lr_datapaths ? &lr_datapaths->datapaths : NULL,
> -                        sb_group->datapaths[i]);
> -        if (!datapath_od || ovn_datapath_is_stale(datapath_od)) {
> -            break;
> -        }
> -        bitmap_set1(dpg_bitmap, datapath_od->index);
> -        n++;
> -    }
> -    if (!sb_group || i != sb_group->n_datapaths) {
> -        /* No group or stale group.  Not going to be used. */
> -        update_dp_group = true;
> -        can_modify = true;
> -    } else if (!bitmap_equal(dpg_bitmap, desired_bitmap, bitmap_len)) {
> -        /* The group in Sb is different. */
> -        update_dp_group = true;
> -        /* We can modify existing group if it's not already in use. */
> -        can_modify = !ovn_dp_group_find(dp_groups, dpg_bitmap,
> -                                        bitmap_len, hash_int(n, 0));
> -    }
> -
> -    bitmap_free(dpg_bitmap);
> -
> -    dpg = xzalloc(sizeof *dpg);
> -    dpg->bitmap = bitmap_clone(desired_bitmap, bitmap_len);
> -    if (!update_dp_group) {
> -        dpg->dp_group = sb_group;
> -    } else {
> -        dpg->dp_group = ovn_sb_insert_or_update_logical_dp_group(
> -                            ovnsb_txn,
> -                            can_modify ? sb_group : NULL,
> -                            desired_bitmap,
> -                            is_switch ? ls_datapaths : lr_datapaths);
> -    }
> -    hmap_insert(dp_groups, &dpg->node, hash);
> -
> -    return dpg;
> -}
> -
>  struct sb_lb {
>      struct hmap_node hmap_node;
>
> @@ -4820,28 +4535,20 @@ ovn_port_find_in_datapath(struct ovn_datapath *od,
>      return NULL;
>  }
>
> -static struct ovn_port *
> -ls_port_create(struct ovsdb_idl_txn *ovnsb_txn, struct hmap *ls_ports,
> -               const char *key, const struct nbrec_logical_switch_port *nbsp,
> -               struct ovn_datapath *od, const struct sbrec_port_binding *sb,
> -               struct ovs_list *lflows,
> -               const struct sbrec_mirror_table *sbrec_mirror_table,
> -               const struct sbrec_chassis_table *sbrec_chassis_table,
> -               struct ovsdb_idl_index *sbrec_chassis_by_name,
> -               struct ovsdb_idl_index *sbrec_chassis_by_hostname)
> +static bool
> +ls_port_init(struct ovn_port *op, struct ovsdb_idl_txn *ovnsb_txn,
> +             struct hmap *ls_ports, struct ovn_datapath *od,
> +             const struct sbrec_port_binding *sb,
> +             const struct sbrec_mirror_table *sbrec_mirror_table,
> +             const struct sbrec_chassis_table *sbrec_chassis_table,
> +             struct ovsdb_idl_index *sbrec_chassis_by_name,
> +             struct ovsdb_idl_index *sbrec_chassis_by_hostname)
>  {
> -    struct ovn_port *op = ovn_port_create(ls_ports, key, nbsp, NULL,
> -                                          NULL);
> -    parse_lsp_addrs(op);
>      op->od = od;
> -    hmap_insert(&od->ports, &op->dp_node, hmap_node_hash(&op->key_node));
> -    if (lflows) {
> -        ovs_list_splice(&op->lflows, lflows->next, lflows);
> -    }
> -
> +    parse_lsp_addrs(op);
>      /* Assign explicitly requested tunnel ids first. */
>      if (!ovn_port_assign_requested_tnl_id(sbrec_chassis_table, op)) {
> -        return NULL;
> +        return false;
>      }
>      if (sb) {
>          op->sb = sb;
> @@ -4858,14 +4565,57 @@ ls_port_create(struct ovsdb_idl_txn *ovnsb_txn, struct hmap *ls_ports,
>      }
>      /* Assign new tunnel ids where needed. */
>      if (!ovn_port_allocate_key(sbrec_chassis_table, ls_ports, op)) {
> -        return NULL;
> +        return false;
>      }
>      ovn_port_update_sbrec(ovnsb_txn, sbrec_chassis_by_name,
>                            sbrec_chassis_by_hostname, NULL, sbrec_mirror_table,
>                            op, NULL, NULL);
> +    return true;
> +}
> +
> +static struct ovn_port *
> +ls_port_create(struct ovsdb_idl_txn *ovnsb_txn, struct hmap *ls_ports,
> +               const char *key, const struct nbrec_logical_switch_port *nbsp,
> +               struct ovn_datapath *od, const struct sbrec_port_binding *sb,
> +               const struct sbrec_mirror_table *sbrec_mirror_table,
> +               const struct sbrec_chassis_table *sbrec_chassis_table,
> +               struct ovsdb_idl_index *sbrec_chassis_by_name,
> +               struct ovsdb_idl_index *sbrec_chassis_by_hostname)
> +{
> +    struct ovn_port *op = ovn_port_create(ls_ports, key, nbsp, NULL,
> +                                          NULL);
> +    hmap_insert(&od->ports, &op->dp_node, hmap_node_hash(&op->key_node));
> +    if (!ls_port_init(op, ovnsb_txn, ls_ports, od, sb,
> +                      sbrec_mirror_table, sbrec_chassis_table,
> +                      sbrec_chassis_by_name, sbrec_chassis_by_hostname)) {
> +        ovn_port_destroy(ls_ports, op);
> +        return NULL;
> +    }
> +
>      return op;
>  }
>
> +static bool
> +ls_port_reinit(struct ovn_port *op, struct ovsdb_idl_txn *ovnsb_txn,
> +                struct hmap *ls_ports,
> +                const struct nbrec_logical_switch_port *nbsp,
> +                const struct nbrec_logical_router_port *nbrp,
> +                struct ovn_datapath *od,
> +                const struct sbrec_port_binding *sb,
> +                const struct sbrec_mirror_table *sbrec_mirror_table,
> +                const struct sbrec_chassis_table *sbrec_chassis_table,
> +                struct ovsdb_idl_index *sbrec_chassis_by_name,
> +                struct ovsdb_idl_index *sbrec_chassis_by_hostname)
> +{
> +    ovn_port_cleanup(op);
> +    op->sb = sb;
> +    ovn_port_set_nb(op, nbsp, nbrp);
> +    op->l3dgw_port = op->cr_port = NULL;
> +    return ls_port_init(op, ovnsb_txn, ls_ports, od, sb,
> +                        sbrec_mirror_table, sbrec_chassis_table,
> +                        sbrec_chassis_by_name, sbrec_chassis_by_hostname);
> +}
> +
>  /* Returns true if the logical switch has changes which can be
>   * incrementally handled.
>   * Presently supports i-p for the below changes:
> @@ -5005,7 +4755,7 @@ ls_handle_lsp_changes(struct ovsdb_idl_txn *ovnsb_idl_txn,
>                  goto fail;
>              }
>              op = ls_port_create(ovnsb_idl_txn, &nd->ls_ports,
> -                                new_nbsp->name, new_nbsp, od, NULL, NULL,
> +                                new_nbsp->name, new_nbsp, od, NULL,
>                                  ni->sbrec_mirror_table,
>                                  ni->sbrec_chassis_table,
>                                  ni->sbrec_chassis_by_name,
> @@ -5036,17 +4786,12 @@ ls_handle_lsp_changes(struct ovsdb_idl_txn *ovnsb_idl_txn,
>                  op->visited = true;
>                  continue;
>              }
> -            struct ovs_list lflows = OVS_LIST_INITIALIZER(&lflows);
> -            ovs_list_splice(&lflows, op->lflows.next, &op->lflows);
> -            ovn_port_destroy(&nd->ls_ports, op);
> -            op = ls_port_create(ovnsb_idl_txn, &nd->ls_ports,
> -                                new_nbsp->name, new_nbsp, od, sb, &lflows,
> -                                ni->sbrec_mirror_table,
> +            if (!ls_port_reinit(op, ovnsb_idl_txn, &nd->ls_ports,
> +                                new_nbsp, NULL,
> +                                od, sb, ni->sbrec_mirror_table,
>                                  ni->sbrec_chassis_table,
>                                  ni->sbrec_chassis_by_name,
> -                                ni->sbrec_chassis_by_hostname);
> -            ovs_assert(ovs_list_is_empty(&lflows));
> -            if (!op) {
> +                                ni->sbrec_chassis_by_hostname)) {
>                  goto fail;
>              }
>              add_op_to_northd_tracked_ports(&trk_lsps->updated, op);
> @@ -5991,170 +5736,7 @@ ovn_igmp_group_destroy(struct hmap *igmp_groups,
>   * function of most of the northbound database.
>   */
>
> -struct ovn_lflow {
> -    struct hmap_node hmap_node;
> -    struct ovs_list list_node;   /* For temporary list of lflows. Don't remove
> -                                    at destroy. */
> -
> -    struct ovn_datapath *od;     /* 'logical_datapath' in SB schema.  */
> -    unsigned long *dpg_bitmap;   /* Bitmap of all datapaths by their 'index'.*/
> -    enum ovn_stage stage;
> -    uint16_t priority;
> -    char *match;
> -    char *actions;
> -    char *io_port;
> -    char *stage_hint;
> -    char *ctrl_meter;
> -    size_t n_ods;                /* Number of datapaths referenced by 'od' and
> -                                  * 'dpg_bitmap'. */
> -    struct ovn_dp_group *dpg;    /* Link to unique Sb datapath group. */
> -
> -    struct ovs_list referenced_by;  /* List of struct lflow_ref_node. */
> -    const char *where;
> -
> -    struct uuid sb_uuid;         /* SB DB row uuid, specified by northd. */
> -};
> -
> -static void ovn_lflow_destroy(struct hmap *lflows, struct ovn_lflow *lflow);
> -static struct ovn_lflow *ovn_lflow_find(const struct hmap *lflows,
> -                                        const struct ovn_datapath *od,
> -                                        enum ovn_stage stage,
> -                                        uint16_t priority, const char *match,
> -                                        const char *actions,
> -                                        const char *ctrl_meter, uint32_t hash);
> -
> -static char *
> -ovn_lflow_hint(const struct ovsdb_idl_row *row)
> -{
> -    if (!row) {
> -        return NULL;
> -    }
> -    return xasprintf("%08x", row->uuid.parts[0]);
> -}
> -
> -static bool
> -ovn_lflow_equal(const struct ovn_lflow *a, const struct ovn_datapath *od,
> -                enum ovn_stage stage, uint16_t priority, const char *match,
> -                const char *actions, const char *ctrl_meter)
> -{
> -    return (a->od == od
> -            && a->stage == stage
> -            && a->priority == priority
> -            && !strcmp(a->match, match)
> -            && !strcmp(a->actions, actions)
> -            && nullable_string_is_equal(a->ctrl_meter, ctrl_meter));
> -}
> -
> -enum {
> -    STATE_NULL,               /* parallelization is off */
> -    STATE_INIT_HASH_SIZES,    /* parallelization is on; hashes sizing needed */
> -    STATE_USE_PARALLELIZATION /* parallelization is on */
> -};
> -static int parallelization_state = STATE_NULL;
> -
> -static void
> -ovn_lflow_init(struct ovn_lflow *lflow, struct ovn_datapath *od,
> -               size_t dp_bitmap_len, enum ovn_stage stage, uint16_t priority,
> -               char *match, char *actions, char *io_port, char *ctrl_meter,
> -               char *stage_hint, const char *where)
> -{
> -    ovs_list_init(&lflow->list_node);
> -    ovs_list_init(&lflow->referenced_by);
> -    lflow->dpg_bitmap = bitmap_allocate(dp_bitmap_len);
> -    lflow->od = od;
> -    lflow->stage = stage;
> -    lflow->priority = priority;
> -    lflow->match = match;
> -    lflow->actions = actions;
> -    lflow->io_port = io_port;
> -    lflow->stage_hint = stage_hint;
> -    lflow->ctrl_meter = ctrl_meter;
> -    lflow->dpg = NULL;
> -    lflow->where = where;
> -    lflow->sb_uuid = UUID_ZERO;
> -}
> -
> -/* The lflow_hash_lock is a mutex array that protects updates to the shared
> - * lflow table across threads when parallel lflow build and dp-group are both
> - * enabled. To avoid high contention between threads, a big array of mutexes
> - * are used instead of just one. This is possible because when parallel build
> - * is used we only use hmap_insert_fast() to update the hmap, which would not
> - * touch the bucket array but only the list in a single bucket. We only need to
> - * make sure that when adding lflows to the same hash bucket, the same lock is
> - * used, so that no two threads can add to the bucket at the same time.  It is
> - * ok that the same lock is used to protect multiple buckets, so a fixed sized
> - * mutex array is used instead of 1-1 mapping to the hash buckets. This
> - * simplies the implementation while effectively reduces lock contention
> - * because the chance that different threads contending the same lock amongst
> - * the big number of locks is very low. */
> -#define LFLOW_HASH_LOCK_MASK 0xFFFF
> -static struct ovs_mutex lflow_hash_locks[LFLOW_HASH_LOCK_MASK + 1];
> -
> -static void
> -lflow_hash_lock_init(void)
> -{
> -    if (!lflow_hash_lock_initialized) {
> -        for (size_t i = 0; i < LFLOW_HASH_LOCK_MASK + 1; i++) {
> -            ovs_mutex_init(&lflow_hash_locks[i]);
> -        }
> -        lflow_hash_lock_initialized = true;
> -    }
> -}
> -
> -static void
> -lflow_hash_lock_destroy(void)
> -{
> -    if (lflow_hash_lock_initialized) {
> -        for (size_t i = 0; i < LFLOW_HASH_LOCK_MASK + 1; i++) {
> -            ovs_mutex_destroy(&lflow_hash_locks[i]);
> -        }
> -    }
> -    lflow_hash_lock_initialized = false;
> -}
> -
> -/* Full thread safety analysis is not possible with hash locks, because
> - * they are taken conditionally based on the 'parallelization_state' and
> - * a flow hash.  Also, the order in which two hash locks are taken is not
> - * predictable during the static analysis.
> - *
> - * Since the order of taking two locks depends on a random hash, to avoid
> - * ABBA deadlocks, no two hash locks can be nested.  In that sense an array
> - * of hash locks is similar to a single mutex.
> - *
> - * Using a fake mutex to partially simulate thread safety restrictions, as
> - * if it were actually a single mutex.
> - *
> - * OVS_NO_THREAD_SAFETY_ANALYSIS below allows us to ignore conditional
> - * nature of the lock.  Unlike other attributes, it applies to the
> - * implementation and not to the interface.  So, we can define a function
> - * that acquires the lock without analysing the way it does that.
> - */
> -extern struct ovs_mutex fake_hash_mutex;
> -
> -static struct ovs_mutex *
> -lflow_hash_lock(const struct hmap *lflow_map, uint32_t hash)
> -    OVS_ACQUIRES(fake_hash_mutex)
> -    OVS_NO_THREAD_SAFETY_ANALYSIS
> -{
> -    struct ovs_mutex *hash_lock = NULL;
> -
> -    if (parallelization_state == STATE_USE_PARALLELIZATION) {
> -        hash_lock =
> -            &lflow_hash_locks[hash & lflow_map->mask & LFLOW_HASH_LOCK_MASK];
> -        ovs_mutex_lock(hash_lock);
> -    }
> -    return hash_lock;
> -}
> -
> -static void
> -lflow_hash_unlock(struct ovs_mutex *hash_lock)
> -    OVS_RELEASES(fake_hash_mutex)
> -    OVS_NO_THREAD_SAFETY_ANALYSIS
> -{
> -    if (hash_lock) {
> -        ovs_mutex_unlock(hash_lock);
> -    }
> -}
> +int parallelization_state = STATE_NULL;
>
>
>  /* This thread-local var is used for parallel lflow building when dp-groups is
> @@ -6167,240 +5749,7 @@ lflow_hash_unlock(struct ovs_mutex *hash_lock)
>   * threads are collected to fix the lflow hmap's size (by the function
>   * fix_flow_map_size()).
>   * */
> -static thread_local size_t thread_lflow_counter = 0;
> -
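For context, the per-thread counter exists because concurrent inserts into the
shared lflow map cannot reliably keep its element count up to date; as the
comment above says, the counters are summed later by fix_flow_map_size() to
fix the map's size.  A rough sketch of what that fix-up amounts to (the helper
name, the 'counters' array and the direct assignment are illustrative, not the
actual fix_flow_map_size() code):

    /* Illustrative only: recompute the lflow hmap's element count from the
     * per-thread counters gathered after a parallel build. */
    static void
    example_fix_flow_map_size(struct hmap *lflow_map,
                              const size_t *counters, size_t n_threads)
    {
        size_t total = 0;
        for (size_t i = 0; i < n_threads; i++) {
            total += counters[i];
        }
        lflow_map->n = total;   /* 'n' is the hmap's element count. */
    }
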
> -/* Adds an OVN datapath to a datapath group of existing logical flow.
> - * Version to use when hash bucket locking is NOT required or the corresponding
> - * hash lock is already taken. */
> -static void
> -ovn_dp_group_add_with_reference(struct ovn_lflow *lflow_ref,
> -                                const struct ovn_datapath *od,
> -                                const unsigned long *dp_bitmap,
> -                                size_t bitmap_len)
> -    OVS_REQUIRES(fake_hash_mutex)
> -{
> -    if (od) {
> -        bitmap_set1(lflow_ref->dpg_bitmap, od->index);
> -    }
> -    if (dp_bitmap) {
> -        bitmap_or(lflow_ref->dpg_bitmap, dp_bitmap, bitmap_len);
> -    }
> -}
> -
> -/* This global variable collects the lflows generated by do_ovn_lflow_add().
> - * start_collecting_lflows() will enable the lflow collection and the calls to
> - * do_ovn_lflow_add (or the macros ovn_lflow_add_...) will add generated lflows
> - * to the list. end_collecting_lflows() will disable it. */
> -static thread_local struct ovs_list collected_lflows;
> -static thread_local bool collecting_lflows = false;
> -
> -static void
> -start_collecting_lflows(void)
> -{
> -    ovs_assert(!collecting_lflows);
> -    ovs_list_init(&collected_lflows);
> -    collecting_lflows = true;
> -}
> -
> -static void
> -end_collecting_lflows(void)
> -{
> -    ovs_assert(collecting_lflows);
> -    collecting_lflows = false;
> -}
> -
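The intended bracketing, sketched from these helpers and from
link_ovn_port_to_lflows() further below (the middle comment stands in for
whatever per-port lflow generation runs in between; this is not an exact call
site from this file):

    /* Illustrative call pattern only. */
    start_collecting_lflows();
    /* ... ovn_lflow_add_*() calls for this port ... */
    link_ovn_port_to_lflows(op, &collected_lflows);
    end_collecting_lflows();

These helpers are removed here in favour of passing an lflow_ref directly to
the add macros, as the updated call sites below show.
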
> -/* Adds a row with the specified contents to the Logical_Flow table.
> - * Version to use when hash bucket locking is NOT required. */
> -static void
> -do_ovn_lflow_add(struct hmap *lflow_map, const struct ovn_datapath *od,
> -                 const unsigned long *dp_bitmap, size_t dp_bitmap_len,
> -                 uint32_t hash, enum ovn_stage stage, uint16_t priority,
> -                 const char *match, const char *actions, const char *io_port,
> -                 const struct ovsdb_idl_row *stage_hint,
> -                 const char *where, const char *ctrl_meter)
> -    OVS_REQUIRES(fake_hash_mutex)
> -{
> -
> -    struct ovn_lflow *old_lflow;
> -    struct ovn_lflow *lflow;
> -
> -    size_t bitmap_len = od ? ods_size(od->datapaths) : dp_bitmap_len;
> -    ovs_assert(bitmap_len);
> -
> -    if (collecting_lflows) {
> -        ovs_assert(od);
> -        ovs_assert(!dp_bitmap);
> -    } else {
> -        old_lflow = ovn_lflow_find(lflow_map, NULL, stage, priority, match,
> -                                   actions, ctrl_meter, hash);
> -        if (old_lflow) {
> -            ovn_dp_group_add_with_reference(old_lflow, od, dp_bitmap,
> -                                            bitmap_len);
> -            return;
> -        }
> -    }
> -
> -    lflow = xmalloc(sizeof *lflow);
> -    /* While adding new logical flows we're not setting a single datapath, but
> -     * collecting a group.  'od' will be updated later for all flows with only
> -     * one datapath in a group, so that it can be hashed correctly. */
> -    ovn_lflow_init(lflow, NULL, bitmap_len, stage, priority,
> -                   xstrdup(match), xstrdup(actions),
> -                   io_port ? xstrdup(io_port) : NULL,
> -                   nullable_xstrdup(ctrl_meter),
> -                   ovn_lflow_hint(stage_hint), where);
> -
> -    ovn_dp_group_add_with_reference(lflow, od, dp_bitmap, bitmap_len);
> -
> -    if (parallelization_state != STATE_USE_PARALLELIZATION) {
> -        hmap_insert(lflow_map, &lflow->hmap_node, hash);
> -    } else {
> -        hmap_insert_fast(lflow_map, &lflow->hmap_node, hash);
> -        thread_lflow_counter++;
> -    }
> -
> -    if (collecting_lflows) {
> -        ovs_list_insert(&collected_lflows, &lflow->list_node);
> -    }
> -}
> -
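To make the dp-group behaviour concrete: when lflow collection above is not
active and the same stage/priority/match/actions/meter tuple is added for two
different datapaths, only one ovn_lflow row is kept and its dpg_bitmap
accumulates both datapath indexes.  A minimal illustration (od1/od2 are
placeholders for two switch datapaths):

    /* Illustrative only: the second call finds the row created by the first
     * via ovn_lflow_find() and just sets another bit in its dpg_bitmap, so a
     * single logical flow (eventually with a datapath group) reaches the SB
     * database. */
    ovn_lflow_add(lflows, od1, S_SWITCH_IN_ARP_ND_RSP, 0, "1", "next;");
    ovn_lflow_add(lflows, od2, S_SWITCH_IN_ARP_ND_RSP, 0, "1", "next;");
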
> -/* Adds a row with the specified contents to the Logical_Flow table. */
> -static void
> -ovn_lflow_add_at(struct hmap *lflow_map, const struct ovn_datapath *od,
> -                 const unsigned long *dp_bitmap, size_t dp_bitmap_len,
> -                 enum ovn_stage stage, uint16_t priority,
> -                 const char *match, const char *actions, const char *io_port,
> -                 const char *ctrl_meter,
> -                 const struct ovsdb_idl_row *stage_hint, const char *where)
> -    OVS_EXCLUDED(fake_hash_mutex)
> -{
> -    struct ovs_mutex *hash_lock;
> -    uint32_t hash;
> -
> -    ovs_assert(!od ||
> -               ovn_stage_to_datapath_type(stage) == ovn_datapath_get_type(od));
> -
> -    hash = ovn_logical_flow_hash(ovn_stage_get_table(stage),
> -                                 ovn_stage_get_pipeline(stage),
> -                                 priority, match,
> -                                 actions);
> -
> -    hash_lock = lflow_hash_lock(lflow_map, hash);
> -    do_ovn_lflow_add(lflow_map, od, dp_bitmap, dp_bitmap_len, hash, stage,
> -                     priority, match, actions, io_port, stage_hint, where,
> -                     ctrl_meter);
> -    lflow_hash_unlock(hash_lock);
> -}
> -
> -static void
> -__ovn_lflow_add_default_drop(struct hmap *lflow_map,
> -                             struct ovn_datapath *od,
> -                             enum ovn_stage stage,
> -                             const char *where)
> -{
> -        ovn_lflow_add_at(lflow_map, od, NULL, 0, stage, 0, "1",
> -                         debug_drop_action(),
> -                         NULL, NULL, NULL, where );
> -}
> -
> -/* Adds a row with the specified contents to the Logical_Flow table. */
> -#define ovn_lflow_add_with_hint__(LFLOW_MAP, OD, STAGE, PRIORITY, MATCH, \
> -                                  ACTIONS, IN_OUT_PORT, CTRL_METER, \
> -                                  STAGE_HINT) \
> -    ovn_lflow_add_at(LFLOW_MAP, OD, NULL, 0, STAGE, PRIORITY, MATCH, ACTIONS, \
> -                     IN_OUT_PORT, CTRL_METER, STAGE_HINT, OVS_SOURCE_LOCATOR)
> -
> -#define ovn_lflow_add_with_hint(LFLOW_MAP, OD, STAGE, PRIORITY, MATCH, \
> -                                ACTIONS, STAGE_HINT) \
> -    ovn_lflow_add_at(LFLOW_MAP, OD, NULL, 0, STAGE, PRIORITY, MATCH, ACTIONS, \
> -                     NULL, NULL, STAGE_HINT, OVS_SOURCE_LOCATOR)
> -
> -#define ovn_lflow_add_with_dp_group(LFLOW_MAP, DP_BITMAP, DP_BITMAP_LEN, \
> -                                    STAGE, PRIORITY, MATCH, ACTIONS, \
> -                                    STAGE_HINT) \
> -    ovn_lflow_add_at(LFLOW_MAP, NULL, DP_BITMAP, DP_BITMAP_LEN, STAGE, \
> -                     PRIORITY, MATCH, ACTIONS, NULL, NULL, STAGE_HINT, \
> -                     OVS_SOURCE_LOCATOR)
> -
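Where a flow applies to a set of datapaths rather than a single 'od', callers
build a bitmap keyed by datapath index and hand it to the macro above.  A
hedged sketch (variable names are placeholders; 'ls_datapaths' stands for the
switch datapaths collection):

    /* Illustrative only: install one logical flow whose datapath group covers
     * every datapath with a bit set in 'dp_bitmap'. */
    unsigned long *dp_bitmap = bitmap_allocate(ods_size(ls_datapaths));
    bitmap_set1(dp_bitmap, od->index);
    ovn_lflow_add_with_dp_group(lflows, dp_bitmap, ods_size(ls_datapaths),
                                S_SWITCH_IN_LB, 120, ds_cstr(match),
                                ds_cstr(actions), NULL);
    bitmap_free(dp_bitmap);
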
> -#define ovn_lflow_add_default_drop(LFLOW_MAP, OD, STAGE)                    \
> -    __ovn_lflow_add_default_drop(LFLOW_MAP, OD, STAGE, OVS_SOURCE_LOCATOR)
> -
> -
> -/* This macro is similar to ovn_lflow_add_with_hint, except that it requires
> - * the IN_OUT_PORT argument, which specifies the lport name that appears in
> - * the MATCH and helps ovn-controller bypass lflow parsing when the lport is
> - * not local to the chassis. The criteria for the lport to be passed in this
> - * argument:
> - *
> - * - For ingress pipeline, the lport that is used to match "inport".
> - * - For egress pipeline, the lport that is used to match "outport".
> - *
> - * For now, only LS pipelines should use this macro.  */
> -#define ovn_lflow_add_with_lport_and_hint(LFLOW_MAP, OD, STAGE, PRIORITY, \
> -                                          MATCH, ACTIONS, IN_OUT_PORT, \
> -                                          STAGE_HINT) \
> -    ovn_lflow_add_at(LFLOW_MAP, OD, NULL, 0, STAGE, PRIORITY, MATCH, ACTIONS, \
> -                     IN_OUT_PORT, NULL, STAGE_HINT, OVS_SOURCE_LOCATOR)
> -
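A typical ingress-side use, where the lport whose name appears in the "inport"
match is the one passed as IN_OUT_PORT (values here are illustrative, not a
specific call site):

    /* Illustrative only: op->key names the lport matched on "inport", letting
     * ovn-controller skip this lflow on chassis where that port is not
     * bound. */
    ds_put_format(match, "inport == %s", op->json_key);
    ovn_lflow_add_with_lport_and_hint(lflows, op->od,
                                      S_SWITCH_IN_CHECK_PORT_SEC, 100,
                                      ds_cstr(match), "next;",
                                      op->key, &op->nbsp->header_);
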
> -#define ovn_lflow_add(LFLOW_MAP, OD, STAGE, PRIORITY, MATCH, ACTIONS) \
> -    ovn_lflow_add_at(LFLOW_MAP, OD, NULL, 0, STAGE, PRIORITY, MATCH, ACTIONS, \
> -                     NULL, NULL, NULL, OVS_SOURCE_LOCATOR)
> -
> -#define ovn_lflow_metered(LFLOW_MAP, OD, STAGE, PRIORITY, MATCH, ACTIONS, \
> -                          CTRL_METER) \
> -    ovn_lflow_add_with_hint__(LFLOW_MAP, OD, STAGE, PRIORITY, MATCH, \
> -                              ACTIONS, NULL, CTRL_METER, NULL)
> -
> -static struct ovn_lflow *
> -ovn_lflow_find(const struct hmap *lflows, const struct ovn_datapath *od,
> -               enum ovn_stage stage, uint16_t priority,
> -               const char *match, const char *actions, const char *ctrl_meter,
> -               uint32_t hash)
> -{
> -    struct ovn_lflow *lflow;
> -    HMAP_FOR_EACH_WITH_HASH (lflow, hmap_node, hash, lflows) {
> -        if (ovn_lflow_equal(lflow, od, stage, priority, match, actions,
> -                            ctrl_meter)) {
> -            return lflow;
> -        }
> -    }
> -    return NULL;
> -}
> -
> -static void
> -ovn_lflow_destroy(struct hmap *lflows, struct ovn_lflow *lflow)
> -{
> -    if (lflow) {
> -        if (lflows) {
> -            hmap_remove(lflows, &lflow->hmap_node);
> -        }
> -        bitmap_free(lflow->dpg_bitmap);
> -        free(lflow->match);
> -        free(lflow->actions);
> -        free(lflow->io_port);
> -        free(lflow->stage_hint);
> -        free(lflow->ctrl_meter);
> -        struct lflow_ref_node *l;
> -        LIST_FOR_EACH_SAFE (l, ref_list_node, &lflow->referenced_by) {
> -            ovs_list_remove(&l->lflow_list_node);
> -            ovs_list_remove(&l->ref_list_node);
> -            free(l);
> -        }
> -        free(lflow);
> -    }
> -}
> -
> -static void
> -link_ovn_port_to_lflows(struct ovn_port *op, struct ovs_list *lflows)
> -{
> -    struct ovn_lflow *f;
> -    LIST_FOR_EACH (f, list_node, lflows) {
> -        struct lflow_ref_node *lfrn = xmalloc(sizeof *lfrn);
> -        lfrn->lflow = f;
> -        ovs_list_insert(&op->lflows, &lfrn->lflow_list_node);
> -        ovs_list_insert(&f->referenced_by, &lfrn->ref_list_node);
> -    }
> -}
> +thread_local size_t thread_lflow_counter = 0;
>
>  static bool
>  build_dhcpv4_action(struct ovn_port *op, ovs_be32 offer_ip,
> @@ -6578,8 +5927,8 @@ build_dhcpv6_action(struct ovn_port *op, struct in6_addr *offer_ip,
>   * build_lswitch_lflows_admission_control() handles the port security.
>   */
>  static void
> -build_lswitch_port_sec_op(struct ovn_port *op, struct hmap *lflows,
> -                                struct ds *actions, struct ds *match)
> +build_lswitch_port_sec_op(struct ovn_port *op, struct lflow_table *lflows,
> +                          struct ds *actions, struct ds *match)
>  {
>      ovs_assert(op->nbsp);
>
> @@ -6595,13 +5944,13 @@ build_lswitch_port_sec_op(struct ovn_port *op, struct hmap *lflows,
>          ovn_lflow_add_with_lport_and_hint(
>              lflows, op->od, S_SWITCH_IN_CHECK_PORT_SEC,
>              100, ds_cstr(match), REGBIT_PORT_SEC_DROP" = 1; next;",
> -            op->key, &op->nbsp->header_);
> +            op->key, &op->nbsp->header_, op->lflow_ref);
>
>          ds_clear(match);
>          ds_put_format(match, "outport == %s", op->json_key);
>          ovn_lflow_add_with_lport_and_hint(
>              lflows, op->od, S_SWITCH_IN_L2_UNKNOWN, 50, ds_cstr(match),
> -            debug_drop_action(), op->key, &op->nbsp->header_);
> +            debug_drop_action(), op->key, &op->nbsp->header_, op->lflow_ref);
>          return;
>      }
>
> @@ -6617,14 +5966,16 @@ build_lswitch_port_sec_op(struct ovn_port *op, struct hmap *lflows,
>          ovn_lflow_add_with_lport_and_hint(lflows, op->od,
>                                            S_SWITCH_IN_CHECK_PORT_SEC, 70,
>                                            ds_cstr(match), ds_cstr(actions),
> -                                          op->key, &op->nbsp->header_);
> +                                          op->key, &op->nbsp->header_,
> +                                          op->lflow_ref);
>      } else if (queue_id) {
>          ds_put_cstr(actions,
>                      REGBIT_PORT_SEC_DROP" = check_in_port_sec(); next;");
>          ovn_lflow_add_with_lport_and_hint(lflows, op->od,
>                                            S_SWITCH_IN_CHECK_PORT_SEC, 70,
>                                            ds_cstr(match), ds_cstr(actions),
> -                                          op->key, &op->nbsp->header_);
> +                                          op->key, &op->nbsp->header_,
> +                                          op->lflow_ref);
>
>          if (!lsp_is_localnet(op->nbsp) && !op->od->n_localnet_ports) {
>              return;
> @@ -6639,7 +5990,8 @@ build_lswitch_port_sec_op(struct ovn_port *op, struct hmap *lflows,
>              ovn_lflow_add_with_lport_and_hint(lflows, op->od,
>                                                S_SWITCH_OUT_APPLY_PORT_SEC, 100,
>                                                ds_cstr(match), ds_cstr(actions),
> -                                              op->key, &op->nbsp->header_);
> +                                              op->key, &op->nbsp->header_,
> +                                              op->lflow_ref);
>          } else if (op->od->n_localnet_ports) {
>              ds_put_format(match, "outport == %s && inport == %s",
>                            op->od->localnet_ports[0]->json_key,
> @@ -6648,15 +6000,16 @@ build_lswitch_port_sec_op(struct ovn_port *op, struct hmap *lflows,
>                      S_SWITCH_OUT_APPLY_PORT_SEC, 110,
>                      ds_cstr(match), ds_cstr(actions),
>                      op->od->localnet_ports[0]->key,
> -                    &op->od->localnet_ports[0]->nbsp->header_);
> +                    &op->od->localnet_ports[0]->nbsp->header_,
> +                    op->lflow_ref);
>          }
>      }
>  }
>
>  static void
>  build_lswitch_learn_fdb_op(
> -        struct ovn_port *op, struct hmap *lflows,
> -        struct ds *actions, struct ds *match)
> +    struct ovn_port *op, struct lflow_table *lflows,
> +    struct ds *actions, struct ds *match)
>  {
>      ovs_assert(op->nbsp);
>
> @@ -6673,7 +6026,8 @@ build_lswitch_learn_fdb_op(
>          ovn_lflow_add_with_lport_and_hint(lflows, op->od,
>                                            S_SWITCH_IN_LOOKUP_FDB, 100,
>                                            ds_cstr(match), ds_cstr(actions),
> -                                          op->key, &op->nbsp->header_);
> +                                          op->key, &op->nbsp->header_,
> +                                          op->lflow_ref);
>
>          ds_put_cstr(match, " && "REGBIT_LKUP_FDB" == 0");
>          ds_clear(actions);
> @@ -6681,13 +6035,14 @@ build_lswitch_learn_fdb_op(
>          ovn_lflow_add_with_lport_and_hint(lflows, op->od, S_SWITCH_IN_PUT_FDB,
>                                            100, ds_cstr(match),
>                                            ds_cstr(actions), op->key,
> -                                          &op->nbsp->header_);
> +                                          &op->nbsp->header_,
> +                                          op->lflow_ref);
>      }
>  }
>
>  static void
>  build_lswitch_learn_fdb_od(
> -        struct ovn_datapath *od, struct hmap *lflows)
> +    struct ovn_datapath *od, struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbs);
>      ovn_lflow_add(lflows, od, S_SWITCH_IN_LOOKUP_FDB, 0, "1", "next;");
> @@ -6701,7 +6056,7 @@ build_lswitch_learn_fdb_od(
>   *                 (priority 100). */
>  static void
>  build_lswitch_output_port_sec_od(struct ovn_datapath *od,
> -                              struct hmap *lflows)
> +                                 struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbs);
>      ovn_lflow_add(lflows, od, S_SWITCH_OUT_CHECK_PORT_SEC, 100,
> @@ -6719,7 +6074,7 @@ static void
>  skip_port_from_conntrack(const struct ovn_datapath *od, struct ovn_port *op,
>                           bool has_stateful_acl, enum ovn_stage in_stage,
>                           enum ovn_stage out_stage, uint16_t priority,
> -                         struct hmap *lflows)
> +                         struct lflow_table *lflows)
>  {
>      /* Can't use ct() for router ports. Consider the following configuration:
>       * lp1(10.0.0.2) on hostA--ls1--lr0--ls2--lp2(10.0.1.2) on hostB, For a
> @@ -6741,10 +6096,10 @@ skip_port_from_conntrack(const struct ovn_datapath *od, struct ovn_port *op,
>
>      ovn_lflow_add_with_lport_and_hint(lflows, od, in_stage, priority,
>                                        ingress_match, ingress_action,
> -                                      op->key, &op->nbsp->header_);
> +                                      op->key, &op->nbsp->header_, NULL);
>      ovn_lflow_add_with_lport_and_hint(lflows, od, out_stage, priority,
>                                        egress_match, egress_action,
> -                                      op->key, &op->nbsp->header_);
> +                                      op->key, &op->nbsp->header_, NULL);
>
>      free(ingress_match);
>      free(egress_match);
> @@ -6753,7 +6108,7 @@ skip_port_from_conntrack(const struct ovn_datapath *od, struct ovn_port *op,
>  static void
>  build_stateless_filter(const struct ovn_datapath *od,
>                         const struct nbrec_acl *acl,
> -                       struct hmap *lflows)
> +                       struct lflow_table *lflows)
>  {
>      const char *action = REGBIT_ACL_STATELESS" = 1; next;";
>      if (!strcmp(acl->direction, "from-lport")) {
> @@ -6774,7 +6129,7 @@ build_stateless_filter(const struct ovn_datapath *od,
>  static void
>  build_stateless_filters(const struct ovn_datapath *od,
>                          const struct ls_port_group_table *ls_port_groups,
> -                        struct hmap *lflows)
> +                        struct lflow_table *lflows)
>  {
>      for (size_t i = 0; i < od->nbs->n_acls; i++) {
>          const struct nbrec_acl *acl = od->nbs->acls[i];
> @@ -6802,7 +6157,7 @@ build_stateless_filters(const struct ovn_datapath *od,
>  }
>
>  static void
> -build_pre_acls(struct ovn_datapath *od, struct hmap *lflows)
> +build_pre_acls(struct ovn_datapath *od, struct lflow_table *lflows)
>  {
>      /* Ingress and Egress Pre-ACL Table (Priority 0): Packets are
>       * allowed by default. */
> @@ -6821,7 +6176,7 @@ build_ls_stateful_rec_pre_acls(
>      const struct ls_stateful_record *ls_stateful_rec,
>      const struct ovn_datapath *od,
>      const struct ls_port_group_table *ls_port_groups,
> -    struct hmap *lflows)
> +    struct lflow_table *lflows)
>  {
>      /* If there are any stateful ACL rules in this datapath, we may
>       * send IP packets for some (allow) filters through the conntrack action,
> @@ -6942,7 +6297,7 @@ build_empty_lb_event_flow(struct ovn_lb_vip *lb_vip,
>  static void
>  build_interconn_mcast_snoop_flows(struct ovn_datapath *od,
>                                    const struct shash *meter_groups,
> -                                  struct hmap *lflows)
> +                                  struct lflow_table *lflows)
>  {
>      struct mcast_switch_info *mcast_sw_info = &od->mcast_info.sw;
>      if (!mcast_sw_info->enabled
> @@ -6976,7 +6331,7 @@ build_interconn_mcast_snoop_flows(struct ovn_datapath *od,
>
>  static void
>  build_pre_lb(struct ovn_datapath *od, const struct shash *meter_groups,
> -             struct hmap *lflows)
> +             struct lflow_table *lflows)
>  {
>      /* Handle IGMP/MLD packets crossing AZs. */
>      build_interconn_mcast_snoop_flows(od, meter_groups, lflows);
> @@ -7013,7 +6368,7 @@ build_pre_lb(struct ovn_datapath *od, const struct shash *meter_groups,
>  static void
>  build_ls_stateful_rec_pre_lb(const struct ls_stateful_record *ls_stateful_rec,
>                               const struct ovn_datapath *od,
> -                             struct hmap *lflows)
> +                             struct lflow_table *lflows)
>  {
>      for (size_t i = 0; i < od->n_router_ports; i++) {
>          skip_port_from_conntrack(od, od->router_ports[i],
> @@ -7077,7 +6432,7 @@ build_ls_stateful_rec_pre_lb(const struct ls_stateful_record *ls_stateful_rec,
>  static void
>  build_pre_stateful(struct ovn_datapath *od,
>                     const struct chassis_features *features,
> -                   struct hmap *lflows)
> +                   struct lflow_table *lflows)
>  {
>      /* Ingress and Egress pre-stateful Table (Priority 0): Packets are
>       * allowed by default. */
> @@ -7110,7 +6465,7 @@ static void
>  build_acl_hints(const struct ls_stateful_record *ls_stateful_rec,
>                  const struct ovn_datapath *od,
>                  const struct chassis_features *features,
> -                struct hmap *lflows)
> +                struct lflow_table *lflows)
>  {
>      /* This stage builds hints for the IN/OUT_ACL stage. Based on various
>       * combinations of ct flags packets may hit only a subset of the logical
> @@ -7278,7 +6633,7 @@ build_acl_log(struct ds *actions, const struct nbrec_acl *acl,
>  }
>
>  static void
> -consider_acl(struct hmap *lflows, const struct ovn_datapath *od,
> +consider_acl(struct lflow_table *lflows, const struct ovn_datapath *od,
>               const struct nbrec_acl *acl, bool has_stateful,
>               bool ct_masked_mark, const struct shash *meter_groups,
>               uint64_t max_acl_tier, struct ds *match, struct ds *actions)
> @@ -7507,7 +6862,7 @@ ovn_update_ipv6_options(struct hmap *lr_ports)
>  static void
>  build_acl_action_lflows(const struct ls_stateful_record *ls_stateful_rec,
>                          const struct ovn_datapath *od,
> -                        struct hmap *lflows,
> +                        struct lflow_table *lflows,
>                          const char *default_acl_action,
>                          const struct shash *meter_groups,
>                          struct ds *match,
> @@ -7582,7 +6937,8 @@ build_acl_action_lflows(const struct ls_stateful_record *ls_stateful_rec,
>  }
>
>  static void
> -build_acl_log_related_flows(const struct ovn_datapath *od, struct hmap *lflows,
> +build_acl_log_related_flows(const struct ovn_datapath *od,
> +                            struct lflow_table *lflows,
>                              const struct nbrec_acl *acl, bool has_stateful,
>                              bool ct_masked_mark,
>                              const struct shash *meter_groups,
> @@ -7658,7 +7014,7 @@ static void
>  build_acls(const struct ls_stateful_record *ls_stateful_rec,
>             const struct ovn_datapath *od,
>             const struct chassis_features *features,
> -           struct hmap *lflows,
> +           struct lflow_table *lflows,
>             const struct ls_port_group_table *ls_port_groups,
>             const struct shash *meter_groups)
>  {
> @@ -7902,7 +7258,7 @@ build_acls(const struct ls_stateful_record *ls_stateful_rec,
>  }
>
>  static void
> -build_qos(struct ovn_datapath *od, struct hmap *lflows) {
> +build_qos(struct ovn_datapath *od, struct lflow_table *lflows) {
>      struct ds action = DS_EMPTY_INITIALIZER;
>
>      ovn_lflow_add(lflows, od, S_SWITCH_IN_QOS_MARK, 0, "1", "next;");
> @@ -7963,7 +7319,7 @@ build_qos(struct ovn_datapath *od, struct hmap *lflows) {
>  }
>
>  static void
> -build_lb_rules_pre_stateful(struct hmap *lflows,
> +build_lb_rules_pre_stateful(struct lflow_table *lflows,
>                              struct ovn_lb_datapaths *lb_dps,
>                              bool ct_lb_mark,
>                              const struct ovn_datapaths *ls_datapaths,
> @@ -8065,7 +7421,8 @@ build_lb_rules_pre_stateful(struct hmap *lflows,
>   *
>   */
>  static void
> -build_lb_affinity_lr_flows(struct hmap *lflows, const struct ovn_northd_lb *lb,
> +build_lb_affinity_lr_flows(struct lflow_table *lflows,
> +                           const struct ovn_northd_lb *lb,
>                             struct ovn_lb_vip *lb_vip, char *new_lb_match,
>                             char *lb_action, const unsigned long *dp_bitmap,
>                             const struct ovn_datapaths *lr_datapaths)
> @@ -8252,7 +7609,7 @@ build_lb_affinity_lr_flows(struct hmap *lflows, const struct ovn_northd_lb *lb,
>   *
>   */
>  static void
> -build_lb_affinity_ls_flows(struct hmap *lflows,
> +build_lb_affinity_ls_flows(struct lflow_table *lflows,
>                             struct ovn_lb_datapaths *lb_dps,
>                             struct ovn_lb_vip *lb_vip,
>                             const struct ovn_datapaths *ls_datapaths)
> @@ -8396,7 +7753,7 @@ build_lb_affinity_ls_flows(struct hmap *lflows,
>
>  static void
>  build_lswitch_lb_affinity_default_flows(struct ovn_datapath *od,
> -                                        struct hmap *lflows)
> +                                        struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbs);
>      ovn_lflow_add(lflows, od, S_SWITCH_IN_LB_AFF_CHECK, 0, "1", "next;");
> @@ -8405,7 +7762,7 @@ build_lswitch_lb_affinity_default_flows(struct ovn_datapath *od,
>
>  static void
>  build_lrouter_lb_affinity_default_flows(struct ovn_datapath *od,
> -                                        struct hmap *lflows)
> +                                        struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbr);
>      ovn_lflow_add(lflows, od, S_ROUTER_IN_LB_AFF_CHECK, 0, "1", "next;");
> @@ -8413,7 +7770,7 @@ build_lrouter_lb_affinity_default_flows(struct ovn_datapath *od,
>  }
>
>  static void
> -build_lb_rules(struct hmap *lflows, struct ovn_lb_datapaths *lb_dps,
> +build_lb_rules(struct lflow_table *lflows, struct ovn_lb_datapaths *lb_dps,
>                 const struct ovn_datapaths *ls_datapaths,
>                 const struct chassis_features *features, struct ds *match,
>                 struct ds *action, const struct shash *meter_groups,
> @@ -8493,7 +7850,7 @@ build_lb_rules(struct hmap *lflows, struct ovn_lb_datapaths *lb_dps,
>  static void
>  build_stateful(struct ovn_datapath *od,
>                 const struct chassis_features *features,
> -               struct hmap *lflows)
> +               struct lflow_table *lflows)
>  {
>      const char *ct_block_action = features->ct_no_masked_label
>                                    ? "ct_mark.blocked"
> @@ -8544,7 +7901,7 @@ build_stateful(struct ovn_datapath *od,
>  static void
>  build_lb_hairpin(const struct ls_stateful_record *ls_stateful_rec,
>                   const struct ovn_datapath *od,
> -                 struct hmap *lflows)
> +                 struct lflow_table *lflows)
>  {
>      /* Ingress Pre-Hairpin/Nat-Hairpin/Hairpin tables (Priority 0).
>       * Packets that don't need hairpinning should continue processing.
> @@ -8601,7 +7958,7 @@ build_lb_hairpin(const struct ls_stateful_record *ls_stateful_rec,
>  }
>
>  static void
> -build_vtep_hairpin(struct ovn_datapath *od, struct hmap *lflows)
> +build_vtep_hairpin(struct ovn_datapath *od, struct lflow_table *lflows)
>  {
>      if (!od->has_vtep_lports) {
>          /* There is no need in these flows if datapath has no vtep lports. */
> @@ -8649,7 +8006,7 @@ build_vtep_hairpin(struct ovn_datapath *od, struct hmap *lflows)
>
>  /* Build logical flows for the forwarding groups */
>  static void
> -build_fwd_group_lflows(struct ovn_datapath *od, struct hmap *lflows)
> +build_fwd_group_lflows(struct ovn_datapath *od, struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbs);
>      if (!od->nbs->n_forwarding_groups) {
> @@ -8830,7 +8187,8 @@ build_lswitch_rport_arp_req_self_orig_flow(struct ovn_port *op,
>                                          uint32_t priority,
>                                          const struct ovn_datapath *od,
>                                          const struct lr_nat_record *lrnat_rec,
> -                                        struct hmap *lflows)
> +                                        struct lflow_table *lflows,
> +                                        struct lflow_ref *lflow_ref)
>  {
>      struct ds eth_src = DS_EMPTY_INITIALIZER;
>      struct ds match = DS_EMPTY_INITIALIZER;
> @@ -8854,8 +8212,10 @@ build_lswitch_rport_arp_req_self_orig_flow(struct ovn_port *op,
>      ds_put_format(&match,
>                    "eth.src == %s && (arp.op == 1 || rarp.op == 3 || nd_ns)",
>                    ds_cstr(&eth_src));
> -    ovn_lflow_add(lflows, od, S_SWITCH_IN_L2_LKUP, priority, ds_cstr(&match),
> -                  "outport = \""MC_FLOOD_L2"\"; output;");
> +    ovn_lflow_add_with_lflow_ref(lflows, od, S_SWITCH_IN_L2_LKUP, priority,
> +                                 ds_cstr(&match),
> +                                 "outport = \""MC_FLOOD_L2"\"; output;",
> +                                 lflow_ref);
>
>      ds_destroy(&eth_src);
>      ds_destroy(&match);
> @@ -8920,11 +8280,11 @@ lrouter_port_ipv6_reachable(const struct ovn_port *op,
>   * switching domain as regular broadcast.
>   */
>  static void
> -build_lswitch_rport_arp_req_flow(const char *ips, int addr_family,
> -                                 struct ovn_port *patch_op,
> -                                 const struct ovn_datapath *od,
> -                                 uint32_t priority, struct hmap *lflows,
> -                                 const struct ovsdb_idl_row *stage_hint)
> +build_lswitch_rport_arp_req_flow(
> +    const char *ips, int addr_family, struct ovn_port *patch_op,
> +    const struct ovn_datapath *od, uint32_t priority,
> +    struct lflow_table *lflows, const struct ovsdb_idl_row *stage_hint,
> +    struct lflow_ref *lflow_ref)
>  {
>      struct ds match   = DS_EMPTY_INITIALIZER;
>      struct ds actions = DS_EMPTY_INITIALIZER;
> @@ -8938,14 +8298,17 @@ build_lswitch_rport_arp_req_flow(const char *ips, int addr_family,
>          ds_put_format(&actions, "clone {outport = %s; output; }; "
>                                  "outport = \""MC_FLOOD_L2"\"; output;",
>                        patch_op->json_key);
> -        ovn_lflow_add_with_hint(lflows, od, S_SWITCH_IN_L2_LKUP,
> -                                priority, ds_cstr(&match),
> -                                ds_cstr(&actions), stage_hint);
> +        ovn_lflow_add_with_lflow_ref_hint(lflows, od, S_SWITCH_IN_L2_LKUP,
> +                                          priority, ds_cstr(&match),
> +                                          ds_cstr(&actions), stage_hint,
> +                                          lflow_ref);
>      } else {
>          ds_put_format(&actions, "outport = %s; output;", patch_op->json_key);
> -        ovn_lflow_add_with_hint(lflows, od, S_SWITCH_IN_L2_LKUP, priority,
> -                                ds_cstr(&match), ds_cstr(&actions),
> -                                stage_hint);
> +        ovn_lflow_add_with_lflow_ref_hint(lflows, od, S_SWITCH_IN_L2_LKUP,
> +                                          priority, ds_cstr(&match),
> +                                          ds_cstr(&actions),
> +                                          stage_hint,
> +                                          lflow_ref);
>      }
>
>      ds_destroy(&match);
> @@ -8963,7 +8326,7 @@ static void
>  build_lswitch_rport_arp_req_flows(struct ovn_port *op,
>                                    struct ovn_datapath *sw_od,
>                                    struct ovn_port *sw_op,
> -                                  struct hmap *lflows,
> +                                  struct lflow_table *lflows,
>                                    const struct ovsdb_idl_row *stage_hint)
>  {
>      if (!op || !op->nbrp) {
> @@ -8981,12 +8344,12 @@ build_lswitch_rport_arp_req_flows(struct ovn_port *op,
>      for (size_t i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) {
>          build_lswitch_rport_arp_req_flow(
>              op->lrp_networks.ipv4_addrs[i].addr_s, AF_INET, sw_op, sw_od, 80,
> -            lflows, stage_hint);
> +            lflows, stage_hint, sw_op->lflow_ref);
>      }
>      for (size_t i = 0; i < op->lrp_networks.n_ipv6_addrs; i++) {
>          build_lswitch_rport_arp_req_flow(
>              op->lrp_networks.ipv6_addrs[i].addr_s, AF_INET6, sw_op, sw_od, 80,
> -            lflows, stage_hint);
> +            lflows, stage_hint, sw_op->lflow_ref);
>      }
>  }
>
> @@ -9001,7 +8364,8 @@ static void
>  build_lswitch_rport_arp_req_flows_for_lbnats(
>      struct ovn_port *op, const struct lr_stateful_record *lr_stateful_rec,
>      const struct ovn_datapath *sw_od, struct ovn_port *sw_op,
> -    struct hmap *lflows, const struct ovsdb_idl_row *stage_hint)
> +    struct lflow_table *lflows, const struct ovsdb_idl_row *stage_hint,
> +    struct lflow_ref *lflow_ref)
>  {
>      if (!op || !op->nbrp) {
>          return;
> @@ -9030,7 +8394,7 @@ build_lswitch_rport_arp_req_flows_for_lbnats(
>                  lrouter_port_ipv4_reachable(op, ipv4_addr)) {
>                  build_lswitch_rport_arp_req_flow(
>                      ip_addr, AF_INET, sw_op, sw_od, 80, lflows,
> -                    stage_hint);
> +                    stage_hint, lflow_ref);
>              }
>          }
>          SSET_FOR_EACH (ip_addr, &lr_stateful_rec->lb_ips->ips_v6_reachable) {
> @@ -9043,7 +8407,7 @@ build_lswitch_rport_arp_req_flows_for_lbnats(
>                  lrouter_port_ipv6_reachable(op, &ipv6_addr)) {
>                  build_lswitch_rport_arp_req_flow(
>                      ip_addr, AF_INET6, sw_op, sw_od, 80, lflows,
> -                    stage_hint);
> +                    stage_hint, lflow_ref);
>              }
>          }
>      }
> @@ -9058,7 +8422,7 @@ build_lswitch_rport_arp_req_flows_for_lbnats(
>      if (sw_od->n_router_ports != sw_od->nbs->n_ports) {
>          build_lswitch_rport_arp_req_self_orig_flow(op, 75, sw_od,
>                                                     lr_stateful_rec->lrnat_rec,
> -                                                   lflows);
> +                                                   lflows, lflow_ref);
>      }
>
>      for (size_t i = 0; i < lr_stateful_rec->lrnat_rec->n_nat_entries; i++) {
> @@ -9082,14 +8446,14 @@ build_lswitch_rport_arp_req_flows_for_lbnats(
>                                 nat->external_ip)) {
>                  build_lswitch_rport_arp_req_flow(
>                      nat->external_ip, AF_INET6, sw_op, sw_od, 80, lflows,
> -                    stage_hint);
> +                    stage_hint, lflow_ref);
>              }
>          } else {
>              if (!sset_contains(&lr_stateful_rec->lb_ips->ips_v4,
>                                 nat->external_ip)) {
>                  build_lswitch_rport_arp_req_flow(
>                      nat->external_ip, AF_INET, sw_op, sw_od, 80, lflows,
> -                    stage_hint);
> +                    stage_hint, lflow_ref);
>              }
>          }
>      }
> @@ -9116,7 +8480,7 @@ build_lswitch_rport_arp_req_flows_for_lbnats(
>                                 nat->external_ip)) {
>                  build_lswitch_rport_arp_req_flow(
>                      nat->external_ip, AF_INET6, sw_op, sw_od, 80, lflows,
> -                    stage_hint);
> +                    stage_hint, lflow_ref);
>              }
>          } else {
>              if (!lr_stateful_rec ||
> @@ -9124,7 +8488,7 @@ build_lswitch_rport_arp_req_flows_for_lbnats(
>                                 nat->external_ip)) {
>                  build_lswitch_rport_arp_req_flow(
>                      nat->external_ip, AF_INET, sw_op, sw_od, 80, lflows,
> -                    stage_hint);
> +                    stage_hint, lflow_ref);
>              }
>          }
>      }
> @@ -9135,7 +8499,7 @@ build_dhcpv4_options_flows(struct ovn_port *op,
>                             struct lport_addresses *lsp_addrs,
>                             struct ovn_port *inport, bool is_external,
>                             const struct shash *meter_groups,
> -                           struct hmap *lflows)
> +                           struct lflow_table *lflows)
>  {
>      struct ds match = DS_EMPTY_INITIALIZER;
>
> @@ -9166,7 +8530,7 @@ build_dhcpv4_options_flows(struct ovn_port *op,
>                                op->json_key);
>              }
>
> -            ovn_lflow_add_with_hint__(lflows, op->od,
> +            ovn_lflow_add_with_lflow_ref_hint__(lflows, op->od,
>                                        S_SWITCH_IN_DHCP_OPTIONS, 100,
>                                        ds_cstr(&match),
>                                        ds_cstr(&options_action),
> @@ -9174,7 +8538,8 @@ build_dhcpv4_options_flows(struct ovn_port *op,
>                                        copp_meter_get(COPP_DHCPV4_OPTS,
>                                                       op->od->nbs->copp,
>                                                       meter_groups),
> -                                      &op->nbsp->dhcpv4_options->header_);
> +                                      &op->nbsp->dhcpv4_options->header_,
> +                                      op->lflow_ref);
>              ds_clear(&match);
>
>              /* If REGBIT_DHCP_OPTS_RESULT is set, it means the
> @@ -9193,7 +8558,8 @@ build_dhcpv4_options_flows(struct ovn_port *op,
>              ovn_lflow_add_with_lport_and_hint(
>                  lflows, op->od, S_SWITCH_IN_DHCP_RESPONSE, 100,
>                  ds_cstr(&match), ds_cstr(&response_action), inport->key,
> -                &op->nbsp->dhcpv4_options->header_);
> +                &op->nbsp->dhcpv4_options->header_,
> +                op->lflow_ref);
>              ds_destroy(&options_action);
>              ds_destroy(&response_action);
>              ds_destroy(&ipv4_addr_match);
> @@ -9220,7 +8586,8 @@ build_dhcpv4_options_flows(struct ovn_port *op,
>                  ovn_lflow_add_with_lport_and_hint(
>                      lflows, op->od, S_SWITCH_OUT_ACL_EVAL, 34000,
>                      ds_cstr(&match),dhcp_actions, op->key,
> -                    &op->nbsp->dhcpv4_options->header_);
> +                    &op->nbsp->dhcpv4_options->header_,
> +                    op->lflow_ref);
>              }
>              break;
>          }
> @@ -9233,7 +8600,7 @@ build_dhcpv6_options_flows(struct ovn_port *op,
>                             struct lport_addresses *lsp_addrs,
>                             struct ovn_port *inport, bool is_external,
>                             const struct shash *meter_groups,
> -                           struct hmap *lflows)
> +                           struct lflow_table *lflows)
>  {
>      struct ds match = DS_EMPTY_INITIALIZER;
>
> @@ -9255,7 +8622,7 @@ build_dhcpv6_options_flows(struct ovn_port *op,
>                                op->json_key);
>              }
>
> -            ovn_lflow_add_with_hint__(lflows, op->od,
> +            ovn_lflow_add_with_lflow_ref_hint__(lflows, op->od,
>                                        S_SWITCH_IN_DHCP_OPTIONS, 100,
>                                        ds_cstr(&match),
>                                        ds_cstr(&options_action),
> @@ -9263,7 +8630,8 @@ build_dhcpv6_options_flows(struct ovn_port *op,
>                                        copp_meter_get(COPP_DHCPV6_OPTS,
>                                                       op->od->nbs->copp,
>                                                       meter_groups),
> -                                      &op->nbsp->dhcpv6_options->header_);
> +                                      &op->nbsp->dhcpv6_options->header_,
> +                                      op->lflow_ref);
>
>              /* If REGBIT_DHCP_OPTS_RESULT is set to 1, it means the
>               * put_dhcpv6_opts action is successful */
> @@ -9271,7 +8639,7 @@ build_dhcpv6_options_flows(struct ovn_port *op,
>              ovn_lflow_add_with_lport_and_hint(
>                  lflows, op->od, S_SWITCH_IN_DHCP_RESPONSE, 100,
>                  ds_cstr(&match), ds_cstr(&response_action), inport->key,
> -                &op->nbsp->dhcpv6_options->header_);
> +                &op->nbsp->dhcpv6_options->header_, op->lflow_ref);
>              ds_destroy(&options_action);
>              ds_destroy(&response_action);
>
> @@ -9303,7 +8671,8 @@ build_dhcpv6_options_flows(struct ovn_port *op,
>                  ovn_lflow_add_with_lport_and_hint(
>                      lflows, op->od, S_SWITCH_OUT_ACL_EVAL, 34000,
>                      ds_cstr(&match),dhcp6_actions, op->key,
> -                    &op->nbsp->dhcpv6_options->header_);
> +                    &op->nbsp->dhcpv6_options->header_,
> +                    op->lflow_ref);
>              }
>              break;
>          }
> @@ -9314,7 +8683,7 @@ build_dhcpv6_options_flows(struct ovn_port *op,
>  static void
>  build_drop_arp_nd_flows_for_unbound_router_ports(struct ovn_port *op,
>                                                   const struct ovn_port *port,
> -                                                 struct hmap *lflows)
> +                                                 struct lflow_table *lflows)
>  {
>      struct ds match = DS_EMPTY_INITIALIZER;
>
> @@ -9334,7 +8703,7 @@ build_drop_arp_nd_flows_for_unbound_router_ports(struct ovn_port *op,
>                      ovn_lflow_add_with_lport_and_hint(
>                          lflows, op->od, S_SWITCH_IN_EXTERNAL_PORT, 100,
>                          ds_cstr(&match),  debug_drop_action(), port->key,
> -                        &op->nbsp->header_);
> +                        &op->nbsp->header_, op->lflow_ref);
>                  }
>                  for (size_t l = 0; l < rp->lsp_addrs[k].n_ipv6_addrs; l++) {
>                      ds_clear(&match);
> @@ -9350,7 +8719,7 @@ build_drop_arp_nd_flows_for_unbound_router_ports(struct ovn_port *op,
>                      ovn_lflow_add_with_lport_and_hint(
>                          lflows, op->od, S_SWITCH_IN_EXTERNAL_PORT, 100,
>                          ds_cstr(&match), debug_drop_action(), port->key,
> -                        &op->nbsp->header_);
> +                        &op->nbsp->header_, op->lflow_ref);
>                  }
>
>                  ds_clear(&match);
> @@ -9366,7 +8735,8 @@ build_drop_arp_nd_flows_for_unbound_router_ports(struct ovn_port *op,
>                                                    100, ds_cstr(&match),
>                                                    debug_drop_action(),
>                                                    port->key,
> -                                                  &op->nbsp->header_);
> +                                                  &op->nbsp->header_,
> +                                                  op->lflow_ref);
>              }
>          }
>      }
> @@ -9381,7 +8751,7 @@ is_vlan_transparent(const struct ovn_datapath *od)
>
>  static void
>  build_lswitch_lflows_l2_unknown(struct ovn_datapath *od,
> -                                struct hmap *lflows)
> +                                struct lflow_table *lflows)
>  {
>      /* Ingress table 25/26: Destination lookup for unknown MACs. */
>      if (od->has_unknown) {
> @@ -9402,7 +8772,7 @@ static void
>  build_lswitch_lflows_pre_acl_and_acl(
>      struct ovn_datapath *od,
>      const struct chassis_features *features,
> -    struct hmap *lflows,
> +    struct lflow_table *lflows,
>      const struct shash *meter_groups)
>  {
>      ovs_assert(od->nbs);
> @@ -9418,7 +8788,7 @@ build_lswitch_lflows_pre_acl_and_acl(
>   * 100). */
>  static void
>  build_lswitch_lflows_admission_control(struct ovn_datapath *od,
> -                                       struct hmap *lflows)
> +                                       struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbs);
>
> @@ -9453,7 +8823,7 @@ build_lswitch_lflows_admission_control(struct ovn_datapath *od,
>
>  static void
>  build_lswitch_arp_nd_responder_skip_local(struct ovn_port *op,
> -                                          struct hmap *lflows,
> +                                          struct lflow_table *lflows,
>                                            struct ds *match)
>  {
>      ovs_assert(op->nbsp);
> @@ -9465,14 +8835,14 @@ build_lswitch_arp_nd_responder_skip_local(struct ovn_port *op,
>      ovn_lflow_add_with_lport_and_hint(lflows, op->od,
>                                        S_SWITCH_IN_ARP_ND_RSP, 100,
>                                        ds_cstr(match), "next;", op->key,
> -                                      &op->nbsp->header_);
> +                                      &op->nbsp->header_, op->lflow_ref);
>  }
>
>  /* Ingress table 19: ARP/ND responder, reply for known IPs.
>   * (priority 50). */
>  static void
>  build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
> -                                         struct hmap *lflows,
> +                                         struct lflow_table *lflows,
>                                           const struct hmap *ls_ports,
>                                           const struct shash *meter_groups,
>                                           struct ds *actions,
> @@ -9557,7 +8927,8 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
>                                                S_SWITCH_IN_ARP_ND_RSP, 100,
>                                                ds_cstr(match),
>                                                ds_cstr(actions), vparent,
> -                                              &vp->nbsp->header_);
> +                                              &vp->nbsp->header_,
> +                                              op->lflow_ref);
>          }
>
>          free(tokstr);
> @@ -9601,11 +8972,12 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
>                      "output;",
>                      op->lsp_addrs[i].ea_s, op->lsp_addrs[i].ea_s,
>                      op->lsp_addrs[i].ipv4_addrs[j].addr_s);
> -                ovn_lflow_add_with_hint(lflows, op->od,
> -                                        S_SWITCH_IN_ARP_ND_RSP, 50,
> -                                        ds_cstr(match),
> -                                        ds_cstr(actions),
> -                                        &op->nbsp->header_);
> +                ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
> +                                                  S_SWITCH_IN_ARP_ND_RSP, 50,
> +                                                  ds_cstr(match),
> +                                                  ds_cstr(actions),
> +                                                  &op->nbsp->header_,
> +                                                  op->lflow_ref);
>
>                  /* Do not reply to an ARP request from the port that owns
>                   * the address (otherwise a DHCP client that ARPs to check
> @@ -9624,7 +8996,8 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
>                                                    S_SWITCH_IN_ARP_ND_RSP,
>                                                    100, ds_cstr(match),
>                                                    "next;", op->key,
> -                                                  &op->nbsp->header_);
> +                                                  &op->nbsp->header_,
> +                                                  op->lflow_ref);
>              }
>
>              /* For ND solicitations, we need to listen for both the
> @@ -9654,15 +9027,16 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
>                          op->lsp_addrs[i].ipv6_addrs[j].addr_s,
>                          op->lsp_addrs[i].ipv6_addrs[j].addr_s,
>                          op->lsp_addrs[i].ea_s);
> -                ovn_lflow_add_with_hint__(lflows, op->od,
> -                                          S_SWITCH_IN_ARP_ND_RSP, 50,
> -                                          ds_cstr(match),
> -                                          ds_cstr(actions),
> -                                          NULL,
> -                                          copp_meter_get(COPP_ND_NA,
> -                                              op->od->nbs->copp,
> -                                              meter_groups),
> -                                          &op->nbsp->header_);
> +                ovn_lflow_add_with_lflow_ref_hint__(lflows, op->od,
> +                                                    S_SWITCH_IN_ARP_ND_RSP, 50,
> +                                                    ds_cstr(match),
> +                                                    ds_cstr(actions),
> +                                                    NULL,
> +                                                    copp_meter_get(COPP_ND_NA,
> +                                                        op->od->nbs->copp,
> +                                                        meter_groups),
> +                                                    &op->nbsp->header_,
> +                                                    op->lflow_ref);
>
>                  /* Do not reply to a solicitation from the port that owns
>                   * the address (otherwise DAD detection will fail). */
> @@ -9671,7 +9045,8 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
>                                                    S_SWITCH_IN_ARP_ND_RSP,
>                                                    100, ds_cstr(match),
>                                                    "next;", op->key,
> -                                                  &op->nbsp->header_);
> +                                                  &op->nbsp->header_,
> +                                                  op->lflow_ref);
>              }
>          }
>      }
> @@ -9717,8 +9092,12 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
>                  ea_s,
>                  ea_s);
>
> -            ovn_lflow_add_with_hint(lflows, op->od, S_SWITCH_IN_ARP_ND_RSP,
> -                30, ds_cstr(match), ds_cstr(actions), &op->nbsp->header_);
> +            ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
> +                                              S_SWITCH_IN_ARP_ND_RSP,
> +                                              30, ds_cstr(match),
> +                                              ds_cstr(actions),
> +                                              &op->nbsp->header_,
> +                                              op->lflow_ref);
>          }
>
>          /* Add IPv6 NDP responses.
> @@ -9761,15 +9140,16 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
>                      lsp_is_router(op->nbsp) ? "nd_na_router" : "nd_na",
>                      ea_s,
>                      ea_s);
> -            ovn_lflow_add_with_hint__(lflows, op->od,
> -                                      S_SWITCH_IN_ARP_ND_RSP, 30,
> -                                      ds_cstr(match),
> -                                      ds_cstr(actions),
> -                                      NULL,
> -                                      copp_meter_get(COPP_ND_NA,
> -                                          op->od->nbs->copp,
> -                                          meter_groups),
> -                                      &op->nbsp->header_);
> +            ovn_lflow_add_with_lflow_ref_hint__(lflows, op->od,
> +                                                S_SWITCH_IN_ARP_ND_RSP, 30,
> +                                                ds_cstr(match),
> +                                                ds_cstr(actions),
> +                                                NULL,
> +                                                copp_meter_get(COPP_ND_NA,
> +                                                    op->od->nbs->copp,
> +                                                    meter_groups),
> +                                                &op->nbsp->header_,
> +                                                op->lflow_ref);
>              ds_destroy(&ip6_dst_match);
>              ds_destroy(&nd_target_match);
>          }
> @@ -9780,7 +9160,7 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
>   * (priority 0)*/
>  static void
>  build_lswitch_arp_nd_responder_default(struct ovn_datapath *od,
> -                                       struct hmap *lflows)
> +                                       struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbs);
>      ovn_lflow_add(lflows, od, S_SWITCH_IN_ARP_ND_RSP, 0, "1", "next;");
> @@ -9791,7 +9171,7 @@ build_lswitch_arp_nd_responder_default(struct ovn_datapath *od,
>  static void
>  build_lswitch_arp_nd_service_monitor(const struct ovn_northd_lb *lb,
>                                       const struct hmap *ls_ports,
> -                                     struct hmap *lflows,
> +                                     struct lflow_table *lflows,
>                                       struct ds *actions,
>                                       struct ds *match)
>  {
> @@ -9867,7 +9247,7 @@ build_lswitch_arp_nd_service_monitor(const struct ovn_northd_lb *lb,
>   * priority 100 flows. */
>  static void
>  build_lswitch_dhcp_options_and_response(struct ovn_port *op,
> -                                        struct hmap *lflows,
> +                                        struct lflow_table *lflows,
>                                          const struct shash *meter_groups)
>  {
>      ovs_assert(op->nbsp);
> @@ -9922,7 +9302,7 @@ build_lswitch_dhcp_options_and_response(struct ovn_port *op,
>   * (priority 0). */
>  static void
>  build_lswitch_dhcp_and_dns_defaults(struct ovn_datapath *od,
> -                                        struct hmap *lflows)
> +                                        struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbs);
>      ovn_lflow_add(lflows, od, S_SWITCH_IN_DHCP_OPTIONS, 0, "1", "next;");
> @@ -9937,7 +9317,7 @@ build_lswitch_dhcp_and_dns_defaults(struct ovn_datapath *od,
>  */
>  static void
>  build_lswitch_dns_lookup_and_response(struct ovn_datapath *od,
> -                                      struct hmap *lflows,
> +                                      struct lflow_table *lflows,
>                                        const struct shash *meter_groups)
>  {
>      ovs_assert(od->nbs);
> @@ -9968,7 +9348,7 @@ build_lswitch_dns_lookup_and_response(struct ovn_datapath *od,
>   * binding the external ports. */
>  static void
>  build_lswitch_external_port(struct ovn_port *op,
> -                            struct hmap *lflows)
> +                            struct lflow_table *lflows)
>  {
>      ovs_assert(op->nbsp);
>      if (!lsp_is_external(op->nbsp)) {
> @@ -9984,7 +9364,7 @@ build_lswitch_external_port(struct ovn_port *op,
>   * (priority 70 - 100). */
>  static void
>  build_lswitch_destination_lookup_bmcast(struct ovn_datapath *od,
> -                                        struct hmap *lflows,
> +                                        struct lflow_table *lflows,
>                                          struct ds *actions,
>                                          const struct shash *meter_groups)
>  {
> @@ -10077,7 +9457,7 @@ build_lswitch_destination_lookup_bmcast(struct ovn_datapath *od,
>   * (priority 90). */
>  static void
>  build_lswitch_ip_mcast_igmp_mld(struct ovn_igmp_group *igmp_group,
> -                                struct hmap *lflows,
> +                                struct lflow_table *lflows,
>                                  struct ds *actions,
>                                  struct ds *match)
>  {
> @@ -10157,7 +9537,8 @@ build_lswitch_ip_mcast_igmp_mld(struct ovn_igmp_group *igmp_group,
>
>  /* Ingress table 25: Destination lookup, unicast handling (priority 50), */
>  static void
> -build_lswitch_ip_unicast_lookup(struct ovn_port *op, struct hmap *lflows,
> +build_lswitch_ip_unicast_lookup(struct ovn_port *op,
> +                                struct lflow_table *lflows,
>                                  struct ds *actions, struct ds *match)
>  {
>      ovs_assert(op->nbsp);
> @@ -10190,10 +9571,12 @@ build_lswitch_ip_unicast_lookup(struct ovn_port *op, struct hmap *lflows,
>
>              ds_clear(actions);
>              ds_put_format(actions, action, op->json_key);
> -            ovn_lflow_add_with_hint(lflows, op->od, S_SWITCH_IN_L2_LKUP,
> -                                    50, ds_cstr(match),
> -                                    ds_cstr(actions),
> -                                    &op->nbsp->header_);
> +            ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
> +                                              S_SWITCH_IN_L2_LKUP,
> +                                              50, ds_cstr(match),
> +                                              ds_cstr(actions),
> +                                              &op->nbsp->header_,
> +                                              op->lflow_ref);
>          } else if (!strcmp(op->nbsp->addresses[i], "unknown")) {
>              continue;
>          } else if (is_dynamic_lsp_address(op->nbsp->addresses[i])) {
> @@ -10208,10 +9591,12 @@ build_lswitch_ip_unicast_lookup(struct ovn_port *op, struct hmap *lflows,
>
>              ds_clear(actions);
>              ds_put_format(actions, action, op->json_key);
> -            ovn_lflow_add_with_hint(lflows, op->od, S_SWITCH_IN_L2_LKUP,
> -                                    50, ds_cstr(match),
> -                                    ds_cstr(actions),
> -                                    &op->nbsp->header_);
> +            ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
> +                                              S_SWITCH_IN_L2_LKUP,
> +                                              50, ds_cstr(match),
> +                                              ds_cstr(actions),
> +                                              &op->nbsp->header_,
> +                                              op->lflow_ref);
>          } else if (!strcmp(op->nbsp->addresses[i], "router")) {
>              if (!op->peer || !op->peer->nbrp
>                  || !ovs_scan(op->peer->nbrp->mac,
> @@ -10263,10 +9648,11 @@ build_lswitch_ip_unicast_lookup(struct ovn_port *op, struct hmap *lflows,
>
>              ds_clear(actions);
>              ds_put_format(actions, action, op->json_key);
> -            ovn_lflow_add_with_hint(lflows, op->od,
> -                                    S_SWITCH_IN_L2_LKUP, 50,
> -                                    ds_cstr(match), ds_cstr(actions),
> -                                    &op->nbsp->header_);
> +            ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
> +                                              S_SWITCH_IN_L2_LKUP, 50,
> +                                              ds_cstr(match), ds_cstr(actions),
> +                                              &op->nbsp->header_,
> +                                              op->lflow_ref);
>          } else {
>              static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1);
>
> @@ -10281,7 +9667,8 @@ build_lswitch_ip_unicast_lookup(struct ovn_port *op, struct hmap *lflows,
>  static void
>  build_lswitch_ip_unicast_lookup_for_nats(
>      struct ovn_port *op, const struct lr_stateful_record *lr_stateful_rec,
> -    struct hmap *lflows, struct ds *match, struct ds *actions)
> +    struct lflow_table *lflows, struct ds *match, struct ds *actions,
> +    struct lflow_ref *lflow_ref)
>  {
>      ovs_assert(op->nbsp);
>
> @@ -10317,11 +9704,12 @@ build_lswitch_ip_unicast_lookup_for_nats(
>
>              ds_clear(actions);
>              ds_put_format(actions, action, op->json_key);
> -            ovn_lflow_add_with_hint(lflows, op->od,
> +            ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
>                                      S_SWITCH_IN_L2_LKUP, 50,
>                                      ds_cstr(match),
>                                      ds_cstr(actions),
> -                                    &op->nbsp->header_);
> +                                    &op->nbsp->header_,
> +                                    lflow_ref);
>          }
>      }
>  }
> @@ -10561,7 +9949,7 @@ get_outport_for_routing_policy_nexthop(struct ovn_datapath *od,
>  }
>
>  static void
> -build_routing_policy_flow(struct hmap *lflows, struct ovn_datapath *od,
> +build_routing_policy_flow(struct lflow_table *lflows, struct ovn_datapath *od,
>                            const struct hmap *lr_ports,
>                            const struct nbrec_logical_router_policy *rule,
>                            const struct ovsdb_idl_row *stage_hint)
> @@ -10626,7 +10014,8 @@ build_routing_policy_flow(struct hmap *lflows, struct ovn_datapath *od,
>  }
>
>  static void
> -build_ecmp_routing_policy_flows(struct hmap *lflows, struct ovn_datapath *od,
> +build_ecmp_routing_policy_flows(struct lflow_table *lflows,
> +                                struct ovn_datapath *od,
>                                  const struct hmap *lr_ports,
>                                  const struct nbrec_logical_router_policy *rule,
>                                  uint16_t ecmp_group_id)
> @@ -10762,7 +10151,7 @@ get_route_table_id(struct simap *route_tables, const char *route_table_name)
>  }
>
>  static void
> -build_route_table_lflow(struct ovn_datapath *od, struct hmap *lflows,
> +build_route_table_lflow(struct ovn_datapath *od, struct lflow_table *lflows,
>                          struct nbrec_logical_router_port *lrp,
>                          struct simap *route_tables)
>  {
> @@ -11173,7 +10562,7 @@ find_static_route_outport(struct ovn_datapath *od, const struct hmap *lr_ports,
>  }
>
>  static void
> -add_ecmp_symmetric_reply_flows(struct hmap *lflows,
> +add_ecmp_symmetric_reply_flows(struct lflow_table *lflows,
>                                 struct ovn_datapath *od,
>                                 bool ct_masked_mark,
>                                 const char *port_ip,
> @@ -11338,7 +10727,7 @@ add_ecmp_symmetric_reply_flows(struct hmap *lflows,
>  }
>
>  static void
> -build_ecmp_route_flow(struct hmap *lflows, struct ovn_datapath *od,
> +build_ecmp_route_flow(struct lflow_table *lflows, struct ovn_datapath *od,
>                        bool ct_masked_mark, const struct hmap *lr_ports,
>                        struct ecmp_groups_node *eg)
>
> @@ -11425,12 +10814,12 @@ build_ecmp_route_flow(struct hmap *lflows, struct ovn_datapath *od,
>  }
>
>  static void
> -add_route(struct hmap *lflows, struct ovn_datapath *od,
> +add_route(struct lflow_table *lflows, struct ovn_datapath *od,
>            const struct ovn_port *op, const char *lrp_addr_s,
>            const char *network_s, int plen, const char *gateway,
>            bool is_src_route, const uint32_t rtb_id,
>            const struct ovsdb_idl_row *stage_hint, bool is_discard_route,
> -          int ofs)
> +          int ofs, struct lflow_ref *lflow_ref)
>  {
>      bool is_ipv4 = strchr(network_s, '.') ? true : false;
>      struct ds match = DS_EMPTY_INITIALIZER;
> @@ -11473,14 +10862,17 @@ add_route(struct hmap *lflows, struct ovn_datapath *od,
>          ds_put_format(&actions, "ip.ttl--; %s", ds_cstr(&common_actions));
>      }
>
> -    ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_IP_ROUTING, priority,
> -                            ds_cstr(&match), ds_cstr(&actions),
> -                            stage_hint);
> +    ovn_lflow_add_with_lflow_ref_hint(lflows, od, S_ROUTER_IN_IP_ROUTING,
> +                                      priority, ds_cstr(&match),
> +                                      ds_cstr(&actions), stage_hint,
> +                                      lflow_ref);
>      if (op && op->has_bfd) {
>          ds_put_format(&match, " && udp.dst == 3784");
> -        ovn_lflow_add_with_hint(lflows, op->od, S_ROUTER_IN_IP_ROUTING,
> -                                priority + 1, ds_cstr(&match),
> -                                ds_cstr(&common_actions), stage_hint);
> +        ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
> +                                          S_ROUTER_IN_IP_ROUTING,
> +                                          priority + 1, ds_cstr(&match),
> +                                          ds_cstr(&common_actions),
> +                                          stage_hint, lflow_ref);
>      }
>      ds_destroy(&match);
>      ds_destroy(&common_actions);
> @@ -11488,7 +10880,7 @@ add_route(struct hmap *lflows, struct ovn_datapath *od,
>  }
>
>  static void
> -build_static_route_flow(struct hmap *lflows, struct ovn_datapath *od,
> +build_static_route_flow(struct lflow_table *lflows, struct ovn_datapath *od,
>                          const struct hmap *lr_ports,
>                          const struct parsed_route *route_)
>  {
> @@ -11514,7 +10906,7 @@ build_static_route_flow(struct hmap *lflows, struct ovn_datapath *od,
>      add_route(lflows, route_->is_discard_route ? od : out_port->od, out_port,
>                lrp_addr_s, prefix_s, route_->plen, route->nexthop,
>                route_->is_src_route, route_->route_table_id, &route->header_,
> -              route_->is_discard_route, ofs);
> +              route_->is_discard_route, ofs, NULL);
>
>      free(prefix_s);
>  }
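
Note on the trailing NULL above: add_route() now takes a 'struct lflow_ref *'
and simply forwards it to ovn_lflow_add_with_lflow_ref_hint(), so callers that
do not track per-resource references (the static-route path here, and the
connected routes built from LRPs further down) pass NULL.  Assuming the usual
wrapper layering in lflow-mgr.h, which is not quoted in these hunks, the old
entry point is just the ref-aware one with a NULL reference, along these lines
(hypothetical sketch, not the real macro text):

    #define ovn_lflow_add_with_hint(TABLE, OD, STAGE, PRIO, MATCH, ACTIONS, \
                                    HINT)                                   \
        ovn_lflow_add_with_lflow_ref_hint(TABLE, OD, STAGE, PRIO, MATCH,    \
                                          ACTIONS, HINT, NULL)

If that holds, passing NULL keeps the pre-patch behaviour for these flows.
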
> @@ -11577,7 +10969,7 @@ struct lrouter_nat_lb_flows_ctx {
>
>      int prio;
>
> -    struct hmap *lflows;
> +    struct lflow_table *lflows;
>      const struct shash *meter_groups;
>  };
>
> @@ -11709,7 +11101,7 @@ build_lrouter_nat_flows_for_lb(
>      struct ovn_northd_lb_vip *vips_nb,
>      const struct ovn_datapaths *lr_datapaths,
>      const struct lr_stateful_table *lr_stateful_table,
> -    struct hmap *lflows,
> +    struct lflow_table *lflows,
>      struct ds *match, struct ds *action,
>      const struct shash *meter_groups,
>      const struct chassis_features *features,
> @@ -11878,7 +11270,7 @@ build_lrouter_nat_flows_for_lb(
>
>  static void
>  build_lswitch_flows_for_lb(struct ovn_lb_datapaths *lb_dps,
> -                           struct hmap *lflows,
> +                           struct lflow_table *lflows,
>                             const struct shash *meter_groups,
>                             const struct ovn_datapaths *ls_datapaths,
>                             const struct chassis_features *features,
> @@ -11939,7 +11331,7 @@ build_lswitch_flows_for_lb(struct ovn_lb_datapaths *lb_dps,
>   */
>  static void
>  build_lrouter_defrag_flows_for_lb(struct ovn_lb_datapaths *lb_dps,
> -                                  struct hmap *lflows,
> +                                  struct lflow_table *lflows,
>                                    const struct ovn_datapaths *lr_datapaths,
>                                    struct ds *match)
>  {
> @@ -11965,7 +11357,7 @@ build_lrouter_defrag_flows_for_lb(struct ovn_lb_datapaths *lb_dps,
>
>  static void
>  build_lrouter_flows_for_lb(struct ovn_lb_datapaths *lb_dps,
> -                           struct hmap *lflows,
> +                           struct lflow_table *lflows,
>                             const struct shash *meter_groups,
>                             const struct ovn_datapaths *lr_datapaths,
>                             const struct lr_stateful_table *lr_stateful_table,
> @@ -12123,7 +11515,7 @@ lrouter_dnat_and_snat_is_stateless(const struct nbrec_nat *nat)
>   */
>  static inline void
>  lrouter_nat_add_ext_ip_match(const struct ovn_datapath *od,
> -                             struct hmap *lflows, struct ds *match,
> +                             struct lflow_table *lflows, struct ds *match,
>                               const struct nbrec_nat *nat,
>                               bool is_v6, bool is_src, int cidr_bits)
>  {
> @@ -12190,7 +11582,7 @@ build_lrouter_arp_flow(const struct ovn_datapath *od, struct ovn_port *op,
>                         const char *ip_address, const char *eth_addr,
>                         struct ds *extra_match, bool drop, uint16_t priority,
>                         const struct ovsdb_idl_row *hint,
> -                       struct hmap *lflows)
> +                       struct lflow_table *lflows)
>  {
>      struct ds match = DS_EMPTY_INITIALIZER;
>      struct ds actions = DS_EMPTY_INITIALIZER;
> @@ -12240,7 +11632,8 @@ build_lrouter_nd_flow(const struct ovn_datapath *od, struct ovn_port *op,
>                        const char *sn_ip_address, const char *eth_addr,
>                        struct ds *extra_match, bool drop, uint16_t priority,
>                        const struct ovsdb_idl_row *hint,
> -                      struct hmap *lflows, const struct shash *meter_groups)
> +                      struct lflow_table *lflows,
> +                      const struct shash *meter_groups)
>  {
>      struct ds match = DS_EMPTY_INITIALIZER;
>      struct ds actions = DS_EMPTY_INITIALIZER;
> @@ -12291,7 +11684,7 @@ build_lrouter_nd_flow(const struct ovn_datapath *od, struct ovn_port *op,
>  static void
>  build_lrouter_nat_arp_nd_flow(const struct ovn_datapath *od,
>                                struct ovn_nat *nat_entry,
> -                              struct hmap *lflows,
> +                              struct lflow_table *lflows,
>                                const struct shash *meter_groups)
>  {
>      struct lport_addresses *ext_addrs = &nat_entry->ext_addrs;
> @@ -12314,7 +11707,7 @@ build_lrouter_nat_arp_nd_flow(const struct ovn_datapath *od,
>  static void
>  build_lrouter_port_nat_arp_nd_flow(struct ovn_port *op,
>                                     struct ovn_nat *nat_entry,
> -                                   struct hmap *lflows,
> +                                   struct lflow_table *lflows,
>                                     const struct shash *meter_groups)
>  {
>      struct lport_addresses *ext_addrs = &nat_entry->ext_addrs;
> @@ -12388,7 +11781,7 @@ build_lrouter_drop_own_dest(struct ovn_port *op,
>                              const struct lr_stateful_record *lr_stateful_rec,
>                              enum ovn_stage stage,
>                              uint16_t priority, bool drop_snat_ip,
> -                            struct hmap *lflows)
> +                            struct lflow_table *lflows)
>  {
>      struct ds match_ips = DS_EMPTY_INITIALIZER;
>
> @@ -12453,7 +11846,7 @@ build_lrouter_drop_own_dest(struct ovn_port *op,
>  }
>
>  static void
> -build_lrouter_force_snat_flows(struct hmap *lflows,
> +build_lrouter_force_snat_flows(struct lflow_table *lflows,
>                                 const struct ovn_datapath *od,
>                                 const char *ip_version, const char *ip_addr,
>                                 const char *context)
> @@ -12484,7 +11877,7 @@ build_lrouter_force_snat_flows(struct hmap *lflows,
>   */
>  static void
>  build_lrouter_icmp_packet_toobig_admin_flows(
> -        struct ovn_port *op, struct hmap *lflows,
> +        struct ovn_port *op, struct lflow_table *lflows,
>          struct ds *match, struct ds *actions)
>  {
>      ovs_assert(op->nbrp);
> @@ -12509,7 +11902,7 @@ build_lrouter_icmp_packet_toobig_admin_flows(
>
>  static void
>  build_lswitch_icmp_packet_toobig_admin_flows(
> -        struct ovn_port *op, struct hmap *lflows,
> +        struct ovn_port *op, struct lflow_table *lflows,
>          struct ds *match, struct ds *actions)
>  {
>      ovs_assert(op->nbsp);
> @@ -12548,7 +11941,7 @@ build_lswitch_icmp_packet_toobig_admin_flows(
>  static void
>  build_lrouter_force_snat_flows_op(struct ovn_port *op,
>                                    const struct lr_nat_record *lrnat_rec,
> -                                  struct hmap *lflows,
> +                                  struct lflow_table *lflows,
>                                    struct ds *match, struct ds *actions)
>  {
>      ovs_assert(op->nbrp);
> @@ -12620,7 +12013,7 @@ build_lrouter_force_snat_flows_op(struct ovn_port *op,
>  }
>
>  static void
> -build_lrouter_bfd_flows(struct hmap *lflows, struct ovn_port *op,
> +build_lrouter_bfd_flows(struct lflow_table *lflows, struct ovn_port *op,
>                          const struct shash *meter_groups)
>  {
>      if (!op->has_bfd) {
> @@ -12675,7 +12068,7 @@ build_lrouter_bfd_flows(struct hmap *lflows, struct ovn_port *op,
>   */
>  static void
>  build_adm_ctrl_flows_for_lrouter(
> -        struct ovn_datapath *od, struct hmap *lflows)
> +        struct ovn_datapath *od, struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbr);
>
> @@ -12726,7 +12119,7 @@ build_gateway_get_l2_hdr_size(struct ovn_port *op)
>   * function.
>   */
>  static void OVS_PRINTF_FORMAT(9, 10)
> -build_gateway_mtu_flow(struct hmap *lflows, struct ovn_port *op,
> +build_gateway_mtu_flow(struct lflow_table *lflows, struct ovn_port *op,
>                         enum ovn_stage stage, uint16_t prio_low,
>                         uint16_t prio_high, struct ds *match,
>                         struct ds *actions, const struct ovsdb_idl_row *hint,
> @@ -12787,7 +12180,7 @@ consider_l3dgw_port_is_centralized(struct ovn_port *op)
>   */
>  static void
>  build_adm_ctrl_flows_for_lrouter_port(
> -        struct ovn_port *op, struct hmap *lflows,
> +        struct ovn_port *op, struct lflow_table *lflows,
>          struct ds *match, struct ds *actions)
>  {
>      ovs_assert(op->nbrp);
> @@ -12841,7 +12234,7 @@ build_adm_ctrl_flows_for_lrouter_port(
>   * lflows for logical routers. */
>  static void
>  build_neigh_learning_flows_for_lrouter(
> -        struct ovn_datapath *od, struct hmap *lflows,
> +        struct ovn_datapath *od, struct lflow_table *lflows,
>          struct ds *match, struct ds *actions,
>          const struct shash *meter_groups)
>  {
> @@ -12972,7 +12365,7 @@ build_neigh_learning_flows_for_lrouter(
>   * for logical router ports. */
>  static void
>  build_neigh_learning_flows_for_lrouter_port(
> -        struct ovn_port *op, struct hmap *lflows,
> +        struct ovn_port *op, struct lflow_table *lflows,
>          struct ds *match, struct ds *actions)
>  {
>      ovs_assert(op->nbrp);
> @@ -13034,7 +12427,7 @@ build_neigh_learning_flows_for_lrouter_port(
>   * Adv (RA) options and response. */
>  static void
>  build_ND_RA_flows_for_lrouter_port(
> -        struct ovn_port *op, struct hmap *lflows,
> +        struct ovn_port *op, struct lflow_table *lflows,
>          struct ds *match, struct ds *actions,
>          const struct shash *meter_groups)
>  {
> @@ -13149,7 +12542,8 @@ build_ND_RA_flows_for_lrouter_port(
>  /* Logical router ingress table ND_RA_OPTIONS & ND_RA_RESPONSE: RS
>   * responder, by default goto next. (priority 0). */
>  static void
> -build_ND_RA_flows_for_lrouter(struct ovn_datapath *od, struct hmap *lflows)
> +build_ND_RA_flows_for_lrouter(struct ovn_datapath *od,
> +                              struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbr);
>      ovn_lflow_add(lflows, od, S_ROUTER_IN_ND_RA_OPTIONS, 0, "1", "next;");
> @@ -13160,7 +12554,7 @@ build_ND_RA_flows_for_lrouter(struct ovn_datapath *od, struct hmap *lflows)
>   * by default goto next. (priority 0). */
>  static void
>  build_ip_routing_pre_flows_for_lrouter(struct ovn_datapath *od,
> -                                       struct hmap *lflows)
> +                                       struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbr);
>      ovn_lflow_add(lflows, od, S_ROUTER_IN_IP_ROUTING_PRE, 0, "1",
> @@ -13188,21 +12582,23 @@ build_ip_routing_pre_flows_for_lrouter(struct ovn_datapath *od,
>   */
>  static void
>  build_ip_routing_flows_for_lrp(
> -        struct ovn_port *op, struct hmap *lflows)
> +        struct ovn_port *op, struct lflow_table *lflows)
>  {
>      ovs_assert(op->nbrp);
>      for (int i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) {
>          add_route(lflows, op->od, op, op->lrp_networks.ipv4_addrs[i].addr_s,
>                    op->lrp_networks.ipv4_addrs[i].network_s,
>                    op->lrp_networks.ipv4_addrs[i].plen, NULL, false, 0,
> -                  &op->nbrp->header_, false, ROUTE_PRIO_OFFSET_CONNECTED);
> +                  &op->nbrp->header_, false, ROUTE_PRIO_OFFSET_CONNECTED,
> +                  NULL);
>      }
>
>      for (int i = 0; i < op->lrp_networks.n_ipv6_addrs; i++) {
>          add_route(lflows, op->od, op, op->lrp_networks.ipv6_addrs[i].addr_s,
>                    op->lrp_networks.ipv6_addrs[i].network_s,
>                    op->lrp_networks.ipv6_addrs[i].plen, NULL, false, 0,
> -                  &op->nbrp->header_, false, ROUTE_PRIO_OFFSET_CONNECTED);
> +                  &op->nbrp->header_, false, ROUTE_PRIO_OFFSET_CONNECTED,
> +                  NULL);
>      }
>  }
>
> @@ -13215,8 +12611,9 @@ build_ip_routing_flows_for_lrp(
>   */
>  static void
>  build_ip_routing_flows_for_router_type_lsp(
> -        struct ovn_port *op, const struct lr_stateful_table *lr_stateful_table,
> -        const struct hmap *lr_ports, struct hmap *lflows)
> +    struct ovn_port *op, const struct lr_stateful_table *lr_stateful_table,
> +    const struct hmap *lr_ports, struct lflow_table *lflows,
> +    struct lflow_ref *lflow_ref)
>  {
>      ovs_assert(op->nbsp);
>      if (!lsp_is_router(op->nbsp)) {
> @@ -13252,7 +12649,8 @@ build_ip_routing_flows_for_router_type_lsp(
>                              laddrs->ipv4_addrs[k].network_s,
>                              laddrs->ipv4_addrs[k].plen, NULL, false, 0,
>                              &peer->nbrp->header_, false,
> -                            ROUTE_PRIO_OFFSET_CONNECTED);
> +                            ROUTE_PRIO_OFFSET_CONNECTED,
> +                            lflow_ref);
>                  }
>              }
>              destroy_routable_addresses(&ra);
> @@ -13263,7 +12661,7 @@ build_ip_routing_flows_for_router_type_lsp(
>  static void
>  build_static_route_flows_for_lrouter(
>          struct ovn_datapath *od, const struct chassis_features *features,
> -        struct hmap *lflows, const struct hmap *lr_ports,
> +        struct lflow_table *lflows, const struct hmap *lr_ports,
>          const struct hmap *bfd_connections)
>  {
>      ovs_assert(od->nbr);
> @@ -13327,7 +12725,7 @@ build_static_route_flows_for_lrouter(
>   */
>  static void
>  build_mcast_lookup_flows_for_lrouter(
> -        struct ovn_datapath *od, struct hmap *lflows,
> +        struct ovn_datapath *od, struct lflow_table *lflows,
>          struct ds *match, struct ds *actions)
>  {
>      ovs_assert(od->nbr);
> @@ -13428,7 +12826,7 @@ build_mcast_lookup_flows_for_lrouter(
>   * advances to the next table for ARP/ND resolution. */
>  static void
>  build_ingress_policy_flows_for_lrouter(
> -        struct ovn_datapath *od, struct hmap *lflows,
> +        struct ovn_datapath *od, struct lflow_table *lflows,
>          const struct hmap *lr_ports)
>  {
>      ovs_assert(od->nbr);
> @@ -13462,7 +12860,7 @@ build_ingress_policy_flows_for_lrouter(
>  /* Local router ingress table ARP_RESOLVE: ARP Resolution. */
>  static void
>  build_arp_resolve_flows_for_lrouter(
> -        struct ovn_datapath *od, struct hmap *lflows)
> +        struct ovn_datapath *od, struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbr);
>      /* Multicast packets already have the outport set so just advance to
> @@ -13480,10 +12878,12 @@ build_arp_resolve_flows_for_lrouter(
>  }
>
>  static void
> -routable_addresses_to_lflows(struct hmap *lflows, struct ovn_port *router_port,
> +routable_addresses_to_lflows(struct lflow_table *lflows,
> +                             struct ovn_port *router_port,
>                               struct ovn_port *peer,
>                               const struct lr_stateful_record *lr_stateful_rec,
> -                             struct ds *match, struct ds *actions)
> +                             struct ds *match, struct ds *actions,
> +                             struct lflow_ref *lflow_ref)
>  {
>      struct ovn_port_routable_addresses ra =
>          get_op_routable_addresses(router_port, lr_stateful_rec);
> @@ -13507,8 +12907,9 @@ routable_addresses_to_lflows(struct hmap *lflows, struct ovn_port *router_port,
>
>          ds_clear(actions);
>          ds_put_format(actions, "eth.dst = %s; next;", ra.laddrs[i].ea_s);
> -        ovn_lflow_add(lflows, peer->od, S_ROUTER_IN_ARP_RESOLVE, 100,
> -                      ds_cstr(match), ds_cstr(actions));
> +        ovn_lflow_add_with_lflow_ref(lflows, peer->od, S_ROUTER_IN_ARP_RESOLVE,
> +                                     100, ds_cstr(match), ds_cstr(actions),
> +                                     lflow_ref);
>      }
>      destroy_routable_addresses(&ra);
>  }
> @@ -13525,7 +12926,8 @@ routable_addresses_to_lflows(struct hmap *lflows, struct ovn_port *router_port,
>
>  /* This function adds ARP resolve flows related to a LRP. */
>  static void
> -build_arp_resolve_flows_for_lrp(struct ovn_port *op, struct hmap *lflows,
> +build_arp_resolve_flows_for_lrp(struct ovn_port *op,
> +                                struct lflow_table *lflows,
>                                  struct ds *match, struct ds *actions)
>  {
>      ovs_assert(op->nbrp);
> @@ -13600,7 +13002,7 @@ build_arp_resolve_flows_for_lrp(struct ovn_port *op, struct hmap *lflows,
>  /* This function adds ARP resolve flows related to a LSP. */
>  static void
>  build_arp_resolve_flows_for_lsp(
> -        struct ovn_port *op, struct hmap *lflows,
> +        struct ovn_port *op, struct lflow_table *lflows,
>          const struct hmap *lr_ports,
>          struct ds *match, struct ds *actions)
>  {
> @@ -13642,11 +13044,12 @@ build_arp_resolve_flows_for_lsp(
>
>                      ds_clear(actions);
>                      ds_put_format(actions, "eth.dst = %s; next;", ea_s);
> -                    ovn_lflow_add_with_hint(lflows, peer->od,
> +                    ovn_lflow_add_with_lflow_ref_hint(lflows, peer->od,
>                                              S_ROUTER_IN_ARP_RESOLVE, 100,
>                                              ds_cstr(match),
>                                              ds_cstr(actions),
> -                                            &op->nbsp->header_);
> +                                            &op->nbsp->header_,
> +                                            op->lflow_ref);
>                  }
>              }
>
> @@ -13673,11 +13076,12 @@ build_arp_resolve_flows_for_lsp(
>
>                      ds_clear(actions);
>                      ds_put_format(actions, "eth.dst = %s; next;", ea_s);
> -                    ovn_lflow_add_with_hint(lflows, peer->od,
> +                    ovn_lflow_add_with_lflow_ref_hint(lflows, peer->od,
>                                              S_ROUTER_IN_ARP_RESOLVE, 100,
>                                              ds_cstr(match),
>                                              ds_cstr(actions),
> -                                            &op->nbsp->header_);
> +                                            &op->nbsp->header_,
> +                                            op->lflow_ref);
>                  }
>              }
>          }
> @@ -13721,10 +13125,11 @@ build_arp_resolve_flows_for_lsp(
>                  ds_clear(actions);
>                  ds_put_format(actions, "eth.dst = %s; next;",
>                                            router_port->lrp_networks.ea_s);
> -                ovn_lflow_add_with_hint(lflows, peer->od,
> +                ovn_lflow_add_with_lflow_ref_hint(lflows, peer->od,
>                                          S_ROUTER_IN_ARP_RESOLVE, 100,
>                                          ds_cstr(match), ds_cstr(actions),
> -                                        &op->nbsp->header_);
> +                                        &op->nbsp->header_,
> +                                        op->lflow_ref);
>              }
>
>              if (router_port->lrp_networks.n_ipv6_addrs) {
> @@ -13737,10 +13142,11 @@ build_arp_resolve_flows_for_lsp(
>                  ds_clear(actions);
>                  ds_put_format(actions, "eth.dst = %s; next;",
>                                router_port->lrp_networks.ea_s);
> -                ovn_lflow_add_with_hint(lflows, peer->od,
> +                ovn_lflow_add_with_lflow_ref_hint(lflows, peer->od,
>                                          S_ROUTER_IN_ARP_RESOLVE, 100,
>                                          ds_cstr(match), ds_cstr(actions),
> -                                        &op->nbsp->header_);
> +                                        &op->nbsp->header_,
> +                                        op->lflow_ref);
>              }
>          }
>      }
> @@ -13748,10 +13154,11 @@ build_arp_resolve_flows_for_lsp(
>
>  static void
>  build_arp_resolve_flows_for_lsp_routable_addresses(
> -        struct ovn_port *op, struct hmap *lflows,
> -        const struct hmap *lr_ports,
> -        const struct lr_stateful_table *lr_stateful_table,
> -        struct ds *match, struct ds *actions)
> +    struct ovn_port *op, struct lflow_table *lflows,
> +    const struct hmap *lr_ports,
> +    const struct lr_stateful_table *lr_stateful_table,
> +    struct ds *match, struct ds *actions,
> +    struct lflow_ref *lflow_ref)
>  {
>      if (!lsp_is_router(op->nbsp)) {
>          return;
> @@ -13785,13 +13192,15 @@ build_arp_resolve_flows_for_lsp_routable_addresses(
>              lr_stateful_rec = lr_stateful_table_find_by_index(
>                  lr_stateful_table, router_port->od->index);
>              routable_addresses_to_lflows(lflows, router_port, peer,
> -                                         lr_stateful_rec, match, actions);
> +                                         lr_stateful_rec, match, actions,
> +                                         lflow_ref);
>          }
>      }
>  }
>
>  static void
> -build_icmperr_pkt_big_flows(struct ovn_port *op, int mtu, struct hmap *lflows,
> +build_icmperr_pkt_big_flows(struct ovn_port *op, int mtu,
> +                            struct lflow_table *lflows,
>                              const struct shash *meter_groups, struct ds *match,
>                              struct ds *actions, enum ovn_stage stage,
>                              struct ovn_port *outport)
> @@ -13884,7 +13293,7 @@ build_icmperr_pkt_big_flows(struct ovn_port *op, int mtu, struct hmap *lflows,
>
>  static void
>  build_check_pkt_len_flows_for_lrp(struct ovn_port *op,
> -                                  struct hmap *lflows,
> +                                  struct lflow_table *lflows,
>                                    const struct hmap *lr_ports,
>                                    const struct shash *meter_groups,
>                                    struct ds *match,
> @@ -13934,7 +13343,7 @@ build_check_pkt_len_flows_for_lrp(struct ovn_port *op,
>   * */
>  static void
>  build_check_pkt_len_flows_for_lrouter(
> -        struct ovn_datapath *od, struct hmap *lflows,
> +        struct ovn_datapath *od, struct lflow_table *lflows,
>          const struct hmap *lr_ports,
>          struct ds *match, struct ds *actions,
>          const struct shash *meter_groups)
> @@ -13961,7 +13370,7 @@ build_check_pkt_len_flows_for_lrouter(
>  /* Logical router ingress table GW_REDIRECT: Gateway redirect. */
>  static void
>  build_gateway_redirect_flows_for_lrouter(
> -        struct ovn_datapath *od, struct hmap *lflows,
> +        struct ovn_datapath *od, struct lflow_table *lflows,
>          struct ds *match, struct ds *actions)
>  {
>      ovs_assert(od->nbr);
> @@ -14005,8 +13414,8 @@ build_gateway_redirect_flows_for_lrouter(
>  /* Logical router ingress table GW_REDIRECT: Gateway redirect. */
>  static void
>  build_lr_gateway_redirect_flows_for_nats(
> -    const struct ovn_datapath *od, const struct lr_nat_record *lrnat_rec,
> -    struct hmap *lflows, struct ds *match, struct ds *actions)
> +        const struct ovn_datapath *od, const struct lr_nat_record *lrnat_rec,
> +        struct lflow_table *lflows, struct ds *match, struct ds *actions)
>  {
>      ovs_assert(od->nbr);
>      for (size_t i = 0; i < od->n_l3dgw_ports; i++) {
> @@ -14075,7 +13484,7 @@ build_lr_gateway_redirect_flows_for_nats(
>   * and sends an ARP/IPv6 NA request (priority 100). */
>  static void
>  build_arp_request_flows_for_lrouter(
> -        struct ovn_datapath *od, struct hmap *lflows,
> +        struct ovn_datapath *od, struct lflow_table *lflows,
>          struct ds *match, struct ds *actions,
>          const struct shash *meter_groups)
>  {
> @@ -14153,7 +13562,7 @@ build_arp_request_flows_for_lrouter(
>   */
>  static void
>  build_egress_delivery_flows_for_lrouter_port(
> -        struct ovn_port *op, struct hmap *lflows,
> +        struct ovn_port *op, struct lflow_table *lflows,
>          struct ds *match, struct ds *actions)
>  {
>      ovs_assert(op->nbrp);
> @@ -14195,7 +13604,7 @@ build_egress_delivery_flows_for_lrouter_port(
>
>  static void
>  build_misc_local_traffic_drop_flows_for_lrouter(
> -        struct ovn_datapath *od, struct hmap *lflows)
> +        struct ovn_datapath *od, struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbr);
>      /* Allow IGMP and MLD packets (with TTL = 1) if the router is
> @@ -14277,7 +13686,7 @@ build_misc_local_traffic_drop_flows_for_lrouter(
>
>  static void
>  build_dhcpv6_reply_flows_for_lrouter_port(
> -        struct ovn_port *op, struct hmap *lflows,
> +        struct ovn_port *op, struct lflow_table *lflows,
>          struct ds *match)
>  {
>      ovs_assert(op->nbrp);
> @@ -14297,7 +13706,7 @@ build_dhcpv6_reply_flows_for_lrouter_port(
>
>  static void
>  build_ipv6_input_flows_for_lrouter_port(
> -        struct ovn_port *op, struct hmap *lflows,
> +        struct ovn_port *op, struct lflow_table *lflows,
>          struct ds *match, struct ds *actions,
>          const struct shash *meter_groups)
>  {
> @@ -14466,7 +13875,7 @@ build_ipv6_input_flows_for_lrouter_port(
>  static void
>  build_lrouter_arp_nd_for_datapath(const struct ovn_datapath *od,
>                                    const struct lr_nat_record *lrnat_rec,
> -                                  struct hmap *lflows,
> +                                  struct lflow_table *lflows,
>                                    const struct shash *meter_groups)
>  {
>      ovs_assert(od->nbr);
> @@ -14518,7 +13927,7 @@ build_lrouter_arp_nd_for_datapath(const struct ovn_datapath *od,
>  /* Logical router ingress table 3: IP Input for IPv4. */
>  static void
>  build_lrouter_ipv4_ip_input(struct ovn_port *op,
> -                            struct hmap *lflows,
> +                            struct lflow_table *lflows,
>                              struct ds *match, struct ds *actions,
>                              const struct shash *meter_groups)
>  {
> @@ -14722,7 +14131,7 @@ build_lrouter_ipv4_ip_input(struct ovn_port *op,
>  /* Logical router ingress table 3: IP Input for IPv4. */
>  static void
>  build_lrouter_ipv4_ip_input_for_lbnats(
> -    struct ovn_port *op, struct hmap *lflows,
> +    struct ovn_port *op, struct lflow_table *lflows,
>      const struct lr_stateful_record *lr_stateful_rec,
>      struct ds *match, const struct shash *meter_groups)
>  {
> @@ -14842,7 +14251,7 @@ build_lrouter_in_unsnat_match(const struct ovn_datapath *od,
>  }
>
>  static void
> -build_lrouter_in_unsnat_stateless_flow(struct hmap *lflows,
> +build_lrouter_in_unsnat_stateless_flow(struct lflow_table *lflows,
>                                         const struct ovn_datapath *od,
>                                         const struct nbrec_nat *nat,
>                                         struct ds *match,
> @@ -14864,7 +14273,7 @@ build_lrouter_in_unsnat_stateless_flow(struct hmap *lflows,
>  }
>
>  static void
> -build_lrouter_in_unsnat_in_czone_flow(struct hmap *lflows,
> +build_lrouter_in_unsnat_in_czone_flow(struct lflow_table *lflows,
>                                        const struct ovn_datapath *od,
>                                        const struct nbrec_nat *nat,
>                                        struct ds *match, bool distributed_nat,
> @@ -14898,7 +14307,7 @@ build_lrouter_in_unsnat_in_czone_flow(struct hmap *lflows,
>  }
>
>  static void
> -build_lrouter_in_unsnat_flow(struct hmap *lflows,
> +build_lrouter_in_unsnat_flow(struct lflow_table *lflows,
>                               const struct ovn_datapath *od,
>                               const struct nbrec_nat *nat, struct ds *match,
>                               bool distributed_nat, bool is_v6,
> @@ -14920,7 +14329,7 @@ build_lrouter_in_unsnat_flow(struct hmap *lflows,
>  }
>
>  static void
> -build_lrouter_in_dnat_flow(struct hmap *lflows,
> +build_lrouter_in_dnat_flow(struct lflow_table *lflows,
>                             const struct ovn_datapath *od,
>                             const struct lr_nat_record *lrnat_rec,
>                             const struct nbrec_nat *nat, struct ds *match,
> @@ -14992,7 +14401,7 @@ build_lrouter_in_dnat_flow(struct hmap *lflows,
>  }
>
>  static void
> -build_lrouter_out_undnat_flow(struct hmap *lflows,
> +build_lrouter_out_undnat_flow(struct lflow_table *lflows,
>                                const struct ovn_datapath *od,
>                                const struct nbrec_nat *nat, struct ds *match,
>                                struct ds *actions, bool distributed_nat,
> @@ -15043,7 +14452,7 @@ build_lrouter_out_undnat_flow(struct hmap *lflows,
>  }
>
>  static void
> -build_lrouter_out_is_dnat_local(struct hmap *lflows,
> +build_lrouter_out_is_dnat_local(struct lflow_table *lflows,
>                                  const struct ovn_datapath *od,
>                                  const struct nbrec_nat *nat, struct ds *match,
>                                  struct ds *actions, bool distributed_nat,
> @@ -15074,7 +14483,7 @@ build_lrouter_out_is_dnat_local(struct hmap *lflows,
>  }
>
>  static void
> -build_lrouter_out_snat_match(struct hmap *lflows,
> +build_lrouter_out_snat_match(struct lflow_table *lflows,
>                               const struct ovn_datapath *od,
>                               const struct nbrec_nat *nat, struct ds *match,
>                               bool distributed_nat, int cidr_bits, bool is_v6,
> @@ -15103,7 +14512,7 @@ build_lrouter_out_snat_match(struct hmap *lflows,
>  }
>
>  static void
> -build_lrouter_out_snat_stateless_flow(struct hmap *lflows,
> +build_lrouter_out_snat_stateless_flow(struct lflow_table *lflows,
>                                        const struct ovn_datapath *od,
>                                        const struct nbrec_nat *nat,
>                                        struct ds *match, struct ds *actions,
> @@ -15146,7 +14555,7 @@ build_lrouter_out_snat_stateless_flow(struct hmap *lflows,
>  }
>
>  static void
> -build_lrouter_out_snat_in_czone_flow(struct hmap *lflows,
> +build_lrouter_out_snat_in_czone_flow(struct lflow_table *lflows,
>                                       const struct ovn_datapath *od,
>                                       const struct nbrec_nat *nat,
>                                       struct ds *match,
> @@ -15208,7 +14617,7 @@ build_lrouter_out_snat_in_czone_flow(struct hmap *lflows,
>  }
>
>  static void
> -build_lrouter_out_snat_flow(struct hmap *lflows,
> +build_lrouter_out_snat_flow(struct lflow_table *lflows,
>                              const struct ovn_datapath *od,
>                              const struct nbrec_nat *nat, struct ds *match,
>                              struct ds *actions, bool distributed_nat,
> @@ -15254,7 +14663,7 @@ build_lrouter_out_snat_flow(struct hmap *lflows,
>  }
>
>  static void
> -build_lrouter_ingress_nat_check_pkt_len(struct hmap *lflows,
> +build_lrouter_ingress_nat_check_pkt_len(struct lflow_table *lflows,
>                                          const struct nbrec_nat *nat,
>                                          const struct ovn_datapath *od,
>                                          bool is_v6, struct ds *match,
> @@ -15326,7 +14735,7 @@ build_lrouter_ingress_nat_check_pkt_len(struct hmap *lflows,
>  }
>
>  static void
> -build_lrouter_ingress_flow(struct hmap *lflows,
> +build_lrouter_ingress_flow(struct lflow_table *lflows,
>                             const struct ovn_datapath *od,
>                             const struct nbrec_nat *nat, struct ds *match,
>                             struct ds *actions, struct eth_addr mac,
> @@ -15506,7 +14915,7 @@ lrouter_check_nat_entry(const struct ovn_datapath *od,
>
>  /* NAT, Defrag and load balancing. */
>  static void build_lr_nat_defrag_and_lb_default_flows(struct ovn_datapath *od,
> -                                                     struct hmap *lflows)
> +                                                struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbr);
>
> @@ -15532,7 +14941,7 @@ static void build_lr_nat_defrag_and_lb_default_flows(struct ovn_datapath *od,
>  static void
>  build_lrouter_nat_defrag_and_lb(
>      const struct lr_stateful_record *lr_stateful_rec,
> -    const struct ovn_datapath *od, struct hmap *lflows,
> +    const struct ovn_datapath *od, struct lflow_table *lflows,
>      const struct hmap *ls_ports, const struct hmap *lr_ports,
>      struct ds *match, struct ds *actions,
>      const struct shash *meter_groups,
> @@ -15911,31 +15320,30 @@ build_lsp_lflows_for_lbnats(struct ovn_port *lsp,
>                              const struct lr_stateful_record *lr_stateful_rec,
>                              const struct lr_stateful_table *lr_stateful_table,
>                              const struct hmap *lr_ports,
> -                            struct hmap *lflows,
> +                            struct lflow_table *lflows,
>                              struct ds *match,
> -                            struct ds *actions)
> +                            struct ds *actions,
> +                            struct lflow_ref *lflow_ref)
>  {
>      ovs_assert(lsp->nbsp);
>      ovs_assert(lsp->peer);
> -    start_collecting_lflows();
>      build_lswitch_rport_arp_req_flows_for_lbnats(
>          lsp->peer, lr_stateful_rec, lsp->od, lsp,
> -        lflows, &lsp->nbsp->header_);
> +        lflows, &lsp->nbsp->header_, lflow_ref);
>      build_ip_routing_flows_for_router_type_lsp(lsp, lr_stateful_table,
> -                                               lr_ports, lflows);
> +                                               lr_ports, lflows,
> +                                               lflow_ref);
>      build_arp_resolve_flows_for_lsp_routable_addresses(
> -        lsp, lflows, lr_ports, lr_stateful_table, match, actions);
> +        lsp, lflows, lr_ports, lr_stateful_table, match, actions, lflow_ref);
>      build_lswitch_ip_unicast_lookup_for_nats(lsp, lr_stateful_rec, lflows,
> -                                             match, actions);
> -    link_ovn_port_to_lflows(lsp, &collected_lflows);
> -    end_collecting_lflows();
> +                                             match, actions, lflow_ref);
>  }
>
>  static void
>  build_lbnat_lflows_iterate_by_lsp(
>      struct ovn_port *op, const struct lr_stateful_table *lr_stateful_table,
>      const struct hmap *lr_ports, struct ds *match, struct ds *actions,
> -    struct hmap *lflows)
> +    struct lflow_table *lflows)
>  {
>      ovs_assert(op->nbsp);
>
> @@ -15948,8 +15356,9 @@ build_lbnat_lflows_iterate_by_lsp(
>                                                        op->peer->od->index);
>      ovs_assert(lr_stateful_rec);
>
> -    build_lsp_lflows_for_lbnats(op, lr_stateful_rec, lr_stateful_table,
> -                                lr_ports, lflows, match, actions);
> +    build_lsp_lflows_for_lbnats(op, lr_stateful_rec,
> +                                lr_stateful_table, lr_ports, lflows,
> +                                match, actions, op->stateful_lflow_ref);
>  }
>
>  static void
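
The two hunks above are the heart of the bookkeeping change on the LSP side:
instead of wrapping the calls in start_collecting_lflows() /
link_ovn_port_to_lflows() / end_collecting_lflows() (removed above), each
helper now takes the reference explicitly, and this caller decides that
op->stateful_lflow_ref is the one that owns the resulting flows.  A toy helper
just to illustrate the calling convention; the function name and the
match/action strings are made up, only ovn_lflow_add_with_lflow_ref and
S_SWITCH_IN_L2_LKUP are taken from the hunks above:

    /* Assumes northd.h / lflow-mgr.h declarations. */
    static void
    add_example_flow(struct lflow_table *lflows, struct ovn_datapath *od,
                     struct lflow_ref *ref)
    {
        /* Whichever 'ref' the caller owns is what records this flow. */
        ovn_lflow_add_with_lflow_ref(lflows, od, S_SWITCH_IN_L2_LKUP, 50,
                                     "eth.dst == 00:00:20:20:12:13",
                                     "outport = \"sw0p1\"; output;", ref);
    }
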
> @@ -15957,7 +15366,7 @@ build_lrp_lflows_for_lbnats(struct ovn_port *op,
>                              const struct lr_stateful_record *lr_stateful_rec,
>                              const struct shash *meter_groups,
>                              struct ds *match, struct ds *actions,
> -                            struct hmap *lflows)
> +                            struct lflow_table *lflows)
>  {
>      /* Drop IP traffic destined to router owned IPs except if the IP is
>       * also a SNAT IP. Those are dropped later, in stage
> @@ -15992,7 +15401,7 @@ static void
>  build_lbnat_lflows_iterate_by_lrp(
>      struct ovn_port *op, const struct lr_stateful_table *lr_stateful_table,
>      const struct shash *meter_groups, struct ds *match,
> -    struct ds *actions, struct hmap *lflows)
> +    struct ds *actions, struct lflow_table *lflows)
>  {
>      ovs_assert(op->nbrp);
>
> @@ -16008,7 +15417,7 @@ build_lbnat_lflows_iterate_by_lrp(
>  static void
>  build_lr_stateful_flows(const struct lr_stateful_record *lr_stateful_rec,
>                          const struct ovn_datapaths *lr_datapaths,
> -                        struct hmap *lflows,
> +                        struct lflow_table *lflows,
>                          const struct hmap *ls_ports,
>                          const struct hmap *lr_ports,
>                          struct ds *match,
> @@ -16036,7 +15445,7 @@ build_ls_stateful_flows(const struct ls_stateful_record *ls_stateful_rec,
>                          const struct ls_port_group_table *ls_pgs,
>                          const struct chassis_features *features,
>                          const struct shash *meter_groups,
> -                        struct hmap *lflows)
> +                        struct lflow_table *lflows)
>  {
>      build_ls_stateful_rec_pre_acls(ls_stateful_rec, od, ls_pgs, lflows);
>      build_ls_stateful_rec_pre_lb(ls_stateful_rec, od, lflows);
> @@ -16053,7 +15462,7 @@ struct lswitch_flow_build_info {
>      const struct ls_port_group_table *ls_port_groups;
>      const struct lr_stateful_table *lr_stateful_table;
>      const struct ls_stateful_table *ls_stateful_table;
> -    struct hmap *lflows;
> +    struct lflow_table *lflows;
>      struct hmap *igmp_groups;
>      const struct shash *meter_groups;
>      const struct hmap *lb_dps_map;
> @@ -16136,10 +15545,9 @@ build_lswitch_and_lrouter_iterate_by_lsp(struct ovn_port *op,
>                                           const struct shash *meter_groups,
>                                           struct ds *match,
>                                           struct ds *actions,
> -                                         struct hmap *lflows)
> +                                         struct lflow_table *lflows)
>  {
>      ovs_assert(op->nbsp);
> -    start_collecting_lflows();
>
>      /* Build Logical Switch Flows. */
>      build_lswitch_port_sec_op(op, lflows, actions, match);
> @@ -16155,9 +15563,6 @@ build_lswitch_and_lrouter_iterate_by_lsp(struct ovn_port *op,
>
>      /* Build Logical Router Flows. */
>      build_arp_resolve_flows_for_lsp(op, lflows, lr_ports, match, actions);
> -
> -    link_ovn_port_to_lflows(op, &collected_lflows);
> -    end_collecting_lflows();
>  }
>
>  /* Helper function to combine all lflow generation which is iterated by logical
> @@ -16203,6 +15608,8 @@ build_lflows_thread(void *arg)
>      struct ovn_port *op;
>      int bnum;
>
> +    /* Note:  lflow_ref is not thread safe.  Ensure that op->lflow_ref
> +     * is not accessed by multiple threads at the same time. */
>      while (!stop_parallel_processing()) {
>          wait_for_work(control);
>          lsi = (struct lswitch_flow_build_info *) control->data;
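
The constraint in the new comment is worth spelling out: every lflow_ref must
have a single writer.  As far as I can tell that holds because each worker
processes a disjoint set of ports, so a given op->lflow_ref (or
op->stateful_lflow_ref) is only ever touched by one thread per iteration.  If
a reference ever did need to be shared between workers, each add against it
would need external serialization, roughly like the sketch below (not part of
the patch; the lock, the function and the toy match/action are mine):

    /* Assumes lib/ovs-thread.h for the mutex helpers. */
    static struct ovs_mutex shared_ref_lock = OVS_MUTEX_INITIALIZER;

    static void
    add_to_shared_ref(struct lflow_table *lflows, struct ovn_datapath *od,
                      struct lflow_ref *shared_ref)
    {
        ovs_mutex_lock(&shared_ref_lock);
        ovn_lflow_add_with_lflow_ref(lflows, od, S_ROUTER_IN_ARP_RESOLVE, 100,
                                     "reg0 == 10.0.0.10",
                                     "eth.dst = 00:00:20:20:12:14; next;",
                                     shared_ref);
        ovs_mutex_unlock(&shared_ref_lock);
    }

The current design avoids that cost entirely by never sharing a reference
across threads.
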
> @@ -16372,7 +15779,7 @@ noop_callback(struct worker_pool *pool OVS_UNUSED,
>      /* Do nothing */
>  }
>
> -/* Fixes the hmap size (hmap->n) after parallel building the lflow_map when
> +/* Fixes the hmap size (hmap->n) after parallel building the lflow_table when
>   * dp-groups is enabled, because in that case all threads are updating the
>   * global lflow hmap. Although the lflow_hash_lock prevents currently inserting
>   * to the same hash bucket, the hmap->n is updated currently by all threads and
> @@ -16382,7 +15789,7 @@ noop_callback(struct worker_pool *pool OVS_UNUSED,
>   * after the worker threads complete the tasks in each iteration before any
>   * future operations on the lflow map. */
>  static void
> -fix_flow_map_size(struct hmap *lflow_map,
> +fix_flow_table_size(struct lflow_table *lflow_table,
>                    struct lswitch_flow_build_info *lsiv,
>                    size_t n_lsiv)
>  {
> @@ -16390,7 +15797,7 @@ fix_flow_map_size(struct hmap *lflow_map,
>      for (size_t i = 0; i < n_lsiv; i++) {
>          total += lsiv[i].thread_lflow_counter;
>      }
> -    lflow_map->n = total;
> +    lflow_table_set_size(lflow_table, total);
>  }
>
>  static void
> @@ -16402,7 +15809,7 @@ build_lswitch_and_lrouter_flows(
>      const struct ls_port_group_table *ls_pgs,
>      const struct lr_stateful_table *lr_stateful_table,
>      const struct ls_stateful_table *ls_stateful_table,
> -    struct hmap *lflows,
> +    struct lflow_table *lflows,
>      struct hmap *igmp_groups,
>      const struct shash *meter_groups,
>      const struct hmap *lb_dps_map,
> @@ -16449,7 +15856,7 @@ build_lswitch_and_lrouter_flows(
>
>          /* Run thread pool. */
>          run_pool_callback(build_lflows_pool, NULL, NULL, noop_callback);
> -        fix_flow_map_size(lflows, lsiv, build_lflows_pool->size);
> +        fix_flow_table_size(lflows, lsiv, build_lflows_pool->size);
>
>          for (index = 0; index < build_lflows_pool->size; index++) {
>              ds_destroy(&lsiv[index].match);
> @@ -16570,24 +15977,6 @@ build_lswitch_and_lrouter_flows(
>      free(svc_check_match);
>  }
>
> -static ssize_t max_seen_lflow_size = 128;
> -
> -void
> -lflow_data_init(struct lflow_data *data)
> -{
> -    fast_hmap_size_for(&data->lflows, max_seen_lflow_size);
> -}
> -
> -void
> -lflow_data_destroy(struct lflow_data *data)
> -{
> -    struct ovn_lflow *lflow;
> -    HMAP_FOR_EACH_SAFE (lflow, hmap_node, &data->lflows) {
> -        ovn_lflow_destroy(&data->lflows, lflow);
> -    }
> -    hmap_destroy(&data->lflows);
> -}
> -
>  void run_update_worker_pool(int n_threads)
>  {
>      /* If number of threads has been updated (or initially set),
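
With lflow_data_init() / lflow_data_destroy() (and max_seen_lflow_size) gone,
the table's allocation, sizing heuristics and teardown presumably move behind
the lflow-mgr API along with the rest of this series.  The names below are my
assumption about what the owner of the table ends up calling; they are not
shown in these hunks:

    /* Assumed replacement lifetime management (constructor/destructor names
     * are guesses, not quoted API). */
    struct lflow_table *lflows = lflow_table_alloc();
    lflow_table_init(lflows);
    ...
    lflow_table_destroy(lflows);
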
> @@ -16633,7 +16022,7 @@ create_sb_multicast_group(struct ovsdb_idl_txn *ovnsb_txn,
>   * constructing their contents based on the OVN_NB database. */
>  void build_lflows(struct ovsdb_idl_txn *ovnsb_txn,
>                    struct lflow_input *input_data,
> -                  struct hmap *lflows)
> +                  struct lflow_table *lflows)
>  {
>      struct hmap mcast_groups;
>      struct hmap igmp_groups;
> @@ -16664,281 +16053,26 @@ void build_lflows(struct ovsdb_idl_txn *ovnsb_txn,
>      }
>
>      /* Parallel build may result in a suboptimal hash. Resize the
> -     * hash to a correct size before doing lookups */
> -
> -    hmap_expand(lflows);
> -
> -    if (hmap_count(lflows) > max_seen_lflow_size) {
> -        max_seen_lflow_size = hmap_count(lflows);
> -    }
> -
> -    stopwatch_start(LFLOWS_DP_GROUPS_STOPWATCH_NAME, time_msec());
> -    /* Collecting all unique datapath groups. */
> -    struct hmap ls_dp_groups = HMAP_INITIALIZER(&ls_dp_groups);
> -    struct hmap lr_dp_groups = HMAP_INITIALIZER(&lr_dp_groups);
> -    struct hmap single_dp_lflows;
> -
> -    /* Single dp_flows will never grow bigger than lflows,
> -     * thus the two hmaps will remain the same size regardless
> -     * of how many elements we remove from lflows and add to
> -     * single_dp_lflows.
> -     * Note - lflows is always sized for at least 128 flows.
> -     */
> -    fast_hmap_size_for(&single_dp_lflows, max_seen_lflow_size);
> -
> -    struct ovn_lflow *lflow;
> -    HMAP_FOR_EACH_SAFE (lflow, hmap_node, lflows) {
> -        struct ovn_datapath **datapaths_array;
> -        size_t n_datapaths;
> -
> -        if (ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH) {
> -            n_datapaths = ods_size(input_data->ls_datapaths);
> -            datapaths_array = input_data->ls_datapaths->array;
> -        } else {
> -            n_datapaths = ods_size(input_data->lr_datapaths);
> -            datapaths_array = input_data->lr_datapaths->array;
> -        }
> -
> -        lflow->n_ods = bitmap_count1(lflow->dpg_bitmap, n_datapaths);
> -
> -        ovs_assert(lflow->n_ods);
> -
> -        if (lflow->n_ods == 1) {
> -            /* There is only one datapath, so it should be moved out of the
> -             * group to a single 'od'. */
> -            size_t index = bitmap_scan(lflow->dpg_bitmap, true, 0,
> -                                       n_datapaths);
> -
> -            bitmap_set0(lflow->dpg_bitmap, index);
> -            lflow->od = datapaths_array[index];
> -
> -            /* Logical flow should be re-hashed to allow lookups. */
> -            uint32_t hash = hmap_node_hash(&lflow->hmap_node);
> -            /* Remove from lflows. */
> -            hmap_remove(lflows, &lflow->hmap_node);
> -            hash = ovn_logical_flow_hash_datapath(&lflow->od->sb->header_.uuid,
> -                                                  hash);
> -            /* Add to single_dp_lflows. */
> -            hmap_insert_fast(&single_dp_lflows, &lflow->hmap_node, hash);
> -        }
> -    }
> -
> -    /* Merge multiple and single dp hashes. */
> -
> -    fast_hmap_merge(lflows, &single_dp_lflows);
> -
> -    hmap_destroy(&single_dp_lflows);
> -
> -    stopwatch_stop(LFLOWS_DP_GROUPS_STOPWATCH_NAME, time_msec());
> +     * lflow map to a correct size before doing lookups */
> +    lflow_table_expand(lflows);
> +
>      stopwatch_start(LFLOWS_TO_SB_STOPWATCH_NAME, time_msec());
> -
> -    struct hmap lflows_temp = HMAP_INITIALIZER(&lflows_temp);
> -    /* Push changes to the Logical_Flow table to database. */
> -    const struct sbrec_logical_flow *sbflow;
> -    SBREC_LOGICAL_FLOW_TABLE_FOR_EACH_SAFE (sbflow,
> -                                     input_data->sbrec_logical_flow_table) {
> -        struct sbrec_logical_dp_group *dp_group = sbflow->logical_dp_group;
> -        struct ovn_datapath *logical_datapath_od = NULL;
> -        size_t i;
> -
> -        /* Find one valid datapath to get the datapath type. */
> -        struct sbrec_datapath_binding *dp = sbflow->logical_datapath;
> -        if (dp) {
> -            logical_datapath_od = ovn_datapath_from_sbrec(
> -                                        &input_data->ls_datapaths->datapaths,
> -                                        &input_data->lr_datapaths->datapaths,
> -                                        dp);
> -            if (logical_datapath_od
> -                && ovn_datapath_is_stale(logical_datapath_od)) {
> -                logical_datapath_od = NULL;
> -            }
> -        }
> -        for (i = 0; dp_group && i < dp_group->n_datapaths; i++) {
> -            logical_datapath_od = ovn_datapath_from_sbrec(
> -                                        &input_data->ls_datapaths->datapaths,
> -                                        &input_data->lr_datapaths->datapaths,
> -                                        dp_group->datapaths[i]);
> -            if (logical_datapath_od
> -                && !ovn_datapath_is_stale(logical_datapath_od)) {
> -                break;
> -            }
> -            logical_datapath_od = NULL;
> -        }
> -
> -        if (!logical_datapath_od) {
> -            /* This lflow has no valid logical datapaths. */
> -            sbrec_logical_flow_delete(sbflow);
> -            continue;
> -        }
> -
> -        enum ovn_pipeline pipeline
> -            = !strcmp(sbflow->pipeline, "ingress") ? P_IN : P_OUT;
> -
> -        lflow = ovn_lflow_find(
> -            lflows, dp_group ? NULL : logical_datapath_od,
> -            ovn_stage_build(ovn_datapath_get_type(logical_datapath_od),
> -                            pipeline, sbflow->table_id),
> -            sbflow->priority, sbflow->match, sbflow->actions,
> -            sbflow->controller_meter, sbflow->hash);
> -        if (lflow) {
> -            struct hmap *dp_groups;
> -            size_t n_datapaths;
> -            bool is_switch;
> -
> -            lflow->sb_uuid = sbflow->header_.uuid;
> -            is_switch = ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH;
> -            if (is_switch) {
> -                n_datapaths = ods_size(input_data->ls_datapaths);
> -                dp_groups = &ls_dp_groups;
> -            } else {
> -                n_datapaths = ods_size(input_data->lr_datapaths);
> -                dp_groups = &lr_dp_groups;
> -            }
> -            if (input_data->ovn_internal_version_changed) {
> -                const char *stage_name = smap_get_def(&sbflow->external_ids,
> -                                                  "stage-name", "");
> -                const char *stage_hint = smap_get_def(&sbflow->external_ids,
> -                                                  "stage-hint", "");
> -                const char *source = smap_get_def(&sbflow->external_ids,
> -                                                  "source", "");
> -
> -                if (strcmp(stage_name, ovn_stage_to_str(lflow->stage))) {
> -                    sbrec_logical_flow_update_external_ids_setkey(sbflow,
> -                     "stage-name", ovn_stage_to_str(lflow->stage));
> -                }
> -                if (lflow->stage_hint) {
> -                    if (strcmp(stage_hint, lflow->stage_hint)) {
> -                        sbrec_logical_flow_update_external_ids_setkey(sbflow,
> -                        "stage-hint", lflow->stage_hint);
> -                    }
> -                }
> -                if (lflow->where) {
> -                    if (strcmp(source, lflow->where)) {
> -                        sbrec_logical_flow_update_external_ids_setkey(sbflow,
> -                        "source", lflow->where);
> -                    }
> -                }
> -            }
> -
> -            if (lflow->od) {
> -                sbrec_logical_flow_set_logical_datapath(sbflow, lflow->od->sb);
> -                sbrec_logical_flow_set_logical_dp_group(sbflow, NULL);
> -            } else {
> -                lflow->dpg = ovn_dp_group_get_or_create(
> -                                ovnsb_txn, dp_groups, dp_group,
> -                                lflow->n_ods, lflow->dpg_bitmap,
> -                                n_datapaths, is_switch,
> -                                input_data->ls_datapaths,
> -                                input_data->lr_datapaths);
> -
> -                sbrec_logical_flow_set_logical_datapath(sbflow, NULL);
> -                sbrec_logical_flow_set_logical_dp_group(sbflow,
> -                                                        lflow->dpg->dp_group);
> -            }
> -
> -            /* This lflow updated.  Not needed anymore. */
> -            hmap_remove(lflows, &lflow->hmap_node);
> -            hmap_insert(&lflows_temp, &lflow->hmap_node,
> -                        hmap_node_hash(&lflow->hmap_node));
> -        } else {
> -            sbrec_logical_flow_delete(sbflow);
> -        }
> -    }
> -
> -    HMAP_FOR_EACH_SAFE (lflow, hmap_node, lflows) {
> -        const char *pipeline = ovn_stage_get_pipeline_name(lflow->stage);
> -        uint8_t table = ovn_stage_get_table(lflow->stage);
> -        struct hmap *dp_groups;
> -        size_t n_datapaths;
> -        bool is_switch;
> -
> -        is_switch = ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH;
> -        if (is_switch) {
> -            n_datapaths = ods_size(input_data->ls_datapaths);
> -            dp_groups = &ls_dp_groups;
> -        } else {
> -            n_datapaths = ods_size(input_data->lr_datapaths);
> -            dp_groups = &lr_dp_groups;
> -        }
> -
> -        lflow->sb_uuid = uuid_random();
> -        sbflow = sbrec_logical_flow_insert_persist_uuid(ovnsb_txn,
> -                                                        &lflow->sb_uuid);
> -        if (lflow->od) {
> -            sbrec_logical_flow_set_logical_datapath(sbflow, lflow->od->sb);
> -        } else {
> -            lflow->dpg = ovn_dp_group_get_or_create(
> -                                ovnsb_txn, dp_groups, NULL,
> -                                lflow->n_ods, lflow->dpg_bitmap,
> -                                n_datapaths, is_switch,
> -                                input_data->ls_datapaths,
> -                                input_data->lr_datapaths);
> -
> -            sbrec_logical_flow_set_logical_dp_group(sbflow,
> -                                                    lflow->dpg->dp_group);
> -        }
> -
> -        sbrec_logical_flow_set_pipeline(sbflow, pipeline);
> -        sbrec_logical_flow_set_table_id(sbflow, table);
> -        sbrec_logical_flow_set_priority(sbflow, lflow->priority);
> -        sbrec_logical_flow_set_match(sbflow, lflow->match);
> -        sbrec_logical_flow_set_actions(sbflow, lflow->actions);
> -        if (lflow->io_port) {
> -            struct smap tags = SMAP_INITIALIZER(&tags);
> -            smap_add(&tags, "in_out_port", lflow->io_port);
> -            sbrec_logical_flow_set_tags(sbflow, &tags);
> -            smap_destroy(&tags);
> -        }
> -        sbrec_logical_flow_set_controller_meter(sbflow, lflow->ctrl_meter);
> -
> -        /* Trim the source locator lflow->where, which looks something like
> -         * "ovn/northd/northd.c:1234", down to just the part following the
> -         * last slash, e.g. "northd.c:1234". */
> -        const char *slash = strrchr(lflow->where, '/');
> -#if _WIN32
> -        const char *backslash = strrchr(lflow->where, '\\');
> -        if (!slash || backslash > slash) {
> -            slash = backslash;
> -        }
> -#endif
> -        const char *where = slash ? slash + 1 : lflow->where;
> -
> -        struct smap ids = SMAP_INITIALIZER(&ids);
> -        smap_add(&ids, "stage-name", ovn_stage_to_str(lflow->stage));
> -        smap_add(&ids, "source", where);
> -        if (lflow->stage_hint) {
> -            smap_add(&ids, "stage-hint", lflow->stage_hint);
> -        }
> -        sbrec_logical_flow_set_external_ids(sbflow, &ids);
> -        smap_destroy(&ids);
> -        hmap_remove(lflows, &lflow->hmap_node);
> -        hmap_insert(&lflows_temp, &lflow->hmap_node,
> -                    hmap_node_hash(&lflow->hmap_node));
> -    }
> -    hmap_swap(lflows, &lflows_temp);
> -    hmap_destroy(&lflows_temp);
> +    lflow_table_sync_to_sb(lflows, ovnsb_txn, input_data->ls_datapaths,
> +                           input_data->lr_datapaths,
> +                           input_data->ovn_internal_version_changed,
> +                           input_data->sbrec_logical_flow_table,
> +                           input_data->sbrec_logical_dp_group_table);
>
>      stopwatch_stop(LFLOWS_TO_SB_STOPWATCH_NAME, time_msec());
> -    struct ovn_dp_group *dpg;
> -    HMAP_FOR_EACH_POP (dpg, node, &ls_dp_groups) {
> -        bitmap_free(dpg->bitmap);
> -        free(dpg);
> -    }
> -    hmap_destroy(&ls_dp_groups);
> -    HMAP_FOR_EACH_POP (dpg, node, &lr_dp_groups) {
> -        bitmap_free(dpg->bitmap);
> -        free(dpg);
> -    }
> -    hmap_destroy(&lr_dp_groups);
>
>      /* Push changes to the Multicast_Group table to database. */
>      const struct sbrec_multicast_group *sbmc;
> -    SBREC_MULTICAST_GROUP_TABLE_FOR_EACH_SAFE (sbmc,
> -                                input_data->sbrec_multicast_group_table) {
> +    SBREC_MULTICAST_GROUP_TABLE_FOR_EACH_SAFE (
> +            sbmc, input_data->sbrec_multicast_group_table) {
>          struct ovn_datapath *od = ovn_datapath_from_sbrec(
> -                                       &input_data->ls_datapaths->datapaths,
> -                                       &input_data->lr_datapaths->datapaths,
> -                                       sbmc->datapath);
> +            &input_data->ls_datapaths->datapaths,
> +            &input_data->lr_datapaths->datapaths,
> +            sbmc->datapath);
>
>          if (!od || ovn_datapath_is_stale(od)) {
>              sbrec_multicast_group_delete(sbmc);
> @@ -16978,120 +16112,22 @@ void build_lflows(struct ovsdb_idl_txn *ovnsb_txn,
>      hmap_destroy(&mcast_groups);
>  }
>
> -static void
> -sync_lsp_lflows_to_sb(struct ovsdb_idl_txn *ovnsb_txn,
> -                      struct lflow_input *lflow_input,
> -                      struct hmap *lflows,
> -                      struct ovn_lflow *lflow)
> -{
> -    size_t n_datapaths;
> -    struct ovn_datapath **datapaths_array;
> -    if (ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH) {
> -        n_datapaths = ods_size(lflow_input->ls_datapaths);
> -        datapaths_array = lflow_input->ls_datapaths->array;
> -    } else {
> -        n_datapaths = ods_size(lflow_input->lr_datapaths);
> -        datapaths_array = lflow_input->lr_datapaths->array;
> -    }
> -    uint32_t n_ods = bitmap_count1(lflow->dpg_bitmap, n_datapaths);
> -    ovs_assert(n_ods == 1);
> -    /* There is only one datapath, so it should be moved out of the
> -     * group to a single 'od'. */
> -    size_t index = bitmap_scan(lflow->dpg_bitmap, true, 0,
> -                               n_datapaths);
> -
> -    bitmap_set0(lflow->dpg_bitmap, index);
> -    lflow->od = datapaths_array[index];
> -
> -    /* Logical flow should be re-hashed to allow lookups. */
> -    uint32_t hash = hmap_node_hash(&lflow->hmap_node);
> -    /* Remove from lflows. */
> -    hmap_remove(lflows, &lflow->hmap_node);
> -    hash = ovn_logical_flow_hash_datapath(&lflow->od->sb->header_.uuid,
> -                                          hash);
> -    /* Add back. */
> -    hmap_insert(lflows, &lflow->hmap_node, hash);
> -
> -    /* Sync to SB. */
> -    const struct sbrec_logical_flow *sbflow;
> -    /* Note: uuid_random acquires a global mutex. If we parallelize the sync to
> -     * SB this may become a bottleneck. */
> -    lflow->sb_uuid = uuid_random();
> -    sbflow = sbrec_logical_flow_insert_persist_uuid(ovnsb_txn,
> -                                                    &lflow->sb_uuid);
> -    const char *pipeline = ovn_stage_get_pipeline_name(lflow->stage);
> -    uint8_t table = ovn_stage_get_table(lflow->stage);
> -    sbrec_logical_flow_set_logical_datapath(sbflow, lflow->od->sb);
> -    sbrec_logical_flow_set_logical_dp_group(sbflow, NULL);
> -    sbrec_logical_flow_set_pipeline(sbflow, pipeline);
> -    sbrec_logical_flow_set_table_id(sbflow, table);
> -    sbrec_logical_flow_set_priority(sbflow, lflow->priority);
> -    sbrec_logical_flow_set_match(sbflow, lflow->match);
> -    sbrec_logical_flow_set_actions(sbflow, lflow->actions);
> -    if (lflow->io_port) {
> -        struct smap tags = SMAP_INITIALIZER(&tags);
> -        smap_add(&tags, "in_out_port", lflow->io_port);
> -        sbrec_logical_flow_set_tags(sbflow, &tags);
> -        smap_destroy(&tags);
> -    }
> -    sbrec_logical_flow_set_controller_meter(sbflow, lflow->ctrl_meter);
> -    /* Trim the source locator lflow->where, which looks something like
> -     * "ovn/northd/northd.c:1234", down to just the part following the
> -     * last slash, e.g. "northd.c:1234". */
> -    const char *slash = strrchr(lflow->where, '/');
> -#if _WIN32
> -    const char *backslash = strrchr(lflow->where, '\\');
> -    if (!slash || backslash > slash) {
> -        slash = backslash;
> -    }
> -#endif
> -    const char *where = slash ? slash + 1 : lflow->where;
> -
> -    struct smap ids = SMAP_INITIALIZER(&ids);
> -    smap_add(&ids, "stage-name", ovn_stage_to_str(lflow->stage));
> -    smap_add(&ids, "source", where);
> -    if (lflow->stage_hint) {
> -        smap_add(&ids, "stage-hint", lflow->stage_hint);
> -    }
> -    sbrec_logical_flow_set_external_ids(sbflow, &ids);
> -    smap_destroy(&ids);
> -}
> -
> -static bool
> -delete_lflow_for_lsp(struct ovn_port *op, bool is_update,
> -                     const struct sbrec_logical_flow_table *sb_lflow_table,
> -                     struct hmap *lflows)
> -{
> -    struct lflow_ref_node *lfrn;
> -    const char *operation = is_update ? "updated" : "deleted";
> -    LIST_FOR_EACH_SAFE (lfrn, lflow_list_node, &op->lflows) {
> -        VLOG_DBG("Deleting SB lflow "UUID_FMT" for %s port %s",
> -                 UUID_ARGS(&lfrn->lflow->sb_uuid), operation, op->key);
> -
> -        const struct sbrec_logical_flow *sblflow =
> -            sbrec_logical_flow_table_get_for_uuid(sb_lflow_table,
> -                                              &lfrn->lflow->sb_uuid);
> -        if (sblflow) {
> -            sbrec_logical_flow_delete(sblflow);
> -        } else {
> -            static struct vlog_rate_limit rl =
> -                VLOG_RATE_LIMIT_INIT(1, 1);
> -            VLOG_WARN_RL(&rl, "SB lflow "UUID_FMT" not found when handling "
> -                         "%s port %s. Recompute.",
> -                         UUID_ARGS(&lfrn->lflow->sb_uuid), operation, op->key);
> -            return false;
> -        }
> +void
> +lflow_reset_northd_refs(struct lflow_input *lflow_input)
> +{
> +    struct ovn_port *op;
>
> -        ovn_lflow_destroy(lflows, lfrn->lflow);
> +    HMAP_FOR_EACH (op, key_node, lflow_input->ls_ports) {
> +        lflow_ref_clear(op->lflow_ref);
> +        lflow_ref_clear(op->stateful_lflow_ref);
>      }
> -    return true;
>  }
>
>  bool
>  lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
>                                   struct tracked_ovn_ports *trk_lsps,
>                                   struct lflow_input *lflow_input,
> -                                 struct hmap *lflows)
> +                                 struct lflow_table *lflows)
>  {
>      struct hmapx_node *hmapx_node;
>      struct ovn_port *op;
> @@ -17100,13 +16136,15 @@ lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
>          op = hmapx_node->data;
>          /* Make sure 'op' is an lsp and not lrp. */
>          ovs_assert(op->nbsp);
> -
> -        if (!delete_lflow_for_lsp(op, false,
> -                                  lflow_input->sbrec_logical_flow_table,
> -                                  lflows)) {
> -                return false;
> -            }
> -
> +        bool handled = lflow_ref_resync_flows(
> +            op->lflow_ref, lflows, ovnsb_txn, lflow_input->ls_datapaths,
> +            lflow_input->lr_datapaths,
> +            lflow_input->ovn_internal_version_changed,
> +            lflow_input->sbrec_logical_flow_table,
> +            lflow_input->sbrec_logical_dp_group_table);
> +        if (!handled) {
> +            return false;
> +        }
>          /* No need to update SB multicast groups, thanks to weak
>           * references. */
>      }
> @@ -17115,13 +16153,8 @@ lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
>          op = hmapx_node->data;
>          /* Make sure 'op' is an lsp and not lrp. */
>          ovs_assert(op->nbsp);
> -
> -        /* Delete old lflows. */
> -        if (!delete_lflow_for_lsp(op, true,
> -                                  lflow_input->sbrec_logical_flow_table,
> -                                  lflows)) {
> -            return false;
> -        }
> +        /* Clear old lflows. */
> +        lflow_ref_unlink_lflows(op->lflow_ref);
>
>          /* Generate new lflows. */
>          struct ds match = DS_EMPTY_INITIALIZER;
> @@ -17131,21 +16164,39 @@ lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
>                                                   lflow_input->meter_groups,
>                                                   &match, &actions,
>                                                   lflows);
> -        build_lbnat_lflows_iterate_by_lsp(op, lflow_input->lr_stateful_table,
> -                                          lflow_input->lr_ports, &match,
> -                                          &actions, lflows);
> +        /* Sync the new flows to SB. */
> +        bool handled = lflow_ref_sync_lflows(
> +            op->lflow_ref, lflows, ovnsb_txn, lflow_input->ls_datapaths,
> +            lflow_input->lr_datapaths,
> +            lflow_input->ovn_internal_version_changed,
> +            lflow_input->sbrec_logical_flow_table,
> +            lflow_input->sbrec_logical_dp_group_table);
> +        if (handled) {
> +            /* Now regenerate the stateful lflows for 'op' */
> +            /* Clear old lflows. */
> +            lflow_ref_unlink_lflows(op->stateful_lflow_ref);
> +            build_lbnat_lflows_iterate_by_lsp(op,
> +                                              lflow_input->lr_stateful_table,
> +                                              lflow_input->lr_ports, &match,
> +                                              &actions, lflows);
> +            handled = lflow_ref_sync_lflows(
> +                op->stateful_lflow_ref, lflows, ovnsb_txn,
> +                lflow_input->ls_datapaths,
> +                lflow_input->lr_datapaths,
> +                lflow_input->ovn_internal_version_changed,
> +                lflow_input->sbrec_logical_flow_table,
> +                lflow_input->sbrec_logical_dp_group_table);
> +        }
> +
>          ds_destroy(&match);
>          ds_destroy(&actions);
>
> +        if (!handled) {
> +            return false;
> +        }
> +
>          /* SB port_binding is not deleted, so don't update SB multicast
>           * groups. */
> -
> -        /* Sync the new flows to SB. */
> -        struct lflow_ref_node *lfrn;
> -        LIST_FOR_EACH (lfrn, lflow_list_node, &op->lflows) {
> -            sync_lsp_lflows_to_sb(ovnsb_txn, lflow_input, lflows,
> -                                  lfrn->lflow);
> -        }
>      }
>
>      HMAPX_FOR_EACH (hmapx_node, &trk_lsps->created) {
> @@ -17170,12 +16221,35 @@ lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
>                                                   lflow_input->meter_groups,
>                                                   &match, &actions, lflows);
>
> -        build_lbnat_lflows_iterate_by_lsp(op, lflow_input->lr_stateful_table,
> -                                          lflow_input->lr_ports, &match,
> -                                          &actions, lflows);
> +        /* Sync the newly added flows to SB. */
> +        bool handled = lflow_ref_sync_lflows(
> +            op->lflow_ref, lflows, ovnsb_txn, lflow_input->ls_datapaths,
> +            lflow_input->lr_datapaths,
> +            lflow_input->ovn_internal_version_changed,
> +            lflow_input->sbrec_logical_flow_table,
> +            lflow_input->sbrec_logical_dp_group_table);
> +        if (handled) {
> +            /* Now generate the stateful lflows for 'op' */
> +            build_lbnat_lflows_iterate_by_lsp(op,
> +                                              lflow_input->lr_stateful_table,
> +                                              lflow_input->lr_ports, &match,
> +                                              &actions, lflows);
> +            handled = lflow_ref_sync_lflows(
> +                op->stateful_lflow_ref, lflows, ovnsb_txn,
> +                lflow_input->ls_datapaths,
> +                lflow_input->lr_datapaths,
> +                lflow_input->ovn_internal_version_changed,
> +                lflow_input->sbrec_logical_flow_table,
> +                lflow_input->sbrec_logical_dp_group_table);
> +        }
> +
>          ds_destroy(&match);
>          ds_destroy(&actions);
>
> +        if (!handled) {
> +            return false;
> +        }
> +
>          /* Update SB multicast groups for the new port. */
>          if (!sbmc_flood) {
>              sbmc_flood = create_sb_multicast_group(ovnsb_txn,
> @@ -17199,13 +16273,6 @@ lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
>              sbrec_multicast_group_update_ports_addvalue(sbmc_unknown,
>                                                          op->sb);
>          }
> -
> -        /* Sync the newly added flows to SB. */
> -        struct lflow_ref_node *lfrn;
> -        LIST_FOR_EACH (lfrn, lflow_list_node, &op->lflows) {
> -            sync_lsp_lflows_to_sb(ovnsb_txn, lflow_input, lflows,
> -                                    lfrn->lflow);
> -        }
>      }
>
>      return true;
> diff --git a/northd/northd.h b/northd/northd.h
> index 404abbe5b5..f9370be955 100644
> --- a/northd/northd.h
> +++ b/northd/northd.h
> @@ -23,6 +23,7 @@
>  #include "northd/en-port-group.h"
>  #include "northd/ipam.h"
>  #include "openvswitch/hmap.h"
> +#include "ovs-thread.h"
>
>  struct northd_input {
>      /* Northbound table references */
> @@ -164,13 +165,6 @@ struct northd_data {
>      struct northd_tracked_data trk_data;
>  };
>
> -struct lflow_data {
> -    struct hmap lflows;
> -};
> -
> -void lflow_data_init(struct lflow_data *);
> -void lflow_data_destroy(struct lflow_data *);
> -
>  struct lr_nat_table;
>
>  struct lflow_input {
> @@ -182,6 +176,7 @@ struct lflow_input {
>      const struct sbrec_logical_flow_table *sbrec_logical_flow_table;
>      const struct sbrec_multicast_group_table *sbrec_multicast_group_table;
>      const struct sbrec_igmp_group_table *sbrec_igmp_group_table;
> +    const struct sbrec_logical_dp_group_table *sbrec_logical_dp_group_table;
>
>      /* Indexes */
>      struct ovsdb_idl_index *sbrec_mcast_group_by_name_dp;
> @@ -201,6 +196,15 @@ struct lflow_input {
>      bool ovn_internal_version_changed;
>  };
>
> +extern int parallelization_state;
> +enum {
> +    STATE_NULL,               /* parallelization is off */
> +    STATE_INIT_HASH_SIZES,    /* parallelization is on; hashes sizing needed */
> +    STATE_USE_PARALLELIZATION /* parallelization is on */
> +};
> +
> +extern thread_local size_t thread_lflow_counter;
> +
>  /*
>   * Multicast snooping and querier per datapath configuration.
>   */
> @@ -351,6 +355,179 @@ ovn_datapaths_find_by_index(const struct ovn_datapaths *ovn_datapaths,
>      return ovn_datapaths->array[od_index];
>  }
>
> +struct ovn_datapath *ovn_datapath_from_sbrec(
> +    const struct hmap *ls_datapaths, const struct hmap *lr_datapaths,
> +    const struct sbrec_datapath_binding *);
> +
> +static inline bool
> +ovn_datapath_is_stale(const struct ovn_datapath *od)
> +{
> +    return !od->nbr && !od->nbs;
> +};
> +
> +/* Pipeline stages. */
> +
> +/* The two purposes for which ovn-northd uses OVN logical datapaths. */
> +enum ovn_datapath_type {
> +    DP_SWITCH,                  /* OVN logical switch. */
> +    DP_ROUTER                   /* OVN logical router. */
> +};
> +
> +/* Returns an "enum ovn_stage" built from the arguments.
> + *
> + * (It's better to use ovn_stage_build() for type-safety reasons, but inline
> + * functions can't be used in enums or switch cases.) */
> +#define OVN_STAGE_BUILD(DP_TYPE, PIPELINE, TABLE) \
> +    (((DP_TYPE) << 9) | ((PIPELINE) << 8) | (TABLE))
> +
> +/* A stage within an OVN logical switch or router.
> + *
> + * An "enum ovn_stage" indicates whether the stage is part of a logical switch
> + * or router, whether the stage is part of the ingress or egress pipeline, and
> + * the table within that pipeline.  The first three components are combined to
> + * form the stage's full name, e.g. S_SWITCH_IN_PORT_SEC_L2,
> + * S_ROUTER_OUT_DELIVERY. */
> +enum ovn_stage {
> +#define PIPELINE_STAGES                                                   \
> +    /* Logical switch ingress stages. */                                  \
> +    PIPELINE_STAGE(SWITCH, IN,  CHECK_PORT_SEC, 0, "ls_in_check_port_sec")   \
> +    PIPELINE_STAGE(SWITCH, IN,  APPLY_PORT_SEC, 1, "ls_in_apply_port_sec")   \
> +    PIPELINE_STAGE(SWITCH, IN,  LOOKUP_FDB ,    2, "ls_in_lookup_fdb")    \
> +    PIPELINE_STAGE(SWITCH, IN,  PUT_FDB,        3, "ls_in_put_fdb")       \
> +    PIPELINE_STAGE(SWITCH, IN,  PRE_ACL,        4, "ls_in_pre_acl")       \
> +    PIPELINE_STAGE(SWITCH, IN,  PRE_LB,         5, "ls_in_pre_lb")        \
> +    PIPELINE_STAGE(SWITCH, IN,  PRE_STATEFUL,   6, "ls_in_pre_stateful")  \
> +    PIPELINE_STAGE(SWITCH, IN,  ACL_HINT,       7, "ls_in_acl_hint")      \
> +    PIPELINE_STAGE(SWITCH, IN,  ACL_EVAL,       8, "ls_in_acl_eval")      \
> +    PIPELINE_STAGE(SWITCH, IN,  ACL_ACTION,     9, "ls_in_acl_action")    \
> +    PIPELINE_STAGE(SWITCH, IN,  QOS_MARK,      10, "ls_in_qos_mark")      \
> +    PIPELINE_STAGE(SWITCH, IN,  QOS_METER,     11, "ls_in_qos_meter")     \
> +    PIPELINE_STAGE(SWITCH, IN,  LB_AFF_CHECK,  12, "ls_in_lb_aff_check")  \
> +    PIPELINE_STAGE(SWITCH, IN,  LB,            13, "ls_in_lb")            \
> +    PIPELINE_STAGE(SWITCH, IN,  LB_AFF_LEARN,  14, "ls_in_lb_aff_learn")  \
> +    PIPELINE_STAGE(SWITCH, IN,  PRE_HAIRPIN,   15, "ls_in_pre_hairpin")   \
> +    PIPELINE_STAGE(SWITCH, IN,  NAT_HAIRPIN,   16, "ls_in_nat_hairpin")   \
> +    PIPELINE_STAGE(SWITCH, IN,  HAIRPIN,       17, "ls_in_hairpin")       \
> +    PIPELINE_STAGE(SWITCH, IN,  ACL_AFTER_LB_EVAL,  18, \
> +                   "ls_in_acl_after_lb_eval")  \
> +    PIPELINE_STAGE(SWITCH, IN,  ACL_AFTER_LB_ACTION,  19, \
> +                   "ls_in_acl_after_lb_action")  \
> +    PIPELINE_STAGE(SWITCH, IN,  STATEFUL,      20, "ls_in_stateful")      \
> +    PIPELINE_STAGE(SWITCH, IN,  ARP_ND_RSP,    21, "ls_in_arp_rsp")       \
> +    PIPELINE_STAGE(SWITCH, IN,  DHCP_OPTIONS,  22, "ls_in_dhcp_options")  \
> +    PIPELINE_STAGE(SWITCH, IN,  DHCP_RESPONSE, 23, "ls_in_dhcp_response") \
> +    PIPELINE_STAGE(SWITCH, IN,  DNS_LOOKUP,    24, "ls_in_dns_lookup")    \
> +    PIPELINE_STAGE(SWITCH, IN,  DNS_RESPONSE,  25, "ls_in_dns_response")  \
> +    PIPELINE_STAGE(SWITCH, IN,  EXTERNAL_PORT, 26, "ls_in_external_port") \
> +    PIPELINE_STAGE(SWITCH, IN,  L2_LKUP,       27, "ls_in_l2_lkup")       \
> +    PIPELINE_STAGE(SWITCH, IN,  L2_UNKNOWN,    28, "ls_in_l2_unknown")    \
> +                                                                          \
> +    /* Logical switch egress stages. */                                   \
> +    PIPELINE_STAGE(SWITCH, OUT, PRE_ACL,      0, "ls_out_pre_acl")        \
> +    PIPELINE_STAGE(SWITCH, OUT, PRE_LB,       1, "ls_out_pre_lb")         \
> +    PIPELINE_STAGE(SWITCH, OUT, PRE_STATEFUL, 2, "ls_out_pre_stateful")   \
> +    PIPELINE_STAGE(SWITCH, OUT, ACL_HINT,     3, "ls_out_acl_hint")       \
> +    PIPELINE_STAGE(SWITCH, OUT, ACL_EVAL,     4, "ls_out_acl_eval")       \
> +    PIPELINE_STAGE(SWITCH, OUT, ACL_ACTION,   5, "ls_out_acl_action")     \
> +    PIPELINE_STAGE(SWITCH, OUT, QOS_MARK,     6, "ls_out_qos_mark")       \
> +    PIPELINE_STAGE(SWITCH, OUT, QOS_METER,    7, "ls_out_qos_meter")      \
> +    PIPELINE_STAGE(SWITCH, OUT, STATEFUL,     8, "ls_out_stateful")       \
> +    PIPELINE_STAGE(SWITCH, OUT, CHECK_PORT_SEC,  9, "ls_out_check_port_sec") \
> +    PIPELINE_STAGE(SWITCH, OUT, APPLY_PORT_SEC, 10, "ls_out_apply_port_sec") \
> +                                                                      \
> +    /* Logical router ingress stages. */                              \
> +    PIPELINE_STAGE(ROUTER, IN,  ADMISSION,       0, "lr_in_admission")    \
> +    PIPELINE_STAGE(ROUTER, IN,  LOOKUP_NEIGHBOR, 1, "lr_in_lookup_neighbor") \
> +    PIPELINE_STAGE(ROUTER, IN,  LEARN_NEIGHBOR,  2, "lr_in_learn_neighbor") \
> +    PIPELINE_STAGE(ROUTER, IN,  IP_INPUT,        3, "lr_in_ip_input")     \
> +    PIPELINE_STAGE(ROUTER, IN,  UNSNAT,          4, "lr_in_unsnat")       \
> +    PIPELINE_STAGE(ROUTER, IN,  DEFRAG,          5, "lr_in_defrag")       \
> +    PIPELINE_STAGE(ROUTER, IN,  LB_AFF_CHECK,    6, "lr_in_lb_aff_check") \
> +    PIPELINE_STAGE(ROUTER, IN,  DNAT,            7, "lr_in_dnat")         \
> +    PIPELINE_STAGE(ROUTER, IN,  LB_AFF_LEARN,    8, "lr_in_lb_aff_learn") \
> +    PIPELINE_STAGE(ROUTER, IN,  ECMP_STATEFUL,   9, "lr_in_ecmp_stateful") \
> +    PIPELINE_STAGE(ROUTER, IN,  ND_RA_OPTIONS,   10, "lr_in_nd_ra_options") \
> +    PIPELINE_STAGE(ROUTER, IN,  ND_RA_RESPONSE,  11, "lr_in_nd_ra_response") \
> +    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_PRE,  12, "lr_in_ip_routing_pre")  \
> +    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING,      13, "lr_in_ip_routing")      \
> +    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_ECMP, 14, "lr_in_ip_routing_ecmp") \
> +    PIPELINE_STAGE(ROUTER, IN,  POLICY,          15, "lr_in_policy")          \
> +    PIPELINE_STAGE(ROUTER, IN,  POLICY_ECMP,     16, "lr_in_policy_ecmp")     \
> +    PIPELINE_STAGE(ROUTER, IN,  ARP_RESOLVE,     17, "lr_in_arp_resolve")     \
> +    PIPELINE_STAGE(ROUTER, IN,  CHK_PKT_LEN,     18, "lr_in_chk_pkt_len")     \
> +    PIPELINE_STAGE(ROUTER, IN,  LARGER_PKTS,     19, "lr_in_larger_pkts")     \
> +    PIPELINE_STAGE(ROUTER, IN,  GW_REDIRECT,     20, "lr_in_gw_redirect")     \
> +    PIPELINE_STAGE(ROUTER, IN,  ARP_REQUEST,     21, "lr_in_arp_request")     \
> +                                                                      \
> +    /* Logical router egress stages. */                               \
> +    PIPELINE_STAGE(ROUTER, OUT, CHECK_DNAT_LOCAL,   0,                       \
> +                   "lr_out_chk_dnat_local")                                  \
> +    PIPELINE_STAGE(ROUTER, OUT, UNDNAT,             1, "lr_out_undnat")      \
> +    PIPELINE_STAGE(ROUTER, OUT, POST_UNDNAT,        2, "lr_out_post_undnat") \
> +    PIPELINE_STAGE(ROUTER, OUT, SNAT,               3, "lr_out_snat")        \
> +    PIPELINE_STAGE(ROUTER, OUT, POST_SNAT,          4, "lr_out_post_snat")   \
> +    PIPELINE_STAGE(ROUTER, OUT, EGR_LOOP,           5, "lr_out_egr_loop")    \
> +    PIPELINE_STAGE(ROUTER, OUT, DELIVERY,           6, "lr_out_delivery")
> +
> +#define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)   \
> +    S_##DP_TYPE##_##PIPELINE##_##STAGE                          \
> +        = OVN_STAGE_BUILD(DP_##DP_TYPE, P_##PIPELINE, TABLE),
> +    PIPELINE_STAGES
> +#undef PIPELINE_STAGE
> +};
> +
> +enum ovn_datapath_type ovn_stage_to_datapath_type(enum ovn_stage stage);
> +
> +
> +/* Returns 'od''s datapath type. */
> +static inline enum ovn_datapath_type
> +ovn_datapath_get_type(const struct ovn_datapath *od)
> +{
> +    return od->nbs ? DP_SWITCH : DP_ROUTER;
> +}
> +
> +/* Returns an "enum ovn_stage" built from the arguments. */
> +static inline enum ovn_stage
> +ovn_stage_build(enum ovn_datapath_type dp_type, enum ovn_pipeline pipeline,
> +                uint8_t table)
> +{
> +    return OVN_STAGE_BUILD(dp_type, pipeline, table);
> +}
> +
> +/* Returns the pipeline to which 'stage' belongs. */
> +static inline enum ovn_pipeline
> +ovn_stage_get_pipeline(enum ovn_stage stage)
> +{
> +    return (stage >> 8) & 1;
> +}
> +
> +/* Returns the pipeline name to which 'stage' belongs. */
> +static inline const char *
> +ovn_stage_get_pipeline_name(enum ovn_stage stage)
> +{
> +    return ovn_stage_get_pipeline(stage) == P_IN ? "ingress" : "egress";
> +}
> +
> +/* Returns the table to which 'stage' belongs. */
> +static inline uint8_t
> +ovn_stage_get_table(enum ovn_stage stage)
> +{
> +    return stage & 0xff;
> +}
> +
> +/* Returns a string name for 'stage'. */
> +static inline const char *
> +ovn_stage_to_str(enum ovn_stage stage)
> +{
> +    switch (stage) {
> +#define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)       \
> +        case S_##DP_TYPE##_##PIPELINE##_##STAGE: return NAME;
> +    PIPELINE_STAGES
> +#undef PIPELINE_STAGE
> +        default: return "<unknown>";
> +    }
> +}
> +
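A quick worked example of the encoding above, for future readers: with
DP_SWITCH == 0 and DP_ROUTER == 1 as declared here, and assuming the usual
P_IN == 0 / P_OUT == 1 values of enum ovn_pipeline (consistent with the
single-bit extraction in ovn_stage_get_pipeline() above), the macro packs a
stage into a small integer:

    S_SWITCH_IN_LB    = OVN_STAGE_BUILD(DP_SWITCH, P_IN, 13)
                      = (0 << 9) | (0 << 8) | 13  = 13

    S_ROUTER_OUT_SNAT = OVN_STAGE_BUILD(DP_ROUTER, P_OUT, 3)
                      = (1 << 9) | (1 << 8) | 3   = 771

    ovn_stage_get_table(S_ROUTER_OUT_SNAT)    == 3
    ovn_stage_get_pipeline(S_ROUTER_OUT_SNAT) == P_OUT
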
>  /* A logical switch port or logical router port.
>   *
>   * In steady state, an ovn_port points to a northbound Logical_Switch_Port
> @@ -441,8 +618,10 @@ struct ovn_port {
>      /* Temporarily used for traversing a list (or hmap) of ports. */
>      bool visited;
>
> -    /* List of struct lflow_ref_node that points to the lflows generated by
> -     * this ovn_port.
> +    /* Only used for the router type LSP whose peer is l3dgw_port */
> +    bool enable_router_port_acl;
> +
> +    /* Reference of lflows generated for this ovn_port.
>       *
>       * This data is initialized and destroyed by the en_northd node, but
>       * populated and used only by the en_lflow node. Ideally this data should
> @@ -460,11 +639,19 @@ struct ovn_port {
>       * Adding the list here is more straightforward. The drawback is that we
>       * need to keep in mind that this data belongs to en_lflow node, so never
>       * access it from any other nodes.
> +     *
> +     * 'lflow_ref' is used to reference generic logical flows generated for
> +     *  this ovn_port.
> +     *
> +     * 'stateful_lflow_ref' is used for logical switch ports of type
> +     * 'patch/router' to reference logical flows generated for this ovn_port
> +     *  from the 'lr_stateful' record of the peer port's datapath.
> +     *
> +     * Note: lflow_ref is not thread safe.  Only one thread should
> +     * access ovn_ports->lflow_ref at any given time.
>       */
> -    struct ovs_list lflows;
> -
> -    /* Only used for the router type LSP whose peer is l3dgw_port */
> -    bool enable_router_port_acl;
> +    struct lflow_ref *lflow_ref;
> +    struct lflow_ref *stateful_lflow_ref;
>  };
>
>  void ovnnb_db_run(struct northd_input *input_data,
> @@ -487,13 +674,17 @@ void northd_destroy(struct northd_data *data);
>  void northd_init(struct northd_data *data);
>  void northd_indices_create(struct northd_data *data,
>                             struct ovsdb_idl *ovnsb_idl);
> +
> +struct lflow_table;
>  void build_lflows(struct ovsdb_idl_txn *ovnsb_txn,
>                    struct lflow_input *input_data,
> -                  struct hmap *lflows);
> +                  struct lflow_table *);
> +void lflow_reset_northd_refs(struct lflow_input *);
> +
>  bool lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
>                                        struct tracked_ovn_ports *,
>                                        struct lflow_input *,
> -                                      struct hmap *lflows);
> +                                      struct lflow_table *lflows);
>  bool northd_handle_sb_port_binding_changes(
>      const struct sbrec_port_binding_table *, struct hmap *ls_ports,
>      struct hmap *lr_ports);
> diff --git a/northd/ovn-northd.c b/northd/ovn-northd.c
> index deb3194cbd..0c0c00ca6d 100644
> --- a/northd/ovn-northd.c
> +++ b/northd/ovn-northd.c
> @@ -856,6 +856,10 @@ main(int argc, char *argv[])
>          ovsdb_idl_omit_alert(ovnsb_idl_loop.idl,
>                               &sbrec_port_group_columns[i]);
>      }
> +    for (size_t i = 0; i < SBREC_LOGICAL_DP_GROUP_N_COLUMNS; i++) {
> +        ovsdb_idl_omit_alert(ovnsb_idl_loop.idl,
> +                             &sbrec_logical_dp_group_columns[i]);
> +    }
>
>      unixctl_command_register("sb-connection-status", "", 0, 0,
>                               ovn_conn_show, ovnsb_idl_loop.idl);
> diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at
> index f5cf4f25c9..25e45506b7 100644
> --- a/tests/ovn-northd.at
> +++ b/tests/ovn-northd.at
> @@ -11352,6 +11352,222 @@ CHECK_NO_CHANGE_AFTER_RECOMPUTE
>  AT_CLEANUP
>  ])
>
> +OVN_FOR_EACH_NORTHD_NO_HV([
> +AT_SETUP([Load balancer incremental processing for multiple LBs with same VIPs])
> +ovn_start
> +
> +check ovn-nbctl ls-add sw0
> +check ovn-nbctl ls-add sw1
> +check ovn-nbctl lb-add lb1 10.0.0.10:80 10.0.0.3:80
> +check ovn-nbctl --wait=sb lb-add lb2 10.0.0.10:80 10.0.0.3:80
> +
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> +check ovn-nbctl --wait=sb ls-lb-add sw0 lb1
> +check_engine_stats lflow recompute nocompute
> +CHECK_NO_CHANGE_AFTER_RECOMPUTE
> +
> +lb_lflow_uuid=$(fetch_column Logical_flow _uuid match='"ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80"')
> +sw0_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw0)
> +
> +lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dp" = "$sw0_uuid"])
> +
> +lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dpgrp" = ""])
> +
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> +check ovn-nbctl --wait=sb ls-lb-add sw1 lb2
> +check_engine_stats lflow recompute nocompute
> +CHECK_NO_CHANGE_AFTER_RECOMPUTE
> +
> +lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dp" = ""])
> +
> +lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dpgrp" != ""])
> +
> +# Clear the SB:Logical_Flow.logical_dp_group column of all the
> +# logical flows and then modify the NB:Load_balancer.  ovn-northd
> +# should resync the logical flows.
> +for l in $(ovn-sbctl --bare --columns _uuid list logical_flow)
> +do
> +    ovn-sbctl clear logical_flow $l logical_dp_group
> +done
> +
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> +check ovn-nbctl --wait=sb set load_balancer lb1 options:foo=bar
> +check_engine_stats lflow recompute nocompute
> +CHECK_NO_CHANGE_AFTER_RECOMPUTE
> +
> +lb_lflow_uuid=$(fetch_column Logical_flow _uuid match='"ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80"')
> +
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> +check ovn-nbctl --wait=sb clear load_balancer lb2 vips
> +check_engine_stats lflow recompute nocompute
> +CHECK_NO_CHANGE_AFTER_RECOMPUTE
> +
> +lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dp" = "$sw0_uuid"])
> +
> +lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dpgrp" = ""])
> +
> +# Add back the vip to lb2.
> +check ovn-nbctl lb-add lb2 10.0.0.10:80 10.0.0.3:80
> +
> +# Create additional logical switches and associate lb1 to sw0, sw1 and sw2
> +# and associate lb2 to sw3, sw4 and sw5
> +check ovn-nbctl ls-add sw2
> +check ovn-nbctl ls-add sw3
> +check ovn-nbctl ls-add sw4
> +check ovn-nbctl ls-add sw5
> +check ovn-nbctl --wait=sb ls-lb-del sw1 lb2
> +check ovn-nbctl ls-lb-add sw1 lb1
> +check ovn-nbctl ls-lb-add sw2 lb1
> +check ovn-nbctl ls-lb-add sw3 lb2
> +check ovn-nbctl ls-lb-add sw4 lb2
> +
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> +check ovn-nbctl --wait=sb ls-lb-add sw5 lb2
> +check_engine_stats lflow recompute nocompute
> +CHECK_NO_CHANGE_AFTER_RECOMPUTE
> +
> +lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dp" = ""])
> +
> +lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dpgrp" != ""])
> +
> +sw1_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw1)
> +sw2_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw2)
> +sw3_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw3)
> +sw4_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw4)
> +sw5_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw5)
> +
> +dpgrp_dps=$(ovn-sbctl --bare --columns datapaths list logical_dp_group $lb_lflow_dpgrp)
> +
> +AT_CHECK([echo $dpgrp_dps | grep $sw0_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw1_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw2_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw3_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw4_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw5_uuid], [0], [ignore])
> +
> +echo "dpgrp_dps - $dpgrp_dps"
> +
> +# Clear the vips for lb2.  The lb logical flow's dp group should have
> +# only the sw0, sw1 and sw2 uuids.
> +
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> +check ovn-nbctl --wait=sb clear load_balancer lb2 vips
> +check_engine_stats lflow recompute nocompute
> +CHECK_NO_CHANGE_AFTER_RECOMPUTE
> +
> +lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dpgrp" != ""])
> +
> +dpgrp_dps=$(ovn-sbctl --bare --columns datapaths list logical_dp_group $lb_lflow_dpgrp)
> +
> +AT_CHECK([echo $dpgrp_dps | grep $sw0_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw1_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw2_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw3_uuid], [1], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw4_uuid], [1], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw5_uuid], [1], [ignore])
> +
> +# Clear the vips for lb1.  The logical flow should be deleted.
> +check ovn-nbctl --wait=sb clear load_balancer lb1 vips
> +
> +AT_CHECK([ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid], [1], [ignore], [ignore])
> +
> +lb_lflow_uuid=$(fetch_column Logical_flow _uuid match='"ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80"')
> +AT_CHECK([test "$lb_lflow_uuid" = ""])
> +
> +
> +# Now add back the vips, create another lb with the same vips and associate
> +# it with sw0 and sw1.
> +check ovn-nbctl lb-add lb1 10.0.0.10:80 10.0.0.3:80
> +check ovn-nbctl lb-add lb2 10.0.0.10:80 10.0.0.3:80
> +check ovn-nbctl --wait=sb lb-add lb3 10.0.0.10:80 10.0.0.3:80
> +
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> +
> +check ovn-nbctl ls-lb-add sw0 lb3
> +check ovn-nbctl --wait=sb ls-lb-add sw1 lb3
> +check_engine_stats lflow recompute nocompute
> +CHECK_NO_CHANGE_AFTER_RECOMPUTE
> +
> +lb_lflow_uuid=$(fetch_column Logical_flow _uuid match='"ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80"')
> +
> +lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dp" = ""])
> +
> +lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dpgrp" != ""])
> +
> +dpgrp_dps=$(ovn-sbctl --bare --columns datapaths list logical_dp_group $lb_lflow_dpgrp)
> +
> +AT_CHECK([echo $dpgrp_dps | grep $sw0_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw1_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw2_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw3_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw4_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw5_uuid], [0], [ignore])
> +
> +# Now clear lb1 vips.
> +# Since lb3 is associated with sw0 and sw1, the logical flow dp group
> +# should have references to sw0 and sw1, but not to sw2.
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> +check ovn-nbctl --wait=sb clear load_balancer lb1 vips
> +check_engine_stats lflow recompute nocompute
> +CHECK_NO_CHANGE_AFTER_RECOMPUTE
> +
> +lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dp" = ""])
> +
> +lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dpgrp" != ""])
> +
> +dpgrp_dps=$(ovn-sbctl --bare --columns datapaths list logical_dp_group $lb_lflow_dpgrp)
> +
> +echo "dpgrp dps - $dpgrp_dps"
> +
> +AT_CHECK([echo $dpgrp_dps | grep $sw0_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw1_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw2_uuid], [1], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw3_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw4_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw5_uuid], [0], [ignore])
> +
> +# Now clear lb3 vips.  The logical flow dp group
> +# should have references only to sw3, sw4 and sw5 because lb2 is
> +# associated with them.
> +
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> +check ovn-nbctl --wait=sb clear load_balancer lb3 vips
> +check_engine_stats lflow recompute nocompute
> +CHECK_NO_CHANGE_AFTER_RECOMPUTE
> +
> +lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dp" = ""])
> +
> +lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dpgrp" != ""])
> +
> +dpgrp_dps=$(ovn-sbctl --bare --columns datapaths list logical_dp_group $lb_lflow_dpgrp)
> +
> +echo "dpgrp dps - $dpgrp_dps"
> +
> +AT_CHECK([echo $dpgrp_dps | grep $sw0_uuid], [1], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw1_uuid], [1], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw2_uuid], [1], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw3_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw4_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw5_uuid], [0], [ignore])
> +
> +AT_CLEANUP
> +])
> +
>  OVN_FOR_EACH_NORTHD_NO_HV([
>  AT_SETUP([Logical router incremental processing for NAT])
>
> --
> 2.43.0
>
> _______________________________________________
> dev mailing list
> dev@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
>
Dumitru Ceara Feb. 2, 2024, 12:08 p.m. UTC | #2
On 1/30/24 22:21, numans@ovn.org wrote:
> From: Numan Siddique <numans@ovn.org>
> 
> ovn_lflow_add() and other related functions/macros are now moved
> into a separate module - lflow-mgr.c.  This module maintains a
> table 'struct lflow_table' for the logical flows.  lflow table
> maintains a hmap to store the logical flows.
> 
> It also maintains the logical switch and router dp groups.
> 
> Previous commits which added lflow incremental processing for
> the VIF logical ports, stored the references to
> the logical ports' lflows using 'struct lflow_ref_list'.  This
> struct is renamed to 'struct lflow_ref' and is part of lflow-mgr.c.
> It is  modified a bit to store the resource to lflow references.
> 
> Example usage of 'struct lflow_ref'.
> 
> 'struct ovn_port' maintains 2 instances of lflow_ref.  i,e
> 
> struct ovn_port {
>    ...
>    ...
>    struct lflow_ref *lflow_ref;
>    struct lflow_ref *stateful_lflow_ref;
> };
> 
> All the logical flows generated by
> build_lswitch_and_lrouter_iterate_by_lsp() uses the ovn_port->lflow_ref.
> 
> All the logical flows generated by build_lsp_lflows_for_lbnats()
> uses the ovn_port->stateful_lflow_ref.
> 
> When handling the ovn_port changes incrementally, the lflows referenced
> in 'struct ovn_port' are cleared and regenerated and synced to the
> SB logical flows.
> 
> eg.
> 
> lflow_ref_clear_lflows(op->lflow_ref);
> build_lswitch_and_lrouter_iterate_by_lsp(op, ...);
> lflow_ref_sync_lflows_to_sb(op->lflow_ref, ...);
> 
> This patch does few more changes:
>   -  Logical flows are now hashed without the logical
>      datapaths.  If a logical flow is referenced by just one
>      datapath, we don't rehash it.
> 
>   -  The synthetic 'hash' column of sbrec_logical_flow now
>      doesn't use the logical datapath.  This means that
>      when ovn-northd is updated/upgraded and has this commit,
>      all the logical flows with 'logical_datapath' column
>      set will get deleted and re-added causing some disruptions.
> 
>   -  With the commit [1] which added I-P support for logical
>      port changes, multiple logical flows with same match 'M'
>      and actions 'A' are generated and stored without the
>      dp groups, which was not the case prior to
>      that patch.
>      One example to generate these lflows is:
>              ovn-nbctl lsp-set-addresses sw0p1 "MAC1 IP1"
>              ovn-nbctl lsp-set-addresses sw1p1 "MAC1 IP1"
> 	     ovn-nbctl lsp-set-addresses sw2p1 "MAC1 IP1"
> 
>      Now with this patch we go back to the earlier way.  i.e
>      one logical flow with logical_dp_groups set.
> 
>   -  With this patch any updates to a logical port which
>      doesn't result in new logical flows will not result in
>      deletion and addition of same logical flows.
>      Eg.
>      ovn-nbctl set logical_switch_port sw0p1 external_ids:foo=bar
>      will be a no-op to the SB logical flow table.
> 
> [1] - 8bbd678("northd: Incremental processing of VIF additions in 'lflow' node.")
> 
> Signed-off-by: Numan Siddique <numans@ovn.org>
> ---

[...]

> +
> +/* Logical flow sync using 'struct lflow_ref'
> + * ==========================================
> + * The 'struct lflow_ref' represents a collection of (or references to)
> + * logical flows (struct ovn_lflow) which belong to a logical entity 'E'.
> + * This entity 'E' is external to the lflow manager (see northd.h and northd.c).
> + * Eg. logical datapath (struct ovn_datapath), logical switch and router ports
> + * (struct ovn_port), load balancer (struct lb_datapath) etc.
> + *
> + * General guidelines on using 'struct lflow_ref'.
> + *   - For an entity 'E', create an instance of lflow_ref
> + *           E->lflow_ref = lflow_ref_create();
> + *
> + *   - For each logical flow L(M, A) generated for the entity 'E'
> + *     pass E->lflow_ref when adding L(M, A) to the lflow table.
> + *     Eg. lflow_table_add_lflow(lflow_table, od_of_E, M, A, .., E->lflow_ref);
> + *
> + * If lflows L1, L2 and L3 are generated for 'E', then
> + * E->lflow_ref stores these in its hmap.
> + * i.e E->lflow_ref->lflow_ref_nodes = hmap[LRN(L1, E1), LRN(L2, E1),
> + *                                          LRN(L3, E1)]
> + *
> + * LRN is an instance of 'struct lflow_ref_node'.
> + * 'struct lflow_ref_node' is used to store a logical lflow L(M, A) as a
> + * reference in the lflow_ref.  It is possible that an lflow L(M,A) can be
> + * referenced by one or more lflow_ref's.  For each reference, an instance of
> + * this struct 'lflow_ref_node' is created.
> + *
> + * For example, if entity E1 generates lflows L1, L2 and L3
> + * and entity E2 generates lflows L1, L3, and L4 then
> + * an instance of this struct is created for each entity.
> + * For example LRN(L1, E1).
> + *
> + * Each logical flow L also maintains a list of its references in the
> + * ovn_lflow->referenced_by list.
> + *
> + *
> + *
> + *                L1            L2             L3             L4
> + *                |             |  (list)      |              |
> + *   (lflow_ref)  v             v              v              v
> + *  ----------------------------------------------------------------------
> + * | E1 (hmap) => LRN(L1,E1) => LRN(L2, E1) => LRN(L3, E1)    |           |
> + * |              |                            |              |           |
> + * |              v                            v              v           |
> + * | E2 (hmap) => LRN(L1,E2) ================> LRN(L3, E2) => LRN(L4, E2) |
> + *  ----------------------------------------------------------------------
> + *
> + *
> + * Life cycle of 'struct lflow_ref_node'
> + * =====================================
> + * For a given logical flow L1 and entity E1's lflow_ref,
> + *  1. LRN(L1, E1) is created in lflow_table_add_lflow() and its 'linked' flag
> + *     is set to true.
> + *  2. LRN(L1, E1) is stored in the hmap - E1->lflow_ref->lflow_ref_nodes.
> + *  3. LRN(L1, E1) is also stored in the linked list L1->referenced_by.
> + *  4. LRN(L1, E1)->linked is set to false when the client calls
> + *     lflow_ref_unlink_lflows(E1->lflow_ref).
> + *  5. LRN(L1, E1)->linked is set to true again when the client calls
> + *     lflow_table_add_lflow(L1, ..., E1->lflow_ref) and LRN(L1, E1)
> + *     is already present.
> + *  6. LRN(L1, E1) is destroyed if LRN(L1, E1)->linked is false
> + *     when the client calls lflow_ref_sync_lflows().
> + *  7. LRN(L1, E1) is also destroyed in lflow_ref_clear(E1->lflow_ref).
> + *
> + *
> + * Incremental lflow generation for a logical entity
> + * =================================================
> + * Let's take the above example again.
> + *
> + *
> + *                L1            L2             L3             L4
> + *                |             |  (list)      |              |
> + *   (lflow_ref)  v             v              v              v
> + *  ----------------------------------------------------------------------
> + * | E1 (hmap) => LRN(L1,E1) => LRN(L2, E1) => LRN(L3, E1)    |           |
> + * |              |                            |              |           |
> + * |              v                            v              v           |
> + * | E2 (hmap) => LRN(L1,E2) ================> LRN(L3, E2) => LRN(L4, E2) |
> + *  ----------------------------------------------------------------------
> + *
> + *
> + * L1 is referenced by E1 and E2
> + * L2 is referenced by just E1
> + * L3 is referenced by E1 and E2
> + * L4 is referenced by just E2
> + *
> + * L1->dpg_bitmap = [E1->od->index, E2->od->index]
> + * L2->dpg_bitmap = [E1->od->index]
> + * L3->dpg_bitmap = [E1->od->index, E2->od->index]
> + * L4->dpg_bitmap = [E2->od->index]
> + *
> + *
> + * When 'E' gets updated,
> + *   1.  the client should first call
> + *       lflow_ref_unlink_lflows(E1->lflow_ref);
> + *
> + *       This function sets the 'linked' flag to false and clears the dp bitmap
> + *       of linked lflows.
> + *
> + *       LRN(L1,E1)->linked = false;
> + *       LRN(L2,E1)->linked = false;
> + *       LRN(L3,E1)->linked = false;
> + *
> + *       bitmap status of all lflows in the lflows table
> + *       -----------------------------------------------
> + *       L1->dpg_bitmap = [E2->od->index]
> + *       L2->dpg_bitmap = []
> + *       L3->dpg_bitmap = [E2->od->index]
> + *       L4->dpg_bitmap = [E2->od->index]
> + *
> + *   2.  In step (2), the client should generate the logical flows again
> + *       for 'E1'.  Let's say it calls:
> + *       lflow_table_add_lflow(lflow_table, L3, E1->lflow_ref)
> + *       lflow_table_add_lflow(lflow_table, L5, E1->lflow_ref)
> + *
> + *       So, E1 generates the flows L3 and L5 and discards L1 and L2.
> + *
> + *       Below is the state of LRNs of E1
> + *       LRN(L1,E1)->linked = false;
> + *       LRN(L2,E1)->linked = false;
> + *       LRN(L3,E1)->linked = true;
> + *       LRN(L5,E1)->linked = true;
> + *
> + *       bitmap status of all lflows in the lflow table after end of step (2)
> + *       --------------------------------------------------------------------
> + *       L1->dpg_bitmap = [E2->od->index]
> + *       L2->dpg_bitmap = []
> + *       L3->dpg_bitmap = [E1->od->index, E2->od->index]
> + *       L4->dpg_bitmap = [E2->od->index]
> + *       L5->dpg_bitmap = [E1->od->index]
> + *
> + *   3.  In step (3), the client should sync E1's lflows by calling
> + *       lflow_ref_sync_lflows(E1->lflow_ref,....);
> + *
> + *       Below is how the logical flows in SB DB gets updated:
> + *       lflow L1:
> + *              SB:L1->logical_dp_group = NULL;
> + *              SB:L1->logical_datapath = E2->od;
> + *
> + *       lflow L2: L2 is deleted since no datapath is using it.
> + *
> + *       lflow L3: No changes
> + *
> + *       lflow L5: New row is created for this.
> + *
> + * After step (3)
> + *
> + *                L1            L5             L3             L4
> + *                |             |  (list)      |              |
> + *   (lflow_ref)  v             v              v              v
> + *  ----------------------------------------------------------------------
> + * | E1 (hmap) ===============> LRN(L5, E1) => LRN(L3, E1)    |           |
> + * |              |                            |              |           |
> + * |              v                            v              v           |
> + * | E2 (hmap) => LRN(L1,E2) ================> LRN(L3, E2) => LRN(L4, E2) |
> + *  ----------------------------------------------------------------------
> + *
> + * Thread safety in lflow_ref
> + * ==========================
> + * The function lflow_table_add_lflow() is not thread safe for lflow_ref.
> + * The client should ensure that the same lflow_ref instance is not used
> + * by multiple threads when calling lflow_table_add_lflow().
> + *
> + * One way to ensure thread safety is to maintain array of hash locks
> + * in each lflow_ref just like how we have static variable lflow_hash_locks
> + * of type ovs_mutex. This would mean that client has to reconsile the

s/reconsile/reconcile

> + * lflow_ref hmap lflow_ref_nodes (by calling hmap_expand()) after the
> + * lflow generation is complete.  (See lflow_table_expand()).
> + *
> + * Presently the client of lflow manager (northd.c) doesn't call
> + * lflow_table_add_lflow() in multiple threads for the same lflow_ref.
> + * But it may change in the future and we may need to add the thread
> + * safety support.
> + *
> + * Until then care should be taken by the contributors to avoid this
> + * scenario.
> + */

Thanks for documenting this!
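
To make the cycle concrete, here is a minimal sketch of the client-side
calling sequence for an entity E, with argument lists abbreviated the same
way as in the comment above (the real lflow_table_add_lflow() and
lflow_ref_sync_lflows() take more parameters than shown):

    /* One-time setup for entity E. */
    E->lflow_ref = lflow_ref_create();

    /* On every change to E:
     * 1. Unlink: clear E's bits from the dpg_bitmap of every lflow that
     *    E->lflow_ref currently references and mark those LRNs unlinked. */
    lflow_ref_unlink_lflows(E->lflow_ref);

    /* 2. Regenerate: re-adding an identical (M, A) re-links the existing
     *    LRN; genuinely new flows create new ovn_lflow entries. */
    lflow_table_add_lflow(lflow_table, od_of_E, M, A, ..., E->lflow_ref);

    /* 3. Sync: only the SB Logical_Flow rows backing the lflows referenced
     *    by E->lflow_ref are inserted/updated/deleted; LRNs still unlinked
     *    at this point are destroyed. */
    lflow_ref_sync_lflows(E->lflow_ref, ...);

    /* Full teardown, e.g. on a recompute. */
    lflow_ref_clear(E->lflow_ref);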

>  
> +extern int parallelization_state;
> +enum {
> +    STATE_NULL,               /* parallelization is off */
> +    STATE_INIT_HASH_SIZES,    /* parallelization is on; hashes sizing needed */
> +    STATE_USE_PARALLELIZATION /* parallelization is on */
> +};
> +
> +extern thread_local size_t thread_lflow_counter;
> +

I'm not a fan of this TBH.  We define and initialize this counter in
northd.c but we increment it here.  I think this is a sign that we
didn't split out the lflow management properly.  I guess the logic that
runs the lflow build in parallel (currently in northd.c) should be more
abstract and should be moved to lflow-mgr.c.

I think that's a large change too, though, so it's not really possible
to do in this release cycle, but can we please add a TODO item for it?

With these small comments addressed:
Acked-by: Dumitru Ceara <dceara@redhat.com>

Thanks,
Dumitru
diff mbox series

Patch

diff --git a/lib/ovn-util.c b/lib/ovn-util.c
index 3e69a25347..ee5cbcdc3c 100644
--- a/lib/ovn-util.c
+++ b/lib/ovn-util.c
@@ -622,13 +622,10 @@  ovn_pipeline_from_name(const char *pipeline)
 uint32_t
 sbrec_logical_flow_hash(const struct sbrec_logical_flow *lf)
 {
-    const struct sbrec_datapath_binding *ld = lf->logical_datapath;
-    uint32_t hash = ovn_logical_flow_hash(lf->table_id,
-                                          ovn_pipeline_from_name(lf->pipeline),
-                                          lf->priority, lf->match,
-                                          lf->actions);
-
-    return ld ? ovn_logical_flow_hash_datapath(&ld->header_.uuid, hash) : hash;
+    return ovn_logical_flow_hash(lf->table_id,
+                                 ovn_pipeline_from_name(lf->pipeline),
+                                 lf->priority, lf->match,
+                                 lf->actions);
 }
 
 uint32_t
@@ -641,13 +638,6 @@  ovn_logical_flow_hash(uint8_t table_id, enum ovn_pipeline pipeline,
     return hash_string(actions, hash);
 }
 
-uint32_t
-ovn_logical_flow_hash_datapath(const struct uuid *logical_datapath,
-                               uint32_t hash)
-{
-    return hash_add(hash, uuid_hash(logical_datapath));
-}
-
 
 struct tnlid_node {
     struct hmap_node hmap_node;
diff --git a/lib/ovn-util.h b/lib/ovn-util.h
index 16e054812c..042e6bf82c 100644
--- a/lib/ovn-util.h
+++ b/lib/ovn-util.h
@@ -146,8 +146,6 @@  uint32_t sbrec_logical_flow_hash(const struct sbrec_logical_flow *);
 uint32_t ovn_logical_flow_hash(uint8_t table_id, enum ovn_pipeline pipeline,
                                uint16_t priority,
                                const char *match, const char *actions);
-uint32_t ovn_logical_flow_hash_datapath(const struct uuid *logical_datapath,
-                                        uint32_t hash);
 void ovn_conn_show(struct unixctl_conn *conn, int argc OVS_UNUSED,
                    const char *argv[] OVS_UNUSED, void *idl_);
 
diff --git a/northd/automake.mk b/northd/automake.mk
index a178541759..7c6d56a4ff 100644
--- a/northd/automake.mk
+++ b/northd/automake.mk
@@ -33,7 +33,9 @@  northd_ovn_northd_SOURCES = \
 	northd/inc-proc-northd.c \
 	northd/inc-proc-northd.h \
 	northd/ipam.c \
-	northd/ipam.h
+	northd/ipam.h \
+	northd/lflow-mgr.c \
+	northd/lflow-mgr.h
 northd_ovn_northd_LDADD = \
 	lib/libovn.la \
 	$(OVSDB_LIBDIR)/libovsdb.la \
diff --git a/northd/en-lflow.c b/northd/en-lflow.c
index b0161b98d9..fafdc24465 100644
--- a/northd/en-lflow.c
+++ b/northd/en-lflow.c
@@ -24,6 +24,7 @@ 
 #include "en-ls-stateful.h"
 #include "en-northd.h"
 #include "en-meters.h"
+#include "lflow-mgr.h"
 
 #include "lib/inc-proc-eng.h"
 #include "northd.h"
@@ -58,6 +59,8 @@  lflow_get_input_data(struct engine_node *node,
         EN_OVSDB_GET(engine_get_input("SB_multicast_group", node));
     lflow_input->sbrec_igmp_group_table =
         EN_OVSDB_GET(engine_get_input("SB_igmp_group", node));
+    lflow_input->sbrec_logical_dp_group_table =
+        EN_OVSDB_GET(engine_get_input("SB_logical_dp_group", node));
 
     lflow_input->sbrec_mcast_group_by_name_dp =
            engine_ovsdb_node_get_index(
@@ -90,17 +93,19 @@  void en_lflow_run(struct engine_node *node, void *data)
     struct hmap bfd_connections = HMAP_INITIALIZER(&bfd_connections);
     lflow_input.bfd_connections = &bfd_connections;
 
+    stopwatch_start(BUILD_LFLOWS_STOPWATCH_NAME, time_msec());
+
     struct lflow_data *lflow_data = data;
-    lflow_data_destroy(lflow_data);
-    lflow_data_init(lflow_data);
+    lflow_table_clear(lflow_data->lflow_table);
+    lflow_reset_northd_refs(&lflow_input);
 
-    stopwatch_start(BUILD_LFLOWS_STOPWATCH_NAME, time_msec());
     build_bfd_table(eng_ctx->ovnsb_idl_txn,
                     lflow_input.nbrec_bfd_table,
                     lflow_input.sbrec_bfd_table,
                     lflow_input.lr_ports,
                     &bfd_connections);
-    build_lflows(eng_ctx->ovnsb_idl_txn, &lflow_input, &lflow_data->lflows);
+    build_lflows(eng_ctx->ovnsb_idl_txn, &lflow_input,
+                 lflow_data->lflow_table);
     bfd_cleanup_connections(lflow_input.nbrec_bfd_table,
                             &bfd_connections);
     hmap_destroy(&bfd_connections);
@@ -131,7 +136,8 @@  lflow_northd_handler(struct engine_node *node,
 
     if (!lflow_handle_northd_port_changes(eng_ctx->ovnsb_idl_txn,
                                           &northd_data->trk_data.trk_lsps,
-                                          &lflow_input, &lflow_data->lflows)) {
+                                          &lflow_input,
+                                          lflow_data->lflow_table)) {
         return false;
     }
 
@@ -160,11 +166,13 @@  void *en_lflow_init(struct engine_node *node OVS_UNUSED,
                      struct engine_arg *arg OVS_UNUSED)
 {
     struct lflow_data *data = xmalloc(sizeof *data);
-    lflow_data_init(data);
+    data->lflow_table = lflow_table_alloc();
+    lflow_table_init(data->lflow_table);
     return data;
 }
 
-void en_lflow_cleanup(void *data)
+void en_lflow_cleanup(void *data_)
 {
-    lflow_data_destroy(data);
+    struct lflow_data *data = data_;
+    lflow_table_destroy(data->lflow_table);
 }
diff --git a/northd/en-lflow.h b/northd/en-lflow.h
index 5417b2faff..f7325c56b1 100644
--- a/northd/en-lflow.h
+++ b/northd/en-lflow.h
@@ -9,6 +9,12 @@ 
 
 #include "lib/inc-proc-eng.h"
 
+struct lflow_table;
+
+struct lflow_data {
+    struct lflow_table *lflow_table;
+};
+
 void en_lflow_run(struct engine_node *node, void *data);
 void *en_lflow_init(struct engine_node *node, struct engine_arg *arg);
 void en_lflow_cleanup(void *data);
diff --git a/northd/inc-proc-northd.c b/northd/inc-proc-northd.c
index 9ce4279ee8..0e17bfe2e6 100644
--- a/northd/inc-proc-northd.c
+++ b/northd/inc-proc-northd.c
@@ -99,7 +99,8 @@  static unixctl_cb_func chassis_features_list;
     SB_NODE(bfd, "bfd") \
     SB_NODE(fdb, "fdb") \
     SB_NODE(static_mac_binding, "static_mac_binding") \
-    SB_NODE(chassis_template_var, "chassis_template_var")
+    SB_NODE(chassis_template_var, "chassis_template_var") \
+    SB_NODE(logical_dp_group, "logical_dp_group")
 
 enum sb_engine_node {
 #define SB_NODE(NAME, NAME_STR) SB_##NAME,
@@ -229,6 +230,7 @@  void inc_proc_northd_init(struct ovsdb_idl_loop *nb,
     engine_add_input(&en_lflow, &en_sb_igmp_group, NULL);
     engine_add_input(&en_lflow, &en_lr_stateful, NULL);
     engine_add_input(&en_lflow, &en_ls_stateful, NULL);
+    engine_add_input(&en_lflow, &en_sb_logical_dp_group, NULL);
     engine_add_input(&en_lflow, &en_northd, lflow_northd_handler);
     engine_add_input(&en_lflow, &en_port_group, lflow_port_group_handler);
 
diff --git a/northd/lflow-mgr.c b/northd/lflow-mgr.c
new file mode 100644
index 0000000000..3b423192bb
--- /dev/null
+++ b/northd/lflow-mgr.c
@@ -0,0 +1,1420 @@ 
+/*
+ * Copyright (c) 2024, Red Hat, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at:
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <config.h>
+
+/* OVS includes */
+#include "include/openvswitch/thread.h"
+#include "lib/bitmap.h"
+#include "openvswitch/vlog.h"
+
+/* OVN includes */
+#include "debug.h"
+#include "lflow-mgr.h"
+#include "lib/ovn-parallel-hmap.h"
+
+VLOG_DEFINE_THIS_MODULE(lflow_mgr);
+
+/* Static function declarations. */
+struct ovn_lflow;
+
+static void ovn_lflow_init(struct ovn_lflow *, struct ovn_datapath *od,
+                           size_t dp_bitmap_len, enum ovn_stage stage,
+                           uint16_t priority, char *match,
+                           char *actions, char *io_port,
+                           char *ctrl_meter, char *stage_hint,
+                           const char *where);
+static struct ovn_lflow *ovn_lflow_find(const struct hmap *lflows,
+                                        enum ovn_stage stage,
+                                        uint16_t priority, const char *match,
+                                        const char *actions,
+                                        const char *ctrl_meter, uint32_t hash);
+static void ovn_lflow_destroy(struct lflow_table *lflow_table,
+                              struct ovn_lflow *lflow);
+static char *ovn_lflow_hint(const struct ovsdb_idl_row *row);
+
+static struct ovn_lflow *do_ovn_lflow_add(
+    struct lflow_table *, const struct ovn_datapath *,
+    const unsigned long *dp_bitmap, size_t dp_bitmap_len, uint32_t hash,
+    enum ovn_stage stage, uint16_t priority, const char *match,
+    const char *actions, const char *io_port,
+    const char *ctrl_meter,
+    const struct ovsdb_idl_row *stage_hint,
+    const char *where);
+
+
+static struct ovs_mutex *lflow_hash_lock(const struct hmap *lflow_table,
+                                         uint32_t hash);
+static void lflow_hash_unlock(struct ovs_mutex *hash_lock);
+
+static struct ovn_dp_group *ovn_dp_group_get(
+    struct hmap *dp_groups, size_t desired_n,
+    const unsigned long *desired_bitmap,
+    size_t bitmap_len);
+static struct ovn_dp_group *ovn_dp_group_create(
+    struct ovsdb_idl_txn *ovnsb_txn, struct hmap *dp_groups,
+    struct sbrec_logical_dp_group *, size_t desired_n,
+    const unsigned long *desired_bitmap,
+    size_t bitmap_len, bool is_switch,
+    const struct ovn_datapaths *ls_datapaths,
+    const struct ovn_datapaths *lr_datapaths);
+static struct ovn_dp_group *ovn_dp_group_get(
+    struct hmap *dp_groups, size_t desired_n,
+    const unsigned long *desired_bitmap,
+    size_t bitmap_len);
+static struct sbrec_logical_dp_group *ovn_sb_insert_or_update_logical_dp_group(
+    struct ovsdb_idl_txn *ovnsb_txn,
+    struct sbrec_logical_dp_group *,
+    const unsigned long *dpg_bitmap,
+    const struct ovn_datapaths *);
+static struct ovn_dp_group *ovn_dp_group_find(const struct hmap *dp_groups,
+                                              const unsigned long *dpg_bitmap,
+                                              size_t bitmap_len,
+                                              uint32_t hash);
+static void ovn_dp_group_use(struct ovn_dp_group *);
+static void ovn_dp_group_release(struct hmap *dp_groups,
+                                 struct ovn_dp_group *);
+static void ovn_dp_group_destroy(struct ovn_dp_group *dpg);
+static void ovn_dp_group_add_with_reference(struct ovn_lflow *,
+                                            const struct ovn_datapath *od,
+                                            const unsigned long *dp_bitmap,
+                                            size_t bitmap_len);
+
+static bool lflow_ref_sync_lflows__(
+    struct lflow_ref  *, struct lflow_table *,
+    struct ovsdb_idl_txn *ovnsb_txn,
+    const struct ovn_datapaths *ls_datapaths,
+    const struct ovn_datapaths *lr_datapaths,
+    bool ovn_internal_version_changed,
+    const struct sbrec_logical_flow_table *,
+    const struct sbrec_logical_dp_group_table *);
+static bool sync_lflow_to_sb(struct ovn_lflow *,
+                             struct ovsdb_idl_txn *ovnsb_txn,
+                             struct lflow_table *,
+                             const struct ovn_datapaths *ls_datapaths,
+                             const struct ovn_datapaths *lr_datapaths,
+                             bool ovn_internal_version_changed,
+                             const struct sbrec_logical_flow *sbflow,
+                             const struct sbrec_logical_dp_group_table *);
+
+extern int parallelization_state;
+extern thread_local size_t thread_lflow_counter;
+
+struct dp_refcnt;
+static struct dp_refcnt *dp_refcnt_find(struct hmap *dp_refcnts_map,
+                                        size_t dp_index);
+static void dp_refcnt_use(struct hmap *dp_refcnts_map, size_t dp_index);
+static bool dp_refcnt_release(struct hmap *dp_refcnts_map, size_t dp_index);
+static void ovn_lflow_clear_dp_refcnts_map(struct ovn_lflow *);
+static struct lflow_ref_node *lflow_ref_node_find(struct hmap *lflow_ref_nodes,
+                                                  struct ovn_lflow *lflow,
+                                                  uint32_t lflow_hash);
+static void lflow_ref_node_destroy(struct lflow_ref_node *);
+
+static bool lflow_hash_lock_initialized = false;
+/* The lflow_hash_lock is a mutex array that protects updates to the shared
+ * lflow table across threads when parallel lflow build and dp-group are both
+ * enabled. To avoid high contention between threads, a big array of mutexes
+ * are used instead of just one. This is possible because when parallel build
+ * is used we only use hmap_insert_fast() to update the hmap, which would not
+ * touch the bucket array but only the list in a single bucket. We only need to
+ * make sure that when adding lflows to the same hash bucket, the same lock is
+ * used, so that no two threads can add to the bucket at the same time.  It is
+ * ok that the same lock is used to protect multiple buckets, so a fixed-size
+ * mutex array is used instead of a 1-1 mapping to the hash buckets.  This
+ * simplifies the implementation while effectively reducing lock contention,
+ * because the chance of different threads contending for the same lock among
+ * the large number of locks is very low. */
+#define LFLOW_HASH_LOCK_MASK 0xFFFF
+static struct ovs_mutex lflow_hash_locks[LFLOW_HASH_LOCK_MASK + 1];
+
+/* Full thread safety analysis is not possible with hash locks, because
+ * they are taken conditionally based on the 'parallelization_state' and
+ * a flow hash.  Also, the order in which two hash locks are taken is not
+ * predictable during the static analysis.
+ *
+ * Since the order of taking two locks depends on a random hash, to avoid
+ * ABBA deadlocks, no two hash locks can be nested.  In that sense an array
+ * of hash locks is similar to a single mutex.
+ *
+ * Using a fake mutex to partially simulate thread safety restrictions, as
+ * if it were actually a single mutex.
+ *
+ * OVS_NO_THREAD_SAFETY_ANALYSIS below allows us to ignore conditional
+ * nature of the lock.  Unlike other attributes, it applies to the
+ * implementation and not to the interface.  So, we can define a function
+ * that acquires the lock without analysing the way it does that.
+ */
+extern struct ovs_mutex fake_hash_mutex;
+
+/* Represents a logical ovn flow (lflow).
+ *
+ * A logical flow with match 'M' and actions 'A' - L(M, A) is created
+ * when lflow engine node (northd.c) calls lflow_table_add_lflow
+ * (or one of the helper macros ovn_lflow_add_*).
+ *
+ * Each lflow is stored in the lflow_table (see 'struct lflow_table' below)
+ * and possibly referenced by zero or more lflow_refs
+ * (see 'struct lflow_ref' and 'struct lflow_ref_node' below).
+ *
+ */
+struct ovn_lflow {
+    struct hmap_node hmap_node;
+
+    struct ovn_datapath *od;     /* 'logical_datapath' in SB schema.  */
+    unsigned long *dpg_bitmap;   /* Bitmap of all datapaths by their 'index'.*/
+    enum ovn_stage stage;
+    uint16_t priority;
+    char *match;
+    char *actions;
+    char *io_port;
+    char *stage_hint;
+    char *ctrl_meter;
+    size_t n_ods;                /* Number of datapaths referenced by 'od' and
+                                  * 'dpg_bitmap'. */
+    struct ovn_dp_group *dpg;    /* Link to unique Sb datapath group. */
+    const char *where;
+
+    struct uuid sb_uuid;         /* SB DB row uuid, specified by northd. */
+    struct ovs_list referenced_by;  /* List of struct lflow_ref_node. */
+    struct hmap dp_refcnts_map; /* Maintains the number of times this ovn_lflow
+                                 * is referenced by a given datapath.
+                                 * Contains 'struct dp_refcnt' in the map. */
+};
+
+/* Logical flow table. */
+struct lflow_table {
+    struct hmap entries; /* hmap of lflows. */
+    struct hmap ls_dp_groups; /* hmap of logical switch dp groups. */
+    struct hmap lr_dp_groups; /* hmap of logical router dp groups. */
+    ssize_t max_seen_lflow_size;
+};
+
+struct lflow_table *
+lflow_table_alloc(void)
+{
+    struct lflow_table *lflow_table = xzalloc(sizeof *lflow_table);
+    lflow_table->max_seen_lflow_size = 128;
+
+    return lflow_table;
+}
+
+void
+lflow_table_init(struct lflow_table *lflow_table)
+{
+    fast_hmap_size_for(&lflow_table->entries,
+                       lflow_table->max_seen_lflow_size);
+    ovn_dp_groups_init(&lflow_table->ls_dp_groups);
+    ovn_dp_groups_init(&lflow_table->lr_dp_groups);
+}
+
+void
+lflow_table_clear(struct lflow_table *lflow_table)
+{
+    struct ovn_lflow *lflow;
+    HMAP_FOR_EACH_SAFE (lflow, hmap_node, &lflow_table->entries) {
+        ovn_lflow_destroy(lflow_table, lflow);
+    }
+
+    ovn_dp_groups_clear(&lflow_table->ls_dp_groups);
+    ovn_dp_groups_clear(&lflow_table->lr_dp_groups);
+}
+
+void
+lflow_table_destroy(struct lflow_table *lflow_table)
+{
+    lflow_table_clear(lflow_table);
+    hmap_destroy(&lflow_table->entries);
+    ovn_dp_groups_destroy(&lflow_table->ls_dp_groups);
+    ovn_dp_groups_destroy(&lflow_table->lr_dp_groups);
+    free(lflow_table);
+}
+
+void
+lflow_table_expand(struct lflow_table *lflow_table)
+{
+    hmap_expand(&lflow_table->entries);
+
+    if (hmap_count(&lflow_table->entries) >
+            lflow_table->max_seen_lflow_size) {
+        lflow_table->max_seen_lflow_size = hmap_count(&lflow_table->entries);
+    }
+}
+
+void
+lflow_table_set_size(struct lflow_table *lflow_table, size_t size)
+{
+    lflow_table->entries.n = size;
+}
+
+void
+lflow_table_sync_to_sb(struct lflow_table *lflow_table,
+                       struct ovsdb_idl_txn *ovnsb_txn,
+                       const struct ovn_datapaths *ls_datapaths,
+                       const struct ovn_datapaths *lr_datapaths,
+                       bool ovn_internal_version_changed,
+                       const struct sbrec_logical_flow_table *sb_flow_table,
+                       const struct sbrec_logical_dp_group_table *dpgrp_table)
+{
+    struct hmap lflows_temp = HMAP_INITIALIZER(&lflows_temp);
+    struct hmap *lflows = &lflow_table->entries;
+    struct ovn_lflow *lflow;
+
+    /* Push changes to the Logical_Flow table to database. */
+    const struct sbrec_logical_flow *sbflow;
+    SBREC_LOGICAL_FLOW_TABLE_FOR_EACH_SAFE (sbflow, sb_flow_table) {
+        struct sbrec_logical_dp_group *dp_group = sbflow->logical_dp_group;
+        struct ovn_datapath *logical_datapath_od = NULL;
+        size_t i;
+
+        /* Find one valid datapath to get the datapath type. */
+        struct sbrec_datapath_binding *dp = sbflow->logical_datapath;
+        if (dp) {
+            logical_datapath_od = ovn_datapath_from_sbrec(
+                &ls_datapaths->datapaths, &lr_datapaths->datapaths, dp);
+            if (logical_datapath_od
+                && ovn_datapath_is_stale(logical_datapath_od)) {
+                logical_datapath_od = NULL;
+            }
+        }
+        for (i = 0; dp_group && i < dp_group->n_datapaths; i++) {
+            logical_datapath_od = ovn_datapath_from_sbrec(
+                &ls_datapaths->datapaths, &lr_datapaths->datapaths,
+                dp_group->datapaths[i]);
+            if (logical_datapath_od
+                && !ovn_datapath_is_stale(logical_datapath_od)) {
+                break;
+            }
+            logical_datapath_od = NULL;
+        }
+
+        if (!logical_datapath_od) {
+            /* This lflow has no valid logical datapaths. */
+            sbrec_logical_flow_delete(sbflow);
+            continue;
+        }
+
+        enum ovn_pipeline pipeline
+            = !strcmp(sbflow->pipeline, "ingress") ? P_IN : P_OUT;
+
+        lflow = ovn_lflow_find(
+            lflows,
+            ovn_stage_build(ovn_datapath_get_type(logical_datapath_od),
+                            pipeline, sbflow->table_id),
+            sbflow->priority, sbflow->match, sbflow->actions,
+            sbflow->controller_meter, sbflow->hash);
+        if (lflow) {
+            sync_lflow_to_sb(lflow, ovnsb_txn, lflow_table, ls_datapaths,
+                             lr_datapaths, ovn_internal_version_changed,
+                             sbflow, dpgrp_table);
+
+            hmap_remove(lflows, &lflow->hmap_node);
+            hmap_insert(&lflows_temp, &lflow->hmap_node,
+                        hmap_node_hash(&lflow->hmap_node));
+        } else {
+            sbrec_logical_flow_delete(sbflow);
+        }
+    }
+
+    HMAP_FOR_EACH_SAFE (lflow, hmap_node, lflows) {
+        sync_lflow_to_sb(lflow, ovnsb_txn, lflow_table, ls_datapaths,
+                         lr_datapaths, ovn_internal_version_changed,
+                         NULL, dpgrp_table);
+
+        hmap_remove(lflows, &lflow->hmap_node);
+        hmap_insert(&lflows_temp, &lflow->hmap_node,
+                    hmap_node_hash(&lflow->hmap_node));
+    }
+    hmap_swap(lflows, &lflows_temp);
+    hmap_destroy(&lflows_temp);
+}
+
+/* Logical flow sync using 'struct lflow_ref'
+ * ==========================================
+ * The 'struct lflow_ref' represents a collection of (or references to)
+ * logical flows (struct ovn_lflow) which belong to a logical entity 'E'.
+ * This entity 'E' is external to the lflow manager (see northd.h and
+ * northd.c), e.g. a logical datapath (struct ovn_datapath), logical switch
+ * and router ports (struct ovn_port), load balancer (struct lb_datapath) etc.
+ *
+ * General guidelines on using 'struct lflow_ref'.
+ *   - For an entity 'E', create an instance of lflow_ref
+ *           E->lflow_ref = lflow_ref_create();
+ *
+ *   - For each logical flow L(M, A) generated for the entity 'E'
+ *     pass E->lflow_ref when adding L(M, A) to the lflow table.
+ *     Eg. lflow_table_add_lflow(lflow_table, od_of_E, M, A, .., E->lflow_ref);
+ *
+ * If lflows L1, L2 and L3 are generated for 'E1', then
+ * E1->lflow_ref stores these in its hmap.
+ * i.e. E1->lflow_ref->lflow_ref_nodes = hmap[LRN(L1, E1), LRN(L2, E1),
+ *                                            LRN(L3, E1)]
+ *
+ * LRN is an instance of 'struct lflow_ref_node'.
+ * 'struct lflow_ref_node' is used to store a logical lflow L(M, A) as a
+ * reference in the lflow_ref.  It is possible that an lflow L(M,A) can be
+ * referenced by one or more lflow_ref's.  For each reference, an instance of
+ * this struct 'lflow_ref_node' is created.
+ *
+ * For example, if entity E1 generates lflows L1, L2 and L3
+ * and entity E2 generates lflows L1, L3, and L4 then
+ * an instance of this struct is created for each entity.
+ * For example LRN(L1, E1).
+ *
+ * Each logical flow's L also maintains a list of its references in the
+ * ovn_lflow->referenced_by list.
+ *
+ *
+ *
+ *                L1            L2             L3             L4
+ *                |             |  (list)      |              |
+ *   (lflow_ref)  v             v              v              v
+ *  ----------------------------------------------------------------------
+ * | E1 (hmap) => LRN(L1,E1) => LRN(L2, E1) => LRN(L3, E1)    |           |
+ * |              |                            |              |           |
+ * |              v                            v              v           |
+ * | E2 (hmap) => LRN(L1,E2) ================> LRN(L3, E2) => LRN(L4, E2) |
+ *  ----------------------------------------------------------------------
+ *
+ *
+ * Life cycle of 'struct lflow_ref_node'
+ * =====================================
+ * For a given logical flow L1 and entity E1's lflow_ref,
+ *  1. LRN(L1, E1) is created in lflow_table_add_lflow() and its 'linked' flag
+ *     is set to true.
+ *  2. LRN(L1, E1) is stored in the hmap - E1->lflow_ref->lflow_ref_nodes.
+ *  3. LRN(L1, E1) is also stored in the linked list L1->referenced_by.
+ *  4. LRN(L1, E1)->linked is set to false when the client calls
+ *     lflow_ref_unlink_lflows(E1->lflow_ref).
+ *  5. LRN(L1, E1)->linked is set to true again when the client calls
+ *     lflow_table_add_lflow(L1, ..., E1->lflow_ref) and LRN(L1, E1)
+ *     is already present.
+ *  6. LRN(L1, E1) is destroyed if LRN(L1, E1)->linked is false
+ *     when the client calls lflow_ref_sync_lflows().
+ *  7. LRN(L1, E1) is also destroyed in lflow_ref_clear(E1->lflow_ref).
+ *
+ *
+ * Incremental lflow generation for a logical entity
+ * =================================================
+ * Let's take the above example again.
+ *
+ *
+ *                L1            L2             L3             L4
+ *                |             |  (list)      |              |
+ *   (lflow_ref)  v             v              v              v
+ *  ----------------------------------------------------------------------
+ * | E1 (hmap) => LRN(L1,E1) => LRN(L2, E1) => LRN(L3, E1)    |           |
+ * |              |                            |              |           |
+ * |              v                            v              v           |
+ * | E2 (hmap) => LRN(L1,E2) ================> LRN(L3, E2) => LRN(L4, E2) |
+ *  ----------------------------------------------------------------------
+ *
+ *
+ * L1 is referenced by E1 and E2
+ * L2 is referenced by just E1
+ * L3 is referenced by E1 and E2
+ * L4 is referenced by just E2
+ *
+ * L1->dpg_bitmap = [E1->od->index, E2->od->index]
+ * L2->dpg_bitmap = [E1->od->index]
+ * L3->dpg_bitmap = [E1->od->index, E2->od->index]
+ * L4->dpg_bitmap = [E2->od->index]
+ *
+ *
+ * When 'E1' gets updated,
+ *   1.  the client should first call
+ *       lflow_ref_unlink_lflows(E1->lflow_ref);
+ *
+ *       This function sets the 'linked' flag to false and clears the dp bitmap
+ *       of linked lflows.
+ *
+ *       LRN(L1,E1)->linked = false;
+ *       LRN(L2,E1)->linked = false;
+ *       LRN(L3,E1)->linked = false;
+ *
+ *       bitmap status of all lflows in the lflows table
+ *       -----------------------------------------------
+ *       L1->dpg_bitmap = [E2->od->index]
+ *       L2->dpg_bitmap = []
+ *       L3->dpg_bitmap = [E2->od->index]
+ *       L4->dpg_bitmap = [E2->od->index]
+ *
+ *   2.  In step (2), the client should generate the logical flows again
+ *       for 'E1'.  Let's say it calls:
+ *       lflow_table_add_lflow(lflow_table, L3, E1->lflow_ref)
+ *       lflow_table_add_lflow(lflow_table, L5, E1->lflow_ref)
+ *
+ *       So, E1 generates the flows L3 and L5 and discards L1 and L2.
+ *
+ *       Below is the state of LRNs of E1
+ *       LRN(L1,E1)->linked = false;
+ *       LRN(L2,E1)->linked = false;
+ *       LRN(L3,E1)->linked = true;
+ *       LRN(L5,E1)->linked = true;
+ *
+ *       bitmap status of all lflows in the lflow table after end of step (2)
+ *       --------------------------------------------------------------------
+ *       L1->dpg_bitmap = [E2->od->index]
+ *       L2->dpg_bitmap = []
+ *       L3->dpg_bitmap = [E1->od->index, E2->od->index]
+ *       L4->dpg_bitmap = [E2->od->index]
+ *       L5->dpg_bitmap = [E1->od->index]
+ *
+ *   3.  In step (3), the client should sync E1's lflows by calling
+ *       lflow_ref_sync_lflows(E1->lflow_ref,....);
+ *
+ *       Below is how the logical flows in the SB DB get updated:
+ *       lflow L1:
+ *              SB:L1->logical_dp_group = NULL;
+ *              SB:L1->logical_datapath = E2->od;
+ *
+ *       lflow L2: L2 is deleted since no datapath is using it.
+ *
+ *       lflow L3: No changes
+ *
+ *       lflow L5: New row is created for this.
+ *
+ * After step (3)
+ *
+ *                L1            L5             L3             L4
+ *                |             |  (list)      |              |
+ *   (lflow_ref)  v             v              v              v
+ *  ----------------------------------------------------------------------
+ * | E1 (hmap) ===============> LRN(L5, E1) => LRN(L3, E1)    |           |
+ * |              |                            |              |           |
+ * |              v                            v              v           |
+ * | E2 (hmap) => LRN(L1,E2) ================> LRN(L3, E2) => LRN(L4, E2) |
+ *  ----------------------------------------------------------------------
+ *
+ * Thread safety in lflow_ref
+ * ==========================
+ * The function lflow_table_add_lflow() is not thread safe for lflow_ref.
+ * Clients should ensure that the same instance of lflow_ref is not used
+ * by multiple threads when calling lflow_table_add_lflow().
+ *
+ * One way to ensure thread safety is to maintain an array of hash locks
+ * in each lflow_ref, just like the static variable lflow_hash_locks of
+ * type ovs_mutex.  This would mean that the client has to reconcile the
+ * lflow_ref hmap lflow_ref_nodes (by calling hmap_expand()) after the
+ * lflow generation is complete.  (See lflow_table_expand()).
+ *
+ * Presently the client of the lflow manager (northd.c) doesn't call
+ * lflow_table_add_lflow() in multiple threads for the same lflow_ref.
+ * But this may change in the future, and we may then need to add thread
+ * safety support.
+ *
+ * Until then, care should be taken by contributors to avoid this
+ * scenario.
+ */
+struct lflow_ref {
+    /* hmap of lflow ref nodes.  hmap_node is 'struct lflow_ref_node *'. */
+    struct hmap lflow_ref_nodes;
+};
+
+struct lflow_ref_node {
+    /* hmap node in the hmap - 'struct lflow_ref->lflow_ref_nodes' */
+    struct hmap_node ref_node;
+    struct lflow_ref *lflow_ref; /* pointer to 'lflow_ref' it is part of. */
+
+    /* This list follows different objects that reference the same lflow. List
+     * head is ovn_lflow->referenced_by. */
+    struct ovs_list ref_list_node;
+    /* The lflow. */
+    struct ovn_lflow *lflow;
+
+    /* Index id of the datapath this lflow_ref_node belongs to. */
+    size_t dp_index;
+
+    /* Indicates if the lflow_ref_node for an lflow - L(M, A) is linked
+     * to datapath(s) or not.
+     * It is set to true when an lflow L(M, A) is referenced by an lflow ref
+     * in lflow_table_add_lflow().  It is set to false when it is unlinked
+     * from the datapath when lflow_ref_unlink_lflows() is called. */
+    bool linked;
+};
+
+struct lflow_ref *
+lflow_ref_create(void)
+{
+    struct lflow_ref *lflow_ref = xzalloc(sizeof *lflow_ref);
+    hmap_init(&lflow_ref->lflow_ref_nodes);
+    return lflow_ref;
+}
+
+void
+lflow_ref_clear(struct lflow_ref *lflow_ref)
+{
+    struct lflow_ref_node *lrn;
+    HMAP_FOR_EACH_SAFE (lrn, ref_node, &lflow_ref->lflow_ref_nodes) {
+        lflow_ref_node_destroy(lrn);
+    }
+}
+
+void
+lflow_ref_destroy(struct lflow_ref *lflow_ref)
+{
+    lflow_ref_clear(lflow_ref);
+    hmap_destroy(&lflow_ref->lflow_ref_nodes);
+    free(lflow_ref);
+}
+
+/* Unlinks the lflows referenced by the 'lflow_ref'.
+ * For each lflow_ref_node (lrn) in the lflow_ref, it basically clears
+ * the datapath id (lrn->dp_index) from the lrn->lflow's dpg bitmap.
+ */
+void
+lflow_ref_unlink_lflows(struct lflow_ref *lflow_ref)
+{
+    struct lflow_ref_node *lrn;
+
+    HMAP_FOR_EACH (lrn, ref_node, &lflow_ref->lflow_ref_nodes) {
+        if (dp_refcnt_release(&lrn->lflow->dp_refcnts_map,
+                              lrn->dp_index)) {
+            bitmap_set0(lrn->lflow->dpg_bitmap, lrn->dp_index);
+        }
+
+        lrn->linked = false;
+    }
+}
+
+bool
+lflow_ref_resync_flows(struct lflow_ref *lflow_ref,
+                       struct lflow_table *lflow_table,
+                       struct ovsdb_idl_txn *ovnsb_txn,
+                       const struct ovn_datapaths *ls_datapaths,
+                       const struct ovn_datapaths *lr_datapaths,
+                       bool ovn_internal_version_changed,
+                       const struct sbrec_logical_flow_table *sbflow_table,
+                       const struct sbrec_logical_dp_group_table *dpgrp_table)
+{
+    lflow_ref_unlink_lflows(lflow_ref);
+    return lflow_ref_sync_lflows__(lflow_ref, lflow_table, ovnsb_txn,
+                                   ls_datapaths, lr_datapaths,
+                                   ovn_internal_version_changed, sbflow_table,
+                                   dpgrp_table);
+}
+
+bool
+lflow_ref_sync_lflows(struct lflow_ref *lflow_ref,
+                      struct lflow_table *lflow_table,
+                      struct ovsdb_idl_txn *ovnsb_txn,
+                      const struct ovn_datapaths *ls_datapaths,
+                      const struct ovn_datapaths *lr_datapaths,
+                      bool ovn_internal_version_changed,
+                      const struct sbrec_logical_flow_table *sbflow_table,
+                      const struct sbrec_logical_dp_group_table *dpgrp_table)
+{
+    return lflow_ref_sync_lflows__(lflow_ref, lflow_table, ovnsb_txn,
+                                   ls_datapaths, lr_datapaths,
+                                   ovn_internal_version_changed, sbflow_table,
+                                   dpgrp_table);
+}
+
+/* Adds a logical flow to the logical flow table for the match 'match'
+ * and actions 'actions'.
+ *
+ * If a logical flow L(M, A) for 'match' and 'actions' already exists, then
+ *   - It will be a no-op if L(M, A) was already added for the same datapath.
+ *   - if it is a different datapath, then the datapath index (od->index)
+ *     is set in the lflow dp group bitmap.
+ *
+ * If 'lflow_ref' is not NULL, then
+ *    - it first checks if the lflow is present in the lflow_ref or not
+ *    - if present, it does nothing
+ *    - if not present, it creates an lflow_ref_node object for
+ *      the [L(M, A), dp index] and adds it to the lflow_ref hmap.
+ *
+ * Note that this function is not thread safe for 'lflow_ref'.
+ * If 2 or more threads call this function for the same 'lflow_ref',
+ * then it may corrupt the hmap.  The caller should ensure thread safety
+ * for such scenarios.
+ */
+void
+lflow_table_add_lflow(struct lflow_table *lflow_table,
+                      const struct ovn_datapath *od,
+                      const unsigned long *dp_bitmap, size_t dp_bitmap_len,
+                      enum ovn_stage stage, uint16_t priority,
+                      const char *match, const char *actions,
+                      const char *io_port, const char *ctrl_meter,
+                      const struct ovsdb_idl_row *stage_hint,
+                      const char *where,
+                      struct lflow_ref *lflow_ref)
+    OVS_EXCLUDED(fake_hash_mutex)
+{
+    struct ovs_mutex *hash_lock;
+    uint32_t hash;
+
+    ovs_assert(!od ||
+               ovn_stage_to_datapath_type(stage) == ovn_datapath_get_type(od));
+
+    hash = ovn_logical_flow_hash(ovn_stage_get_table(stage),
+                                 ovn_stage_get_pipeline(stage),
+                                 priority, match,
+                                 actions);
+
+    hash_lock = lflow_hash_lock(&lflow_table->entries, hash);
+    struct ovn_lflow *lflow =
+        do_ovn_lflow_add(lflow_table, od, dp_bitmap,
+                         dp_bitmap_len, hash, stage,
+                         priority, match, actions,
+                         io_port, ctrl_meter, stage_hint, where);
+
+    if (lflow_ref) {
+        /* lflow referencing is only supported if 'od' is not NULL. */
+        ovs_assert(od);
+
+        struct lflow_ref_node *lrn =
+            lflow_ref_node_find(&lflow_ref->lflow_ref_nodes, lflow, hash);
+        if (!lrn) {
+            lrn = xzalloc(sizeof *lrn);
+            lrn->lflow = lflow;
+            lrn->lflow_ref = lflow_ref;
+            lrn->dp_index = od->index;
+            dp_refcnt_use(&lflow->dp_refcnts_map, lrn->dp_index);
+            ovs_list_insert(&lflow->referenced_by, &lrn->ref_list_node);
+            hmap_insert(&lflow_ref->lflow_ref_nodes, &lrn->ref_node, hash);
+        }
+
+        lrn->linked = true;
+    }
+
+    lflow_hash_unlock(hash_lock);
+
+}
+
+void
+lflow_table_add_lflow_default_drop(struct lflow_table *lflow_table,
+                                   const struct ovn_datapath *od,
+                                   enum ovn_stage stage,
+                                   const char *where,
+                                   struct lflow_ref *lflow_ref)
+{
+    lflow_table_add_lflow(lflow_table, od, NULL, 0, stage, 0, "1",
+                          debug_drop_action(), NULL, NULL, NULL,
+                          where, lflow_ref);
+}
+
+/* Given a desired bitmap, finds a datapath group in 'dp_groups'.  If it
+ * doesn't exist, creates a new one and adds it to 'dp_groups'.
+ * If 'sb_group' is provided, function will try to re-use this group by
+ * either taking it directly, or by modifying, if it's not already in use. */
+struct ovn_dp_group *
+ovn_dp_group_get_or_create(struct ovsdb_idl_txn *ovnsb_txn,
+                           struct hmap *dp_groups,
+                           struct sbrec_logical_dp_group *sb_group,
+                           size_t desired_n,
+                           const unsigned long *desired_bitmap,
+                           size_t bitmap_len,
+                           bool is_switch,
+                           const struct ovn_datapaths *ls_datapaths,
+                           const struct ovn_datapaths *lr_datapaths)
+{
+    struct ovn_dp_group *dpg;
+
+    dpg = ovn_dp_group_get(dp_groups, desired_n, desired_bitmap, bitmap_len);
+    if (dpg) {
+        return dpg;
+    }
+
+    return ovn_dp_group_create(ovnsb_txn, dp_groups, sb_group, desired_n,
+                               desired_bitmap, bitmap_len, is_switch,
+                               ls_datapaths, lr_datapaths);
+}
+
+void
+ovn_dp_groups_clear(struct hmap *dp_groups)
+{
+    struct ovn_dp_group *dpg;
+    HMAP_FOR_EACH_POP (dpg, node, dp_groups) {
+        ovn_dp_group_destroy(dpg);
+    }
+}
+
+void
+ovn_dp_groups_destroy(struct hmap *dp_groups)
+{
+    ovn_dp_groups_clear(dp_groups);
+    hmap_destroy(dp_groups);
+}
+
+void
+lflow_hash_lock_init(void)
+{
+    if (!lflow_hash_lock_initialized) {
+        for (size_t i = 0; i < LFLOW_HASH_LOCK_MASK + 1; i++) {
+            ovs_mutex_init(&lflow_hash_locks[i]);
+        }
+        lflow_hash_lock_initialized = true;
+    }
+}
+
+void
+lflow_hash_lock_destroy(void)
+{
+    if (lflow_hash_lock_initialized) {
+        for (size_t i = 0; i < LFLOW_HASH_LOCK_MASK + 1; i++) {
+            ovs_mutex_destroy(&lflow_hash_locks[i]);
+        }
+    }
+    lflow_hash_lock_initialized = false;
+}
+
+/* static functions. */
+static void
+ovn_lflow_init(struct ovn_lflow *lflow, struct ovn_datapath *od,
+               size_t dp_bitmap_len, enum ovn_stage stage, uint16_t priority,
+               char *match, char *actions, char *io_port, char *ctrl_meter,
+               char *stage_hint, const char *where)
+{
+    lflow->dpg_bitmap = bitmap_allocate(dp_bitmap_len);
+    lflow->od = od;
+    lflow->stage = stage;
+    lflow->priority = priority;
+    lflow->match = match;
+    lflow->actions = actions;
+    lflow->io_port = io_port;
+    lflow->stage_hint = stage_hint;
+    lflow->ctrl_meter = ctrl_meter;
+    lflow->dpg = NULL;
+    lflow->where = where;
+    lflow->sb_uuid = UUID_ZERO;
+    hmap_init(&lflow->dp_refcnts_map);
+    ovs_list_init(&lflow->referenced_by);
+}
+
+static struct ovs_mutex *
+lflow_hash_lock(const struct hmap *lflow_table, uint32_t hash)
+    OVS_ACQUIRES(fake_hash_mutex)
+    OVS_NO_THREAD_SAFETY_ANALYSIS
+{
+    struct ovs_mutex *hash_lock = NULL;
+
+    if (parallelization_state == STATE_USE_PARALLELIZATION) {
+        hash_lock =
+            &lflow_hash_locks[hash & lflow_table->mask & LFLOW_HASH_LOCK_MASK];
+        ovs_mutex_lock(hash_lock);
+    }
+    return hash_lock;
+}
+
+static void
+lflow_hash_unlock(struct ovs_mutex *hash_lock)
+    OVS_RELEASES(fake_hash_mutex)
+    OVS_NO_THREAD_SAFETY_ANALYSIS
+{
+    if (hash_lock) {
+        ovs_mutex_unlock(hash_lock);
+    }
+}
+
+static bool
+ovn_lflow_equal(const struct ovn_lflow *a, enum ovn_stage stage,
+                uint16_t priority, const char *match,
+                const char *actions, const char *ctrl_meter)
+{
+    return (a->stage == stage
+            && a->priority == priority
+            && !strcmp(a->match, match)
+            && !strcmp(a->actions, actions)
+            && nullable_string_is_equal(a->ctrl_meter, ctrl_meter));
+}
+
+static struct ovn_lflow *
+ovn_lflow_find(const struct hmap *lflows,
+               enum ovn_stage stage, uint16_t priority,
+               const char *match, const char *actions,
+               const char *ctrl_meter, uint32_t hash)
+{
+    struct ovn_lflow *lflow;
+    HMAP_FOR_EACH_WITH_HASH (lflow, hmap_node, hash, lflows) {
+        if (ovn_lflow_equal(lflow, stage, priority, match, actions,
+                            ctrl_meter)) {
+            return lflow;
+        }
+    }
+    return NULL;
+}
+
+static char *
+ovn_lflow_hint(const struct ovsdb_idl_row *row)
+{
+    if (!row) {
+        return NULL;
+    }
+    return xasprintf("%08x", row->uuid.parts[0]);
+}
+
+static void
+ovn_lflow_destroy(struct lflow_table *lflow_table, struct ovn_lflow *lflow)
+{
+    hmap_remove(&lflow_table->entries, &lflow->hmap_node);
+    bitmap_free(lflow->dpg_bitmap);
+    free(lflow->match);
+    free(lflow->actions);
+    free(lflow->io_port);
+    free(lflow->stage_hint);
+    free(lflow->ctrl_meter);
+    ovn_lflow_clear_dp_refcnts_map(lflow);
+    struct lflow_ref_node *lrn;
+    LIST_FOR_EACH_SAFE (lrn, ref_list_node, &lflow->referenced_by) {
+        lflow_ref_node_destroy(lrn);
+    }
+    free(lflow);
+}
+
+static struct ovn_lflow *
+do_ovn_lflow_add(struct lflow_table *lflow_table,
+                 const struct ovn_datapath *od,
+                 const unsigned long *dp_bitmap, size_t dp_bitmap_len,
+                 uint32_t hash, enum ovn_stage stage, uint16_t priority,
+                 const char *match, const char *actions,
+                 const char *io_port, const char *ctrl_meter,
+                 const struct ovsdb_idl_row *stage_hint,
+                 const char *where)
+    OVS_REQUIRES(fake_hash_mutex)
+{
+    struct ovn_lflow *old_lflow;
+    struct ovn_lflow *lflow;
+
+    size_t bitmap_len = od ? ods_size(od->datapaths) : dp_bitmap_len;
+    ovs_assert(bitmap_len);
+
+    old_lflow = ovn_lflow_find(&lflow_table->entries, stage,
+                               priority, match, actions, ctrl_meter, hash);
+    if (old_lflow) {
+        ovn_dp_group_add_with_reference(old_lflow, od, dp_bitmap,
+                                        bitmap_len);
+        return old_lflow;
+    }
+
+    lflow = xzalloc(sizeof *lflow);
+    /* While adding new logical flows we're not setting single datapath, but
+     * collecting a group.  'od' will be updated later for all flows with only
+     * one datapath in a group, so it could be hashed correctly. */
+    ovn_lflow_init(lflow, NULL, bitmap_len, stage, priority,
+                   xstrdup(match), xstrdup(actions),
+                   io_port ? xstrdup(io_port) : NULL,
+                   nullable_xstrdup(ctrl_meter),
+                   ovn_lflow_hint(stage_hint), where);
+
+    ovn_dp_group_add_with_reference(lflow, od, dp_bitmap, bitmap_len);
+
+    if (parallelization_state != STATE_USE_PARALLELIZATION) {
+        hmap_insert(&lflow_table->entries, &lflow->hmap_node, hash);
+    } else {
+        hmap_insert_fast(&lflow_table->entries, &lflow->hmap_node,
+                         hash);
+        thread_lflow_counter++;
+    }
+
+    return lflow;
+}
+
+static bool
+sync_lflow_to_sb(struct ovn_lflow *lflow,
+                 struct ovsdb_idl_txn *ovnsb_txn,
+                 struct lflow_table *lflow_table,
+                 const struct ovn_datapaths *ls_datapaths,
+                 const struct ovn_datapaths *lr_datapaths,
+                 bool ovn_internal_version_changed,
+                 const struct sbrec_logical_flow *sbflow,
+                 const struct sbrec_logical_dp_group_table *sb_dpgrp_table)
+{
+    struct sbrec_logical_dp_group *sbrec_dp_group = NULL;
+    struct ovn_dp_group *pre_sync_dpg = lflow->dpg;
+    struct ovn_datapath **datapaths_array;
+    struct hmap *dp_groups;
+    size_t n_datapaths;
+    bool is_switch;
+
+    if (ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH) {
+        n_datapaths = ods_size(ls_datapaths);
+        datapaths_array = ls_datapaths->array;
+        dp_groups = &lflow_table->ls_dp_groups;
+        is_switch = true;
+    } else {
+        n_datapaths = ods_size(lr_datapaths);
+        datapaths_array = lr_datapaths->array;
+        dp_groups = &lflow_table->lr_dp_groups;
+        is_switch = false;
+    }
+
+    lflow->n_ods = bitmap_count1(lflow->dpg_bitmap, n_datapaths);
+    ovs_assert(lflow->n_ods);
+
+    if (lflow->n_ods == 1) {
+        /* There is only one datapath, so it should be moved out of the
+         * group to a single 'od'. */
+        size_t index = bitmap_scan(lflow->dpg_bitmap, true, 0,
+                                    n_datapaths);
+
+        lflow->od = datapaths_array[index];
+        lflow->dpg = NULL;
+    } else {
+        lflow->od = NULL;
+    }
+
+    if (!sbflow) {
+        lflow->sb_uuid = uuid_random();
+        sbflow = sbrec_logical_flow_insert_persist_uuid(ovnsb_txn,
+                                                        &lflow->sb_uuid);
+        const char *pipeline = ovn_stage_get_pipeline_name(lflow->stage);
+        uint8_t table = ovn_stage_get_table(lflow->stage);
+        sbrec_logical_flow_set_pipeline(sbflow, pipeline);
+        sbrec_logical_flow_set_table_id(sbflow, table);
+        sbrec_logical_flow_set_priority(sbflow, lflow->priority);
+        sbrec_logical_flow_set_match(sbflow, lflow->match);
+        sbrec_logical_flow_set_actions(sbflow, lflow->actions);
+        if (lflow->io_port) {
+            struct smap tags = SMAP_INITIALIZER(&tags);
+            smap_add(&tags, "in_out_port", lflow->io_port);
+            sbrec_logical_flow_set_tags(sbflow, &tags);
+            smap_destroy(&tags);
+        }
+        sbrec_logical_flow_set_controller_meter(sbflow, lflow->ctrl_meter);
+
+        /* Trim the source locator lflow->where, which looks something like
+         * "ovn/northd/northd.c:1234", down to just the part following the
+         * last slash, e.g. "northd.c:1234". */
+        const char *slash = strrchr(lflow->where, '/');
+#if _WIN32
+        const char *backslash = strrchr(lflow->where, '\\');
+        if (!slash || backslash > slash) {
+            slash = backslash;
+        }
+#endif
+        const char *where = slash ? slash + 1 : lflow->where;
+
+        struct smap ids = SMAP_INITIALIZER(&ids);
+        smap_add(&ids, "stage-name", ovn_stage_to_str(lflow->stage));
+        smap_add(&ids, "source", where);
+        if (lflow->stage_hint) {
+            smap_add(&ids, "stage-hint", lflow->stage_hint);
+        }
+        sbrec_logical_flow_set_external_ids(sbflow, &ids);
+        smap_destroy(&ids);
+
+    } else {
+        lflow->sb_uuid = sbflow->header_.uuid;
+        sbrec_dp_group = sbflow->logical_dp_group;
+
+        if (ovn_internal_version_changed) {
+            const char *stage_name = smap_get_def(&sbflow->external_ids,
+                                                  "stage-name", "");
+            const char *stage_hint = smap_get_def(&sbflow->external_ids,
+                                                  "stage-hint", "");
+            const char *source = smap_get_def(&sbflow->external_ids,
+                                              "source", "");
+
+            if (strcmp(stage_name, ovn_stage_to_str(lflow->stage))) {
+                sbrec_logical_flow_update_external_ids_setkey(
+                    sbflow, "stage-name", ovn_stage_to_str(lflow->stage));
+            }
+            if (lflow->stage_hint) {
+                if (strcmp(stage_hint, lflow->stage_hint)) {
+                    sbrec_logical_flow_update_external_ids_setkey(
+                        sbflow, "stage-hint", lflow->stage_hint);
+                }
+            }
+            if (lflow->where) {
+
+                /* Trim the source locator lflow->where, which looks something
+                 * like "ovn/northd/northd.c:1234", down to just the part
+                 * following the last slash, e.g. "northd.c:1234". */
+                const char *slash = strrchr(lflow->where, '/');
+#if _WIN32
+                const char *backslash = strrchr(lflow->where, '\\');
+                if (!slash || backslash > slash) {
+                    slash = backslash;
+                }
+#endif
+                const char *where = slash ? slash + 1 : lflow->where;
+
+                if (strcmp(source, where)) {
+                    sbrec_logical_flow_update_external_ids_setkey(
+                        sbflow, "source", where);
+                }
+            }
+        }
+    }
+
+    if (lflow->od) {
+        sbrec_logical_flow_set_logical_datapath(sbflow, lflow->od->sb);
+        sbrec_logical_flow_set_logical_dp_group(sbflow, NULL);
+    } else {
+        sbrec_logical_flow_set_logical_datapath(sbflow, NULL);
+        lflow->dpg = ovn_dp_group_get(dp_groups, lflow->n_ods,
+                                      lflow->dpg_bitmap,
+                                      n_datapaths);
+        if (lflow->dpg) {
+            /* Update the dpg's sb dp_group. */
+            lflow->dpg->dp_group = sbrec_logical_dp_group_table_get_for_uuid(
+                sb_dpgrp_table,
+                &lflow->dpg->dpg_uuid);
+
+            if (!lflow->dpg->dp_group) {
+                /* Ideally this should not happen.  But it can still happen
+                 * due to 2 reasons:
+                 * 1. There is a bug in the dp_group management.  We should
+                 *    perhaps assert here.
+                 * 2. A User or CMS may delete the logical_dp_groups in SB DB
+                 *    or clear the SB:Logical_flow.logical_dp_groups column
+                 *    (intentionally or accidentally)
+                 *
+                 * Because of (2) it is better to return false instead of
+                 * asserting, so that we recover from the inconsistent SB DB.
+                 */
+                static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1);
+                VLOG_WARN_RL(&rl, "SB Logical flow ["UUID_FMT"]'s "
+                            "logical_dp_group column is not set "
+                            "(which is unexpected).  It should have been "
+                            "referencing the dp group ["UUID_FMT"]",
+                            UUID_ARGS(&sbflow->header_.uuid),
+                            UUID_ARGS(&lflow->dpg->dpg_uuid));
+                return false;
+            }
+        } else {
+            lflow->dpg = ovn_dp_group_create(
+                                ovnsb_txn, dp_groups, sbrec_dp_group,
+                                lflow->n_ods, lflow->dpg_bitmap,
+                                n_datapaths, is_switch,
+                                ls_datapaths,
+                                lr_datapaths);
+        }
+        sbrec_logical_flow_set_logical_dp_group(sbflow,
+                                                lflow->dpg->dp_group);
+    }
+
+    if (pre_sync_dpg != lflow->dpg) {
+        ovn_dp_group_use(lflow->dpg);
+        ovn_dp_group_release(dp_groups, pre_sync_dpg);
+    }
+
+    return true;
+}
+
+static struct ovn_dp_group *
+ovn_dp_group_find(const struct hmap *dp_groups,
+                  const unsigned long *dpg_bitmap, size_t bitmap_len,
+                  uint32_t hash)
+{
+    struct ovn_dp_group *dpg;
+
+    HMAP_FOR_EACH_WITH_HASH (dpg, node, hash, dp_groups) {
+        if (bitmap_equal(dpg->bitmap, dpg_bitmap, bitmap_len)) {
+            return dpg;
+        }
+    }
+    return NULL;
+}
+
+static void
+ovn_dp_group_use(struct ovn_dp_group *dpg)
+{
+    if (dpg) {
+        dpg->refcnt++;
+    }
+}
+
+static void
+ovn_dp_group_release(struct hmap *dp_groups, struct ovn_dp_group *dpg)
+{
+    if (dpg && !--dpg->refcnt) {
+        hmap_remove(dp_groups, &dpg->node);
+        ovn_dp_group_destroy(dpg);
+    }
+}
+
+/* Destroys the ovn_dp_group and frees the memory.
+ * Caller should remove the dpg->node from the hmap before
+ * calling this. */
+static void
+ovn_dp_group_destroy(struct ovn_dp_group *dpg)
+{
+    bitmap_free(dpg->bitmap);
+    free(dpg);
+}
+
+static struct sbrec_logical_dp_group *
+ovn_sb_insert_or_update_logical_dp_group(
+                            struct ovsdb_idl_txn *ovnsb_txn,
+                            struct sbrec_logical_dp_group *dp_group,
+                            const unsigned long *dpg_bitmap,
+                            const struct ovn_datapaths *datapaths)
+{
+    const struct sbrec_datapath_binding **sb;
+    size_t n = 0, index;
+
+    sb = xmalloc(bitmap_count1(dpg_bitmap, ods_size(datapaths)) * sizeof *sb);
+    BITMAP_FOR_EACH_1 (index, ods_size(datapaths), dpg_bitmap) {
+        sb[n++] = datapaths->array[index]->sb;
+    }
+    if (!dp_group) {
+        struct uuid dpg_uuid = uuid_random();
+        dp_group = sbrec_logical_dp_group_insert_persist_uuid(
+            ovnsb_txn, &dpg_uuid);
+    }
+    sbrec_logical_dp_group_set_datapaths(
+        dp_group, (struct sbrec_datapath_binding **) sb, n);
+    free(sb);
+
+    return dp_group;
+}
+
+static struct ovn_dp_group *
+ovn_dp_group_get(struct hmap *dp_groups, size_t desired_n,
+                 const unsigned long *desired_bitmap,
+                 size_t bitmap_len)
+{
+    uint32_t hash;
+
+    hash = hash_int(desired_n, 0);
+    return ovn_dp_group_find(dp_groups, desired_bitmap, bitmap_len, hash);
+}
+
+/* Creates a new datapath group and adds it to 'dp_groups'.
+ * If 'sb_group' is provided, function will try to re-use this group by
+ * either taking it directly, or by modifying, if it's not already in use.
+ * Caller should first call ovn_dp_group_get() before calling this function. */
+static struct ovn_dp_group *
+ovn_dp_group_create(struct ovsdb_idl_txn *ovnsb_txn,
+                    struct hmap *dp_groups,
+                    struct sbrec_logical_dp_group *sb_group,
+                    size_t desired_n,
+                    const unsigned long *desired_bitmap,
+                    size_t bitmap_len,
+                    bool is_switch,
+                    const struct ovn_datapaths *ls_datapaths,
+                    const struct ovn_datapaths *lr_datapaths)
+{
+    struct ovn_dp_group *dpg;
+
+    bool update_dp_group = false, can_modify = false;
+    unsigned long *dpg_bitmap;
+    size_t i, n = 0;
+
+    dpg_bitmap = sb_group ? bitmap_allocate(bitmap_len) : NULL;
+    for (i = 0; sb_group && i < sb_group->n_datapaths; i++) {
+        struct ovn_datapath *datapath_od;
+
+        datapath_od = ovn_datapath_from_sbrec(
+                        ls_datapaths ? &ls_datapaths->datapaths : NULL,
+                        lr_datapaths ? &lr_datapaths->datapaths : NULL,
+                        sb_group->datapaths[i]);
+        if (!datapath_od || ovn_datapath_is_stale(datapath_od)) {
+            break;
+        }
+        bitmap_set1(dpg_bitmap, datapath_od->index);
+        n++;
+    }
+    if (!sb_group || i != sb_group->n_datapaths) {
+        /* No group or stale group.  Not going to be used. */
+        update_dp_group = true;
+        can_modify = true;
+    } else if (!bitmap_equal(dpg_bitmap, desired_bitmap, bitmap_len)) {
+        /* The group in Sb is different. */
+        update_dp_group = true;
+        /* We can modify existing group if it's not already in use. */
+        can_modify = !ovn_dp_group_find(dp_groups, dpg_bitmap,
+                                        bitmap_len, hash_int(n, 0));
+    }
+
+    bitmap_free(dpg_bitmap);
+
+    dpg = xzalloc(sizeof *dpg);
+    dpg->bitmap = bitmap_clone(desired_bitmap, bitmap_len);
+    if (!update_dp_group) {
+        dpg->dp_group = sb_group;
+    } else {
+        dpg->dp_group = ovn_sb_insert_or_update_logical_dp_group(
+                            ovnsb_txn,
+                            can_modify ? sb_group : NULL,
+                            desired_bitmap,
+                            is_switch ? ls_datapaths : lr_datapaths);
+    }
+    dpg->dpg_uuid = dpg->dp_group->header_.uuid;
+    hmap_insert(dp_groups, &dpg->node, hash_int(desired_n, 0));
+
+    return dpg;
+}
+
+/* Adds an OVN datapath to a datapath group of existing logical flow.
+ * Version to use when hash bucket locking is NOT required or the corresponding
+ * hash lock is already taken. */
+static void
+ovn_dp_group_add_with_reference(struct ovn_lflow *lflow_ref,
+                                const struct ovn_datapath *od,
+                                const unsigned long *dp_bitmap,
+                                size_t bitmap_len)
+    OVS_REQUIRES(fake_hash_mutex)
+{
+    if (od) {
+        bitmap_set1(lflow_ref->dpg_bitmap, od->index);
+    }
+    if (dp_bitmap) {
+        bitmap_or(lflow_ref->dpg_bitmap, dp_bitmap, bitmap_len);
+    }
+}
+
+static bool
+lflow_ref_sync_lflows__(struct lflow_ref  *lflow_ref,
+                        struct lflow_table *lflow_table,
+                        struct ovsdb_idl_txn *ovnsb_txn,
+                        const struct ovn_datapaths *ls_datapaths,
+                        const struct ovn_datapaths *lr_datapaths,
+                        bool ovn_internal_version_changed,
+                        const struct sbrec_logical_flow_table *sbflow_table,
+                        const struct sbrec_logical_dp_group_table *dpgrp_table)
+{
+    struct lflow_ref_node *lrn;
+    struct ovn_lflow *lflow;
+    HMAP_FOR_EACH_SAFE (lrn, ref_node, &lflow_ref->lflow_ref_nodes) {
+        lflow = lrn->lflow;
+        const struct sbrec_logical_flow *sblflow =
+            sbrec_logical_flow_table_get_for_uuid(sbflow_table,
+                                                  &lflow->sb_uuid);
+
+        struct hmap *dp_groups = NULL;
+        size_t n_datapaths;
+        if (ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH) {
+            dp_groups = &lflow_table->ls_dp_groups;
+            n_datapaths = ods_size(ls_datapaths);
+        } else {
+            dp_groups = &lflow_table->lr_dp_groups;
+            n_datapaths = ods_size(lr_datapaths);
+        }
+
+        size_t n_ods = bitmap_count1(lflow->dpg_bitmap, n_datapaths);
+
+        if (n_ods) {
+            if (!sync_lflow_to_sb(lflow, ovnsb_txn, lflow_table, ls_datapaths,
+                                  lr_datapaths, ovn_internal_version_changed,
+                                  sblflow, dpgrp_table)) {
+                return false;
+            }
+        }
+
+        if (!lrn->linked) {
+            lflow_ref_node_destroy(lrn);
+
+            if (ovs_list_is_empty(&lflow->referenced_by)) {
+                ovn_dp_group_release(dp_groups, lflow->dpg);
+                ovn_lflow_destroy(lflow_table, lflow);
+                if (sblflow) {
+                    sbrec_logical_flow_delete(sblflow);
+                }
+            }
+        }
+    }
+
+    return true;
+}
+
+/* Used for the datapath reference counting for a given 'struct ovn_lflow'.
+ * See the hmap 'dp_refcnts_map' in 'struct ovn_lflow'.
+ * A given lflow L(M, A), with match M and actions A, can be referenced by
+ * multiple lflow_refs for the same datapath.
+ * E.g. two lflow_refs - op->lflow_ref and op->stateful_lflow_ref - of a
+ * datapath can both reference the same lflow L(M, A).  In this case it is
+ * important to maintain this reference count so that the sync to the
+ * SB DB logical_flow table is correct. */
+struct dp_refcnt {
+    struct hmap_node key_node;
+
+    size_t dp_index; /* datapath index.  Also used as hmap key. */
+    size_t refcnt;   /* reference counter. */
+};
+
+static struct dp_refcnt *
+dp_refcnt_find(struct hmap *dp_refcnts_map, size_t dp_index)
+{
+    struct dp_refcnt *dp_refcnt;
+    HMAP_FOR_EACH_WITH_HASH (dp_refcnt, key_node, dp_index, dp_refcnts_map) {
+        if (dp_refcnt->dp_index == dp_index) {
+            return dp_refcnt;
+        }
+    }
+
+    return NULL;
+}
+
+static void
+dp_refcnt_use(struct hmap *dp_refcnts_map, size_t dp_index)
+{
+    struct dp_refcnt *dp_refcnt = dp_refcnt_find(dp_refcnts_map, dp_index);
+
+    if (!dp_refcnt) {
+        dp_refcnt = xzalloc(sizeof *dp_refcnt);
+        dp_refcnt->dp_index = dp_index;
+
+        hmap_insert(dp_refcnts_map, &dp_refcnt->key_node, dp_index);
+    }
+
+    dp_refcnt->refcnt++;
+}
+
+/* Decrements the refcnt of 'dp_index' in 'dp_refcnts_map' if it exists.
+ * Returns true if the refcnt drops to 0 or if no refcnt entry exists. */
+static bool
+dp_refcnt_release(struct hmap *dp_refcnts_map, size_t dp_index)
+{
+    struct dp_refcnt *dp_refcnt = dp_refcnt_find(dp_refcnts_map, dp_index);
+    if (!dp_refcnt) {
+        return true;
+    }
+
+    if (!--dp_refcnt->refcnt) {
+        hmap_remove(dp_refcnts_map, &dp_refcnt->key_node);
+        free(dp_refcnt);
+        return true;
+    }
+
+    return false;
+}
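+
+/* Illustrative sequence of the refcnt helpers above: two lflow_refs of the
+ * same datapath referencing one lflow L(M, A):
+ *
+ *   dp_refcnt_use(&lflow->dp_refcnts_map, od->index);      refcnt 0 -> 1
+ *   dp_refcnt_use(&lflow->dp_refcnts_map, od->index);      refcnt 1 -> 2
+ *   dp_refcnt_release(&lflow->dp_refcnts_map, od->index);  returns false
+ *   dp_refcnt_release(&lflow->dp_refcnts_map, od->index);  returns true,
+ *                                                          entry is freed
+ */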
+
+static void
+ovn_lflow_clear_dp_refcnts_map(struct ovn_lflow *lflow)
+{
+    struct dp_refcnt *dp_refcnt;
+
+    HMAP_FOR_EACH_POP (dp_refcnt, key_node, &lflow->dp_refcnts_map) {
+        free(dp_refcnt);
+    }
+
+    hmap_destroy(&lflow->dp_refcnts_map);
+}
+
+static struct lflow_ref_node *
+lflow_ref_node_find(struct hmap *lflow_ref_nodes, struct ovn_lflow *lflow,
+                    uint32_t lflow_hash)
+{
+    struct lflow_ref_node *lrn;
+    HMAP_FOR_EACH_WITH_HASH (lrn, ref_node, lflow_hash, lflow_ref_nodes) {
+        if (lrn->lflow == lflow) {
+            return lrn;
+        }
+    }
+
+    return NULL;
+}
+
+static void
+lflow_ref_node_destroy(struct lflow_ref_node *lrn)
+{
+    hmap_remove(&lrn->lflow_ref->lflow_ref_nodes, &lrn->ref_node);
+    ovs_list_remove(&lrn->ref_list_node);
+    free(lrn);
+}
diff --git a/northd/lflow-mgr.h b/northd/lflow-mgr.h
new file mode 100644
index 0000000000..211d6d9d36
--- /dev/null
+++ b/northd/lflow-mgr.h
@@ -0,0 +1,186 @@ 
+/*
+ * Copyright (c) 2024, Red Hat, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at:
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef LFLOW_MGR_H
+#define LFLOW_MGR_H 1
+
+#include "include/openvswitch/hmap.h"
+#include "include/openvswitch/uuid.h"
+
+#include "northd.h"
+
+struct ovsdb_idl_txn;
+struct ovn_datapath;
+struct ovsdb_idl_row;
+
+/* lflow table which stores the logical flows. */
+struct lflow_table;
+struct lflow_table *lflow_table_alloc(void);
+void lflow_table_init(struct lflow_table *);
+void lflow_table_clear(struct lflow_table *);
+void lflow_table_destroy(struct lflow_table *);
+void lflow_table_expand(struct lflow_table *);
+void lflow_table_set_size(struct lflow_table *, size_t);
+void lflow_table_sync_to_sb(struct lflow_table *,
+                            struct ovsdb_idl_txn *ovnsb_txn,
+                            const struct ovn_datapaths *ls_datapaths,
+                            const struct ovn_datapaths *lr_datapaths,
+                            bool ovn_internal_version_changed,
+                            const struct sbrec_logical_flow_table *,
+                            const struct sbrec_logical_dp_group_table *);
+
+void lflow_hash_lock_init(void);
+void lflow_hash_lock_destroy(void);
+
+/* An lflow_ref tracks the logical flows referenced by a resource (like a
+ * logical port or datapath). */
+struct lflow_ref;
+
+struct lflow_ref *lflow_ref_create(void);
+void lflow_ref_destroy(struct lflow_ref *);
+void lflow_ref_clear(struct lflow_ref *lflow_ref);
+void lflow_ref_unlink_lflows(struct lflow_ref *);
+bool lflow_ref_resync_flows(struct lflow_ref *,
+                            struct lflow_table *lflow_table,
+                            struct ovsdb_idl_txn *ovnsb_txn,
+                            const struct ovn_datapaths *ls_datapaths,
+                            const struct ovn_datapaths *lr_datapaths,
+                            bool ovn_internal_version_changed,
+                            const struct sbrec_logical_flow_table *,
+                            const struct sbrec_logical_dp_group_table *);
+bool lflow_ref_sync_lflows(struct lflow_ref *,
+                           struct lflow_table *lflow_table,
+                           struct ovsdb_idl_txn *ovnsb_txn,
+                           const struct ovn_datapaths *ls_datapaths,
+                           const struct ovn_datapaths *lr_datapaths,
+                           bool ovn_internal_version_changed,
+                           const struct sbrec_logical_flow_table *,
+                           const struct sbrec_logical_dp_group_table *);
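+
+/* Illustrative sketch of incremental use (the regeneration step below is
+ * hypothetical and stands for whatever rebuilds the resource's lflows;
+ * the other variables are placeholders for the caller's state):
+ *
+ *   lflow_ref_unlink_lflows(op->lflow_ref);
+ *   regenerate_lflows_for_port(op, lflow_table, op->lflow_ref);
+ *   lflow_ref_sync_lflows(op->lflow_ref, lflow_table, ovnsb_txn,
+ *                         ls_datapaths, lr_datapaths, false,
+ *                         sb_flow_table, sb_dpgrp_table);
+ */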
+
+
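+/* Adds a logical flow to the lflow table for the datapath 'od', or for the
+ * set of datapaths in 'dp_bitmap' when 'od' is NULL.  If 'lflow_ref' is
+ * non-NULL, the flow is also tracked by that reference. */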
+void lflow_table_add_lflow(struct lflow_table *, const struct ovn_datapath *,
+                           const unsigned long *dp_bitmap,
+                           size_t dp_bitmap_len, enum ovn_stage stage,
+                           uint16_t priority, const char *match,
+                           const char *actions, const char *io_port,
+                           const char *ctrl_meter,
+                           const struct ovsdb_idl_row *stage_hint,
+                           const char *where, struct lflow_ref *);
+void lflow_table_add_lflow_default_drop(struct lflow_table *,
+                                        const struct ovn_datapath *,
+                                        enum ovn_stage stage,
+                                        const char *where,
+                                        struct lflow_ref *);
+
+/* Adds a row with the specified contents to the Logical_Flow table. */
+#define ovn_lflow_add_with_hint__(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, \
+                                  ACTIONS, IN_OUT_PORT, CTRL_METER, \
+                                  STAGE_HINT) \
+    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
+                          ACTIONS, IN_OUT_PORT, CTRL_METER, STAGE_HINT, \
+                          OVS_SOURCE_LOCATOR, NULL)
+
+#define ovn_lflow_add_with_lflow_ref_hint__(LFLOW_TABLE, OD, STAGE, PRIORITY, \
+                                            MATCH, ACTIONS, IN_OUT_PORT, \
+                                            CTRL_METER, STAGE_HINT, LFLOW_REF)\
+    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
+                          ACTIONS, IN_OUT_PORT, CTRL_METER, STAGE_HINT, \
+                          OVS_SOURCE_LOCATOR, LFLOW_REF)
+
+#define ovn_lflow_add_with_hint(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, \
+                                ACTIONS, STAGE_HINT) \
+    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
+                          ACTIONS, NULL, NULL, STAGE_HINT,  \
+                          OVS_SOURCE_LOCATOR, NULL)
+
+#define ovn_lflow_add_with_lflow_ref_hint(LFLOW_TABLE, OD, STAGE, PRIORITY, \
+                                          MATCH, ACTIONS, STAGE_HINT, \
+                                          LFLOW_REF) \
+    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
+                          ACTIONS, NULL, NULL, STAGE_HINT,  \
+                          OVS_SOURCE_LOCATOR, LFLOW_REF)
+
+#define ovn_lflow_add_with_dp_group(LFLOW_TABLE, DP_BITMAP, DP_BITMAP_LEN, \
+                                    STAGE, PRIORITY, MATCH, ACTIONS, \
+                                    STAGE_HINT) \
+    lflow_table_add_lflow(LFLOW_TABLE, NULL, DP_BITMAP, DP_BITMAP_LEN, STAGE, \
+                          PRIORITY, MATCH, ACTIONS, NULL, NULL, STAGE_HINT, \
+                          OVS_SOURCE_LOCATOR, NULL)
+
+#define ovn_lflow_add_default_drop(LFLOW_TABLE, OD, STAGE)                    \
+    lflow_table_add_lflow_default_drop(LFLOW_TABLE, OD, STAGE, \
+                                       OVS_SOURCE_LOCATOR, NULL)
+
+
+/* This macro is similar to ovn_lflow_add_with_hint, except that it requires
+ * the IN_OUT_PORT argument, which specifies the lport name that appears in
+ * the MATCH.  This helps ovn-controller to bypass lflow parsing when the
+ * lport is not local to the chassis.  The criteria for the lport to be
+ * passed in this argument:
+ *
+ * - For the ingress pipeline, the lport that is used to match "inport".
+ * - For the egress pipeline, the lport that is used to match "outport".
+ *
+ * For now, only LS pipelines should use this macro.  */
+#define ovn_lflow_add_with_lport_and_hint(LFLOW_TABLE, OD, STAGE, PRIORITY, \
+                                          MATCH, ACTIONS, IN_OUT_PORT, \
+                                          STAGE_HINT, LFLOW_REF) \
+    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
+                          ACTIONS, IN_OUT_PORT, NULL, STAGE_HINT, \
+                          OVS_SOURCE_LOCATOR, LFLOW_REF)
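+
+/* Illustrative call (match, actions and port name are made up); the lport
+ * named by IN_OUT_PORT is the one matched as "inport" here, so
+ * ovn-controller can skip this lflow on chassis where "lsp1" is not local:
+ *
+ *   ovn_lflow_add_with_lport_and_hint(lflows, op->od,
+ *                                     S_SWITCH_IN_CHECK_PORT_SEC, 70,
+ *                                     "inport == \"lsp1\"", "next;",
+ *                                     op->key, &op->nbsp->header_,
+ *                                     op->lflow_ref);
+ */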
+
+#define ovn_lflow_add(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, ACTIONS) \
+    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
+                          ACTIONS, NULL, NULL, NULL, OVS_SOURCE_LOCATOR, NULL)
+
+#define ovn_lflow_add_with_lflow_ref(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, \
+                                     ACTIONS, LFLOW_REF) \
+    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
+                          ACTIONS, NULL, NULL, NULL, OVS_SOURCE_LOCATOR, \
+                          LFLOW_REF)
+
+#define ovn_lflow_metered(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, ACTIONS, \
+                          CTRL_METER) \
+    ovn_lflow_add_with_hint__(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, \
+                              ACTIONS, NULL, CTRL_METER, NULL)
+
+struct sbrec_logical_dp_group;
+
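+/* A group of datapaths that share logical flows.  'bitmap' holds the indexes
+ * of the member datapaths, 'dp_group' and 'dpg_uuid' identify the
+ * corresponding SB Logical_DP_Group row, and 'refcnt' counts the users of
+ * this group. */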
+struct ovn_dp_group {
+    unsigned long *bitmap;
+    const struct sbrec_logical_dp_group *dp_group;
+    struct uuid dpg_uuid;
+    struct hmap_node node;
+    size_t refcnt;
+};
+
+static inline void
+ovn_dp_groups_init(struct hmap *dp_groups)
+{
+    hmap_init(dp_groups);
+}
+
+void ovn_dp_groups_clear(struct hmap *dp_groups);
+void ovn_dp_groups_destroy(struct hmap *dp_groups);
+struct ovn_dp_group *ovn_dp_group_get_or_create(
+    struct ovsdb_idl_txn *ovnsb_txn, struct hmap *dp_groups,
+    struct sbrec_logical_dp_group *sb_group,
+    size_t desired_n, const unsigned long *desired_bitmap,
+    size_t bitmap_len, bool is_switch,
+    const struct ovn_datapaths *ls_datapaths,
+    const struct ovn_datapaths *lr_datapaths);
+
+#endif /* LFLOW_MGR_H */
\ No newline at end of file
diff --git a/northd/northd.c b/northd/northd.c
index 467056053f..76004256f1 100644
--- a/northd/northd.c
+++ b/northd/northd.c
@@ -41,6 +41,7 @@ 
 #include "lib/ovn-sb-idl.h"
 #include "lib/ovn-util.h"
 #include "lib/lb.h"
+#include "lflow-mgr.h"
 #include "memory.h"
 #include "northd.h"
 #include "en-lb-data.h"
@@ -68,7 +69,7 @@ 
 VLOG_DEFINE_THIS_MODULE(northd);
 
 static bool controller_event_en;
-static bool lflow_hash_lock_initialized = false;
+
 
 static bool check_lsp_is_up;
 
@@ -97,116 +98,6 @@  static bool default_acl_drop;
 
 #define MAX_OVN_TAGS 4096
 
-/* Pipeline stages. */
-
-/* The two purposes for which ovn-northd uses OVN logical datapaths. */
-enum ovn_datapath_type {
-    DP_SWITCH,                  /* OVN logical switch. */
-    DP_ROUTER                   /* OVN logical router. */
-};
-
-/* Returns an "enum ovn_stage" built from the arguments.
- *
- * (It's better to use ovn_stage_build() for type-safety reasons, but inline
- * functions can't be used in enums or switch cases.) */
-#define OVN_STAGE_BUILD(DP_TYPE, PIPELINE, TABLE) \
-    (((DP_TYPE) << 9) | ((PIPELINE) << 8) | (TABLE))
-
-/* A stage within an OVN logical switch or router.
- *
- * An "enum ovn_stage" indicates whether the stage is part of a logical switch
- * or router, whether the stage is part of the ingress or egress pipeline, and
- * the table within that pipeline.  The first three components are combined to
- * form the stage's full name, e.g. S_SWITCH_IN_PORT_SEC_L2,
- * S_ROUTER_OUT_DELIVERY. */
-enum ovn_stage {
-#define PIPELINE_STAGES                                                   \
-    /* Logical switch ingress stages. */                                  \
-    PIPELINE_STAGE(SWITCH, IN,  CHECK_PORT_SEC, 0, "ls_in_check_port_sec")   \
-    PIPELINE_STAGE(SWITCH, IN,  APPLY_PORT_SEC, 1, "ls_in_apply_port_sec")   \
-    PIPELINE_STAGE(SWITCH, IN,  LOOKUP_FDB ,    2, "ls_in_lookup_fdb")    \
-    PIPELINE_STAGE(SWITCH, IN,  PUT_FDB,        3, "ls_in_put_fdb")       \
-    PIPELINE_STAGE(SWITCH, IN,  PRE_ACL,        4, "ls_in_pre_acl")       \
-    PIPELINE_STAGE(SWITCH, IN,  PRE_LB,         5, "ls_in_pre_lb")        \
-    PIPELINE_STAGE(SWITCH, IN,  PRE_STATEFUL,   6, "ls_in_pre_stateful")  \
-    PIPELINE_STAGE(SWITCH, IN,  ACL_HINT,       7, "ls_in_acl_hint")      \
-    PIPELINE_STAGE(SWITCH, IN,  ACL_EVAL,       8, "ls_in_acl_eval")      \
-    PIPELINE_STAGE(SWITCH, IN,  ACL_ACTION,     9, "ls_in_acl_action")    \
-    PIPELINE_STAGE(SWITCH, IN,  QOS_MARK,      10, "ls_in_qos_mark")      \
-    PIPELINE_STAGE(SWITCH, IN,  QOS_METER,     11, "ls_in_qos_meter")     \
-    PIPELINE_STAGE(SWITCH, IN,  LB_AFF_CHECK,  12, "ls_in_lb_aff_check")  \
-    PIPELINE_STAGE(SWITCH, IN,  LB,            13, "ls_in_lb")            \
-    PIPELINE_STAGE(SWITCH, IN,  LB_AFF_LEARN,  14, "ls_in_lb_aff_learn")  \
-    PIPELINE_STAGE(SWITCH, IN,  PRE_HAIRPIN,   15, "ls_in_pre_hairpin")   \
-    PIPELINE_STAGE(SWITCH, IN,  NAT_HAIRPIN,   16, "ls_in_nat_hairpin")   \
-    PIPELINE_STAGE(SWITCH, IN,  HAIRPIN,       17, "ls_in_hairpin")       \
-    PIPELINE_STAGE(SWITCH, IN,  ACL_AFTER_LB_EVAL,  18, \
-                   "ls_in_acl_after_lb_eval")  \
-    PIPELINE_STAGE(SWITCH, IN,  ACL_AFTER_LB_ACTION,  19, \
-                   "ls_in_acl_after_lb_action")  \
-    PIPELINE_STAGE(SWITCH, IN,  STATEFUL,      20, "ls_in_stateful")      \
-    PIPELINE_STAGE(SWITCH, IN,  ARP_ND_RSP,    21, "ls_in_arp_rsp")       \
-    PIPELINE_STAGE(SWITCH, IN,  DHCP_OPTIONS,  22, "ls_in_dhcp_options")  \
-    PIPELINE_STAGE(SWITCH, IN,  DHCP_RESPONSE, 23, "ls_in_dhcp_response") \
-    PIPELINE_STAGE(SWITCH, IN,  DNS_LOOKUP,    24, "ls_in_dns_lookup")    \
-    PIPELINE_STAGE(SWITCH, IN,  DNS_RESPONSE,  25, "ls_in_dns_response")  \
-    PIPELINE_STAGE(SWITCH, IN,  EXTERNAL_PORT, 26, "ls_in_external_port") \
-    PIPELINE_STAGE(SWITCH, IN,  L2_LKUP,       27, "ls_in_l2_lkup")       \
-    PIPELINE_STAGE(SWITCH, IN,  L2_UNKNOWN,    28, "ls_in_l2_unknown")    \
-                                                                          \
-    /* Logical switch egress stages. */                                   \
-    PIPELINE_STAGE(SWITCH, OUT, PRE_ACL,      0, "ls_out_pre_acl")        \
-    PIPELINE_STAGE(SWITCH, OUT, PRE_LB,       1, "ls_out_pre_lb")         \
-    PIPELINE_STAGE(SWITCH, OUT, PRE_STATEFUL, 2, "ls_out_pre_stateful")   \
-    PIPELINE_STAGE(SWITCH, OUT, ACL_HINT,     3, "ls_out_acl_hint")       \
-    PIPELINE_STAGE(SWITCH, OUT, ACL_EVAL,     4, "ls_out_acl_eval")       \
-    PIPELINE_STAGE(SWITCH, OUT, ACL_ACTION,   5, "ls_out_acl_action")     \
-    PIPELINE_STAGE(SWITCH, OUT, QOS_MARK,     6, "ls_out_qos_mark")       \
-    PIPELINE_STAGE(SWITCH, OUT, QOS_METER,    7, "ls_out_qos_meter")      \
-    PIPELINE_STAGE(SWITCH, OUT, STATEFUL,     8, "ls_out_stateful")       \
-    PIPELINE_STAGE(SWITCH, OUT, CHECK_PORT_SEC,  9, "ls_out_check_port_sec") \
-    PIPELINE_STAGE(SWITCH, OUT, APPLY_PORT_SEC, 10, "ls_out_apply_port_sec") \
-                                                                      \
-    /* Logical router ingress stages. */                              \
-    PIPELINE_STAGE(ROUTER, IN,  ADMISSION,       0, "lr_in_admission")    \
-    PIPELINE_STAGE(ROUTER, IN,  LOOKUP_NEIGHBOR, 1, "lr_in_lookup_neighbor") \
-    PIPELINE_STAGE(ROUTER, IN,  LEARN_NEIGHBOR,  2, "lr_in_learn_neighbor") \
-    PIPELINE_STAGE(ROUTER, IN,  IP_INPUT,        3, "lr_in_ip_input")     \
-    PIPELINE_STAGE(ROUTER, IN,  UNSNAT,          4, "lr_in_unsnat")       \
-    PIPELINE_STAGE(ROUTER, IN,  DEFRAG,          5, "lr_in_defrag")       \
-    PIPELINE_STAGE(ROUTER, IN,  LB_AFF_CHECK,    6, "lr_in_lb_aff_check") \
-    PIPELINE_STAGE(ROUTER, IN,  DNAT,            7, "lr_in_dnat")         \
-    PIPELINE_STAGE(ROUTER, IN,  LB_AFF_LEARN,    8, "lr_in_lb_aff_learn") \
-    PIPELINE_STAGE(ROUTER, IN,  ECMP_STATEFUL,   9, "lr_in_ecmp_stateful") \
-    PIPELINE_STAGE(ROUTER, IN,  ND_RA_OPTIONS,   10, "lr_in_nd_ra_options") \
-    PIPELINE_STAGE(ROUTER, IN,  ND_RA_RESPONSE,  11, "lr_in_nd_ra_response") \
-    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_PRE,  12, "lr_in_ip_routing_pre")  \
-    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING,      13, "lr_in_ip_routing")      \
-    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_ECMP, 14, "lr_in_ip_routing_ecmp") \
-    PIPELINE_STAGE(ROUTER, IN,  POLICY,          15, "lr_in_policy")          \
-    PIPELINE_STAGE(ROUTER, IN,  POLICY_ECMP,     16, "lr_in_policy_ecmp")     \
-    PIPELINE_STAGE(ROUTER, IN,  ARP_RESOLVE,     17, "lr_in_arp_resolve")     \
-    PIPELINE_STAGE(ROUTER, IN,  CHK_PKT_LEN,     18, "lr_in_chk_pkt_len")     \
-    PIPELINE_STAGE(ROUTER, IN,  LARGER_PKTS,     19, "lr_in_larger_pkts")     \
-    PIPELINE_STAGE(ROUTER, IN,  GW_REDIRECT,     20, "lr_in_gw_redirect")     \
-    PIPELINE_STAGE(ROUTER, IN,  ARP_REQUEST,     21, "lr_in_arp_request")     \
-                                                                      \
-    /* Logical router egress stages. */                               \
-    PIPELINE_STAGE(ROUTER, OUT, CHECK_DNAT_LOCAL,   0,                       \
-                   "lr_out_chk_dnat_local")                                  \
-    PIPELINE_STAGE(ROUTER, OUT, UNDNAT,             1, "lr_out_undnat")      \
-    PIPELINE_STAGE(ROUTER, OUT, POST_UNDNAT,        2, "lr_out_post_undnat") \
-    PIPELINE_STAGE(ROUTER, OUT, SNAT,               3, "lr_out_snat")        \
-    PIPELINE_STAGE(ROUTER, OUT, POST_SNAT,          4, "lr_out_post_snat")   \
-    PIPELINE_STAGE(ROUTER, OUT, EGR_LOOP,           5, "lr_out_egr_loop")    \
-    PIPELINE_STAGE(ROUTER, OUT, DELIVERY,           6, "lr_out_delivery")
-
-#define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)   \
-    S_##DP_TYPE##_##PIPELINE##_##STAGE                          \
-        = OVN_STAGE_BUILD(DP_##DP_TYPE, P_##PIPELINE, TABLE),
-    PIPELINE_STAGES
-#undef PIPELINE_STAGE
-};
 
 /* Due to various hard-coded priorities need to implement ACLs, the
  * northbound database supports a smaller range of ACL priorities than
@@ -391,51 +282,9 @@  enum ovn_stage {
 #define ROUTE_PRIO_OFFSET_STATIC 1
 #define ROUTE_PRIO_OFFSET_CONNECTED 2
 
-/* Returns an "enum ovn_stage" built from the arguments. */
-static enum ovn_stage
-ovn_stage_build(enum ovn_datapath_type dp_type, enum ovn_pipeline pipeline,
-                uint8_t table)
-{
-    return OVN_STAGE_BUILD(dp_type, pipeline, table);
-}
-
-/* Returns the pipeline to which 'stage' belongs. */
-static enum ovn_pipeline
-ovn_stage_get_pipeline(enum ovn_stage stage)
-{
-    return (stage >> 8) & 1;
-}
-
-/* Returns the pipeline name to which 'stage' belongs. */
-static const char *
-ovn_stage_get_pipeline_name(enum ovn_stage stage)
-{
-    return ovn_stage_get_pipeline(stage) == P_IN ? "ingress" : "egress";
-}
-
-/* Returns the table to which 'stage' belongs. */
-static uint8_t
-ovn_stage_get_table(enum ovn_stage stage)
-{
-    return stage & 0xff;
-}
-
-/* Returns a string name for 'stage'. */
-static const char *
-ovn_stage_to_str(enum ovn_stage stage)
-{
-    switch (stage) {
-#define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)       \
-        case S_##DP_TYPE##_##PIPELINE##_##STAGE: return NAME;
-    PIPELINE_STAGES
-#undef PIPELINE_STAGE
-        default: return "<unknown>";
-    }
-}
-
 /* Returns the type of the datapath to which a flow with the given 'stage' may
  * be added. */
-static enum ovn_datapath_type
+enum ovn_datapath_type
 ovn_stage_to_datapath_type(enum ovn_stage stage)
 {
     switch (stage) {
@@ -680,13 +529,6 @@  ovn_datapath_destroy(struct hmap *datapaths, struct ovn_datapath *od)
     }
 }
 
-/* Returns 'od''s datapath type. */
-static enum ovn_datapath_type
-ovn_datapath_get_type(const struct ovn_datapath *od)
-{
-    return od->nbs ? DP_SWITCH : DP_ROUTER;
-}
-
 static struct ovn_datapath *
 ovn_datapath_find_(const struct hmap *datapaths,
                    const struct uuid *uuid)
@@ -722,13 +564,7 @@  ovn_datapath_find_by_key(struct hmap *datapaths, uint32_t dp_key)
     return NULL;
 }
 
-static bool
-ovn_datapath_is_stale(const struct ovn_datapath *od)
-{
-    return !od->nbr && !od->nbs;
-}
-
-static struct ovn_datapath *
+struct ovn_datapath *
 ovn_datapath_from_sbrec(const struct hmap *ls_datapaths,
                         const struct hmap *lr_datapaths,
                         const struct sbrec_datapath_binding *sb)
@@ -1297,19 +1133,6 @@  struct ovn_port_routable_addresses {
     size_t n_addrs;
 };
 
-/* A node that maintains link between an object (such as an ovn_port) and
- * a lflow. */
-struct lflow_ref_node {
-    /* This list follows different lflows referenced by the same object. List
-     * head is, for example, ovn_port->lflows.  */
-    struct ovs_list lflow_list_node;
-    /* This list follows different objects that reference the same lflow. List
-     * head is ovn_lflow->referenced_by. */
-    struct ovs_list ref_list_node;
-    /* The lflow. */
-    struct ovn_lflow *lflow;
-};
-
 static bool lsp_can_be_inc_processed(const struct nbrec_logical_switch_port *);
 
 static bool
@@ -1389,6 +1212,8 @@  ovn_port_set_nb(struct ovn_port *op,
     init_mcast_port_info(&op->mcast_info, op->nbsp, op->nbrp);
 }
 
+static bool lsp_is_router(const struct nbrec_logical_switch_port *nbsp);
+
 static struct ovn_port *
 ovn_port_create(struct hmap *ports, const char *key,
                 const struct nbrec_logical_switch_port *nbsp,
@@ -1407,12 +1232,14 @@  ovn_port_create(struct hmap *ports, const char *key,
     op->l3dgw_port = op->cr_port = NULL;
     hmap_insert(ports, &op->key_node, hash_string(op->key, 0));
 
-    ovs_list_init(&op->lflows);
+    op->lflow_ref = lflow_ref_create();
+    op->stateful_lflow_ref = lflow_ref_create();
+
     return op;
 }
 
 static void
-ovn_port_destroy_orphan(struct ovn_port *port)
+ovn_port_cleanup(struct ovn_port *port)
 {
     if (port->tunnel_key) {
         ovs_assert(port->od);
@@ -1422,6 +1249,8 @@  ovn_port_destroy_orphan(struct ovn_port *port)
         destroy_lport_addresses(&port->lsp_addrs[i]);
     }
     free(port->lsp_addrs);
+    port->n_lsp_addrs = 0;
+    port->lsp_addrs = NULL;
 
     if (port->peer) {
         port->peer->peer = NULL;
@@ -1431,18 +1260,22 @@  ovn_port_destroy_orphan(struct ovn_port *port)
         destroy_lport_addresses(&port->ps_addrs[i]);
     }
     free(port->ps_addrs);
+    port->ps_addrs = NULL;
+    port->n_ps_addrs = 0;
 
     destroy_lport_addresses(&port->lrp_networks);
     destroy_lport_addresses(&port->proxy_arp_addrs);
+}
+
+static void
+ovn_port_destroy_orphan(struct ovn_port *port)
+{
+    ovn_port_cleanup(port);
     free(port->json_key);
     free(port->key);
+    lflow_ref_destroy(port->lflow_ref);
+    lflow_ref_destroy(port->stateful_lflow_ref);
 
-    struct lflow_ref_node *l;
-    LIST_FOR_EACH_SAFE (l, lflow_list_node, &port->lflows) {
-        ovs_list_remove(&l->lflow_list_node);
-        ovs_list_remove(&l->ref_list_node);
-        free(l);
-    }
     free(port);
 }
 
@@ -3889,124 +3722,6 @@  build_lb_port_related_data(
     build_lswitch_lbs_from_lrouter(lr_datapaths, lb_dps_map, lb_group_dps_map);
 }
 
-
-struct ovn_dp_group {
-    unsigned long *bitmap;
-    struct sbrec_logical_dp_group *dp_group;
-    struct hmap_node node;
-};
-
-static struct ovn_dp_group *
-ovn_dp_group_find(const struct hmap *dp_groups,
-                  const unsigned long *dpg_bitmap, size_t bitmap_len,
-                  uint32_t hash)
-{
-    struct ovn_dp_group *dpg;
-
-    HMAP_FOR_EACH_WITH_HASH (dpg, node, hash, dp_groups) {
-        if (bitmap_equal(dpg->bitmap, dpg_bitmap, bitmap_len)) {
-            return dpg;
-        }
-    }
-    return NULL;
-}
-
-static struct sbrec_logical_dp_group *
-ovn_sb_insert_or_update_logical_dp_group(
-                            struct ovsdb_idl_txn *ovnsb_txn,
-                            struct sbrec_logical_dp_group *dp_group,
-                            const unsigned long *dpg_bitmap,
-                            const struct ovn_datapaths *datapaths)
-{
-    const struct sbrec_datapath_binding **sb;
-    size_t n = 0, index;
-
-    sb = xmalloc(bitmap_count1(dpg_bitmap, ods_size(datapaths)) * sizeof *sb);
-    BITMAP_FOR_EACH_1 (index, ods_size(datapaths), dpg_bitmap) {
-        sb[n++] = datapaths->array[index]->sb;
-    }
-    if (!dp_group) {
-        dp_group = sbrec_logical_dp_group_insert(ovnsb_txn);
-    }
-    sbrec_logical_dp_group_set_datapaths(
-        dp_group, (struct sbrec_datapath_binding **) sb, n);
-    free(sb);
-
-    return dp_group;
-}
-
-/* Given a desired bitmap, finds a datapath group in 'dp_groups'.  If it
- * doesn't exist, creates a new one and adds it to 'dp_groups'.
- * If 'sb_group' is provided, function will try to re-use this group by
- * either taking it directly, or by modifying, if it's not already in use. */
-static struct ovn_dp_group *
-ovn_dp_group_get_or_create(struct ovsdb_idl_txn *ovnsb_txn,
-                           struct hmap *dp_groups,
-                           struct sbrec_logical_dp_group *sb_group,
-                           size_t desired_n,
-                           const unsigned long *desired_bitmap,
-                           size_t bitmap_len,
-                           bool is_switch,
-                           const struct ovn_datapaths *ls_datapaths,
-                           const struct ovn_datapaths *lr_datapaths)
-{
-    struct ovn_dp_group *dpg;
-    uint32_t hash;
-
-    hash = hash_int(desired_n, 0);
-    dpg = ovn_dp_group_find(dp_groups, desired_bitmap, bitmap_len, hash);
-    if (dpg) {
-        return dpg;
-    }
-
-    bool update_dp_group = false, can_modify = false;
-    unsigned long *dpg_bitmap;
-    size_t i, n = 0;
-
-    dpg_bitmap = sb_group ? bitmap_allocate(bitmap_len) : NULL;
-    for (i = 0; sb_group && i < sb_group->n_datapaths; i++) {
-        struct ovn_datapath *datapath_od;
-
-        datapath_od = ovn_datapath_from_sbrec(
-                        ls_datapaths ? &ls_datapaths->datapaths : NULL,
-                        lr_datapaths ? &lr_datapaths->datapaths : NULL,
-                        sb_group->datapaths[i]);
-        if (!datapath_od || ovn_datapath_is_stale(datapath_od)) {
-            break;
-        }
-        bitmap_set1(dpg_bitmap, datapath_od->index);
-        n++;
-    }
-    if (!sb_group || i != sb_group->n_datapaths) {
-        /* No group or stale group.  Not going to be used. */
-        update_dp_group = true;
-        can_modify = true;
-    } else if (!bitmap_equal(dpg_bitmap, desired_bitmap, bitmap_len)) {
-        /* The group in Sb is different. */
-        update_dp_group = true;
-        /* We can modify existing group if it's not already in use. */
-        can_modify = !ovn_dp_group_find(dp_groups, dpg_bitmap,
-                                        bitmap_len, hash_int(n, 0));
-    }
-
-    bitmap_free(dpg_bitmap);
-
-    dpg = xzalloc(sizeof *dpg);
-    dpg->bitmap = bitmap_clone(desired_bitmap, bitmap_len);
-    if (!update_dp_group) {
-        dpg->dp_group = sb_group;
-    } else {
-        dpg->dp_group = ovn_sb_insert_or_update_logical_dp_group(
-                            ovnsb_txn,
-                            can_modify ? sb_group : NULL,
-                            desired_bitmap,
-                            is_switch ? ls_datapaths : lr_datapaths);
-    }
-    hmap_insert(dp_groups, &dpg->node, hash);
-
-    return dpg;
-}
-
 struct sb_lb {
     struct hmap_node hmap_node;
 
@@ -4820,28 +4535,20 @@  ovn_port_find_in_datapath(struct ovn_datapath *od,
     return NULL;
 }
 
-static struct ovn_port *
-ls_port_create(struct ovsdb_idl_txn *ovnsb_txn, struct hmap *ls_ports,
-               const char *key, const struct nbrec_logical_switch_port *nbsp,
-               struct ovn_datapath *od, const struct sbrec_port_binding *sb,
-               struct ovs_list *lflows,
-               const struct sbrec_mirror_table *sbrec_mirror_table,
-               const struct sbrec_chassis_table *sbrec_chassis_table,
-               struct ovsdb_idl_index *sbrec_chassis_by_name,
-               struct ovsdb_idl_index *sbrec_chassis_by_hostname)
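+/* Initializes the logical switch port 'op' for datapath 'od': parses its
+ * addresses, assigns its tunnel key and syncs its SB Port_Binding.
+ * Returns false on failure. */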
+static bool
+ls_port_init(struct ovn_port *op, struct ovsdb_idl_txn *ovnsb_txn,
+             struct hmap *ls_ports, struct ovn_datapath *od,
+             const struct sbrec_port_binding *sb,
+             const struct sbrec_mirror_table *sbrec_mirror_table,
+             const struct sbrec_chassis_table *sbrec_chassis_table,
+             struct ovsdb_idl_index *sbrec_chassis_by_name,
+             struct ovsdb_idl_index *sbrec_chassis_by_hostname)
 {
-    struct ovn_port *op = ovn_port_create(ls_ports, key, nbsp, NULL,
-                                          NULL);
-    parse_lsp_addrs(op);
     op->od = od;
-    hmap_insert(&od->ports, &op->dp_node, hmap_node_hash(&op->key_node));
-    if (lflows) {
-        ovs_list_splice(&op->lflows, lflows->next, lflows);
-    }
-
+    parse_lsp_addrs(op);
     /* Assign explicitly requested tunnel ids first. */
     if (!ovn_port_assign_requested_tnl_id(sbrec_chassis_table, op)) {
-        return NULL;
+        return false;
     }
     if (sb) {
         op->sb = sb;
@@ -4858,14 +4565,57 @@  ls_port_create(struct ovsdb_idl_txn *ovnsb_txn, struct hmap *ls_ports,
     }
     /* Assign new tunnel ids where needed. */
     if (!ovn_port_allocate_key(sbrec_chassis_table, ls_ports, op)) {
-        return NULL;
+        return false;
     }
     ovn_port_update_sbrec(ovnsb_txn, sbrec_chassis_by_name,
                           sbrec_chassis_by_hostname, NULL, sbrec_mirror_table,
                           op, NULL, NULL);
+    return true;
+}
+
+static struct ovn_port *
+ls_port_create(struct ovsdb_idl_txn *ovnsb_txn, struct hmap *ls_ports,
+               const char *key, const struct nbrec_logical_switch_port *nbsp,
+               struct ovn_datapath *od, const struct sbrec_port_binding *sb,
+               const struct sbrec_mirror_table *sbrec_mirror_table,
+               const struct sbrec_chassis_table *sbrec_chassis_table,
+               struct ovsdb_idl_index *sbrec_chassis_by_name,
+               struct ovsdb_idl_index *sbrec_chassis_by_hostname)
+{
+    struct ovn_port *op = ovn_port_create(ls_ports, key, nbsp, NULL,
+                                          NULL);
+    hmap_insert(&od->ports, &op->dp_node, hmap_node_hash(&op->key_node));
+    if (!ls_port_init(op, ovnsb_txn, ls_ports, od, sb,
+                      sbrec_mirror_table, sbrec_chassis_table,
+                      sbrec_chassis_by_name, sbrec_chassis_by_hostname)) {
+        ovn_port_destroy(ls_ports, op);
+        return NULL;
+    }
+
     return op;
 }
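+
+/* Reinitializes an existing switch port 'op' in place from the updated
+ * NB and SB rows, instead of destroying and recreating it. */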
 
+static bool
+ls_port_reinit(struct ovn_port *op, struct ovsdb_idl_txn *ovnsb_txn,
+                struct hmap *ls_ports,
+                const struct nbrec_logical_switch_port *nbsp,
+                const struct nbrec_logical_router_port *nbrp,
+                struct ovn_datapath *od,
+                const struct sbrec_port_binding *sb,
+                const struct sbrec_mirror_table *sbrec_mirror_table,
+                const struct sbrec_chassis_table *sbrec_chassis_table,
+                struct ovsdb_idl_index *sbrec_chassis_by_name,
+                struct ovsdb_idl_index *sbrec_chassis_by_hostname)
+{
+    ovn_port_cleanup(op);
+    op->sb = sb;
+    ovn_port_set_nb(op, nbsp, nbrp);
+    op->l3dgw_port = op->cr_port = NULL;
+    return ls_port_init(op, ovnsb_txn, ls_ports, od, sb,
+                        sbrec_mirror_table, sbrec_chassis_table,
+                        sbrec_chassis_by_name, sbrec_chassis_by_hostname);
+}
+
 /* Returns true if the logical switch has changes which can be
  * incrementally handled.
  * Presently supports i-p for the below changes:
@@ -5005,7 +4755,7 @@  ls_handle_lsp_changes(struct ovsdb_idl_txn *ovnsb_idl_txn,
                 goto fail;
             }
             op = ls_port_create(ovnsb_idl_txn, &nd->ls_ports,
-                                new_nbsp->name, new_nbsp, od, NULL, NULL,
+                                new_nbsp->name, new_nbsp, od, NULL,
                                 ni->sbrec_mirror_table,
                                 ni->sbrec_chassis_table,
                                 ni->sbrec_chassis_by_name,
@@ -5036,17 +4786,12 @@  ls_handle_lsp_changes(struct ovsdb_idl_txn *ovnsb_idl_txn,
                 op->visited = true;
                 continue;
             }
-            struct ovs_list lflows = OVS_LIST_INITIALIZER(&lflows);
-            ovs_list_splice(&lflows, op->lflows.next, &op->lflows);
-            ovn_port_destroy(&nd->ls_ports, op);
-            op = ls_port_create(ovnsb_idl_txn, &nd->ls_ports,
-                                new_nbsp->name, new_nbsp, od, sb, &lflows,
-                                ni->sbrec_mirror_table,
+            if (!ls_port_reinit(op, ovnsb_idl_txn, &nd->ls_ports,
+                                new_nbsp, NULL,
+                                od, sb, ni->sbrec_mirror_table,
                                 ni->sbrec_chassis_table,
                                 ni->sbrec_chassis_by_name,
-                                ni->sbrec_chassis_by_hostname);
-            ovs_assert(ovs_list_is_empty(&lflows));
-            if (!op) {
+                                ni->sbrec_chassis_by_hostname)) {
                 goto fail;
             }
             add_op_to_northd_tracked_ports(&trk_lsps->updated, op);
@@ -5991,170 +5736,7 @@  ovn_igmp_group_destroy(struct hmap *igmp_groups,
  * function of most of the northbound database.
  */
 
-struct ovn_lflow {
-    struct hmap_node hmap_node;
-    struct ovs_list list_node;   /* For temporary list of lflows. Don't remove
-                                    at destroy. */
-
-    struct ovn_datapath *od;     /* 'logical_datapath' in SB schema.  */
-    unsigned long *dpg_bitmap;   /* Bitmap of all datapaths by their 'index'.*/
-    enum ovn_stage stage;
-    uint16_t priority;
-    char *match;
-    char *actions;
-    char *io_port;
-    char *stage_hint;
-    char *ctrl_meter;
-    size_t n_ods;                /* Number of datapaths referenced by 'od' and
-                                  * 'dpg_bitmap'. */
-    struct ovn_dp_group *dpg;    /* Link to unique Sb datapath group. */
-
-    struct ovs_list referenced_by;  /* List of struct lflow_ref_node. */
-    const char *where;
-
-    struct uuid sb_uuid;         /* SB DB row uuid, specified by northd. */
-};
-
-static void ovn_lflow_destroy(struct hmap *lflows, struct ovn_lflow *lflow);
-static struct ovn_lflow *ovn_lflow_find(const struct hmap *lflows,
-                                        const struct ovn_datapath *od,
-                                        enum ovn_stage stage,
-                                        uint16_t priority, const char *match,
-                                        const char *actions,
-                                        const char *ctrl_meter, uint32_t hash);
-
-static char *
-ovn_lflow_hint(const struct ovsdb_idl_row *row)
-{
-    if (!row) {
-        return NULL;
-    }
-    return xasprintf("%08x", row->uuid.parts[0]);
-}
-
-static bool
-ovn_lflow_equal(const struct ovn_lflow *a, const struct ovn_datapath *od,
-                enum ovn_stage stage, uint16_t priority, const char *match,
-                const char *actions, const char *ctrl_meter)
-{
-    return (a->od == od
-            && a->stage == stage
-            && a->priority == priority
-            && !strcmp(a->match, match)
-            && !strcmp(a->actions, actions)
-            && nullable_string_is_equal(a->ctrl_meter, ctrl_meter));
-}
-
-enum {
-    STATE_NULL,               /* parallelization is off */
-    STATE_INIT_HASH_SIZES,    /* parallelization is on; hashes sizing needed */
-    STATE_USE_PARALLELIZATION /* parallelization is on */
-};
-static int parallelization_state = STATE_NULL;
-
-static void
-ovn_lflow_init(struct ovn_lflow *lflow, struct ovn_datapath *od,
-               size_t dp_bitmap_len, enum ovn_stage stage, uint16_t priority,
-               char *match, char *actions, char *io_port, char *ctrl_meter,
-               char *stage_hint, const char *where)
-{
-    ovs_list_init(&lflow->list_node);
-    ovs_list_init(&lflow->referenced_by);
-    lflow->dpg_bitmap = bitmap_allocate(dp_bitmap_len);
-    lflow->od = od;
-    lflow->stage = stage;
-    lflow->priority = priority;
-    lflow->match = match;
-    lflow->actions = actions;
-    lflow->io_port = io_port;
-    lflow->stage_hint = stage_hint;
-    lflow->ctrl_meter = ctrl_meter;
-    lflow->dpg = NULL;
-    lflow->where = where;
-    lflow->sb_uuid = UUID_ZERO;
-}
-
-/* The lflow_hash_lock is a mutex array that protects updates to the shared
- * lflow table across threads when parallel lflow build and dp-group are both
- * enabled. To avoid high contention between threads, a big array of mutexes
- * are used instead of just one. This is possible because when parallel build
- * is used we only use hmap_insert_fast() to update the hmap, which would not
- * touch the bucket array but only the list in a single bucket. We only need to
- * make sure that when adding lflows to the same hash bucket, the same lock is
- * used, so that no two threads can add to the bucket at the same time.  It is
- * ok that the same lock is used to protect multiple buckets, so a fixed sized
- * mutex array is used instead of 1-1 mapping to the hash buckets. This
- * simplies the implementation while effectively reduces lock contention
- * because the chance that different threads contending the same lock amongst
- * the big number of locks is very low. */
-#define LFLOW_HASH_LOCK_MASK 0xFFFF
-static struct ovs_mutex lflow_hash_locks[LFLOW_HASH_LOCK_MASK + 1];
-
-static void
-lflow_hash_lock_init(void)
-{
-    if (!lflow_hash_lock_initialized) {
-        for (size_t i = 0; i < LFLOW_HASH_LOCK_MASK + 1; i++) {
-            ovs_mutex_init(&lflow_hash_locks[i]);
-        }
-        lflow_hash_lock_initialized = true;
-    }
-}
-
-static void
-lflow_hash_lock_destroy(void)
-{
-    if (lflow_hash_lock_initialized) {
-        for (size_t i = 0; i < LFLOW_HASH_LOCK_MASK + 1; i++) {
-            ovs_mutex_destroy(&lflow_hash_locks[i]);
-        }
-    }
-    lflow_hash_lock_initialized = false;
-}
-
-/* Full thread safety analysis is not possible with hash locks, because
- * they are taken conditionally based on the 'parallelization_state' and
- * a flow hash.  Also, the order in which two hash locks are taken is not
- * predictable during the static analysis.
- *
- * Since the order of taking two locks depends on a random hash, to avoid
- * ABBA deadlocks, no two hash locks can be nested.  In that sense an array
- * of hash locks is similar to a single mutex.
- *
- * Using a fake mutex to partially simulate thread safety restrictions, as
- * if it were actually a single mutex.
- *
- * OVS_NO_THREAD_SAFETY_ANALYSIS below allows us to ignore conditional
- * nature of the lock.  Unlike other attributes, it applies to the
- * implementation and not to the interface.  So, we can define a function
- * that acquires the lock without analysing the way it does that.
- */
-extern struct ovs_mutex fake_hash_mutex;
-
-static struct ovs_mutex *
-lflow_hash_lock(const struct hmap *lflow_map, uint32_t hash)
-    OVS_ACQUIRES(fake_hash_mutex)
-    OVS_NO_THREAD_SAFETY_ANALYSIS
-{
-    struct ovs_mutex *hash_lock = NULL;
-
-    if (parallelization_state == STATE_USE_PARALLELIZATION) {
-        hash_lock =
-            &lflow_hash_locks[hash & lflow_map->mask & LFLOW_HASH_LOCK_MASK];
-        ovs_mutex_lock(hash_lock);
-    }
-    return hash_lock;
-}
-
-static void
-lflow_hash_unlock(struct ovs_mutex *hash_lock)
-    OVS_RELEASES(fake_hash_mutex)
-    OVS_NO_THREAD_SAFETY_ANALYSIS
-{
-    if (hash_lock) {
-        ovs_mutex_unlock(hash_lock);
-    }
-}
+int parallelization_state = STATE_NULL;
 
 
 /* This thread-local var is used for parallel lflow building when dp-groups is
@@ -6167,240 +5749,7 @@  lflow_hash_unlock(struct ovs_mutex *hash_lock)
  * threads are collected to fix the lflow hmap's size (by the function
  * fix_flow_map_size()).
  * */
-static thread_local size_t thread_lflow_counter = 0;
-
-/* Adds an OVN datapath to a datapath group of existing logical flow.
- * Version to use when hash bucket locking is NOT required or the corresponding
- * hash lock is already taken. */
-static void
-ovn_dp_group_add_with_reference(struct ovn_lflow *lflow_ref,
-                                const struct ovn_datapath *od,
-                                const unsigned long *dp_bitmap,
-                                size_t bitmap_len)
-    OVS_REQUIRES(fake_hash_mutex)
-{
-    if (od) {
-        bitmap_set1(lflow_ref->dpg_bitmap, od->index);
-    }
-    if (dp_bitmap) {
-        bitmap_or(lflow_ref->dpg_bitmap, dp_bitmap, bitmap_len);
-    }
-}
-
-/* This global variable collects the lflows generated by do_ovn_lflow_add().
- * start_collecting_lflows() will enable the lflow collection and the calls to
- * do_ovn_lflow_add (or the macros ovn_lflow_add_...) will add generated lflows
- * to the list. end_collecting_lflows() will disable it. */
-static thread_local struct ovs_list collected_lflows;
-static thread_local bool collecting_lflows = false;
-
-static void
-start_collecting_lflows(void)
-{
-    ovs_assert(!collecting_lflows);
-    ovs_list_init(&collected_lflows);
-    collecting_lflows = true;
-}
-
-static void
-end_collecting_lflows(void)
-{
-    ovs_assert(collecting_lflows);
-    collecting_lflows = false;
-}
-
-/* Adds a row with the specified contents to the Logical_Flow table.
- * Version to use when hash bucket locking is NOT required. */
-static void
-do_ovn_lflow_add(struct hmap *lflow_map, const struct ovn_datapath *od,
-                 const unsigned long *dp_bitmap, size_t dp_bitmap_len,
-                 uint32_t hash, enum ovn_stage stage, uint16_t priority,
-                 const char *match, const char *actions, const char *io_port,
-                 const struct ovsdb_idl_row *stage_hint,
-                 const char *where, const char *ctrl_meter)
-    OVS_REQUIRES(fake_hash_mutex)
-{
-
-    struct ovn_lflow *old_lflow;
-    struct ovn_lflow *lflow;
-
-    size_t bitmap_len = od ? ods_size(od->datapaths) : dp_bitmap_len;
-    ovs_assert(bitmap_len);
-
-    if (collecting_lflows) {
-        ovs_assert(od);
-        ovs_assert(!dp_bitmap);
-    } else {
-        old_lflow = ovn_lflow_find(lflow_map, NULL, stage, priority, match,
-                                   actions, ctrl_meter, hash);
-        if (old_lflow) {
-            ovn_dp_group_add_with_reference(old_lflow, od, dp_bitmap,
-                                            bitmap_len);
-            return;
-        }
-    }
-
-    lflow = xmalloc(sizeof *lflow);
-    /* While adding new logical flows we're not setting single datapath, but
-     * collecting a group.  'od' will be updated later for all flows with only
-     * one datapath in a group, so it could be hashed correctly. */
-    ovn_lflow_init(lflow, NULL, bitmap_len, stage, priority,
-                   xstrdup(match), xstrdup(actions),
-                   io_port ? xstrdup(io_port) : NULL,
-                   nullable_xstrdup(ctrl_meter),
-                   ovn_lflow_hint(stage_hint), where);
-
-    ovn_dp_group_add_with_reference(lflow, od, dp_bitmap, bitmap_len);
-
-    if (parallelization_state != STATE_USE_PARALLELIZATION) {
-        hmap_insert(lflow_map, &lflow->hmap_node, hash);
-    } else {
-        hmap_insert_fast(lflow_map, &lflow->hmap_node, hash);
-        thread_lflow_counter++;
-    }
-
-    if (collecting_lflows) {
-        ovs_list_insert(&collected_lflows, &lflow->list_node);
-    }
-}
-
-/* Adds a row with the specified contents to the Logical_Flow table. */
-static void
-ovn_lflow_add_at(struct hmap *lflow_map, const struct ovn_datapath *od,
-                 const unsigned long *dp_bitmap, size_t dp_bitmap_len,
-                 enum ovn_stage stage, uint16_t priority,
-                 const char *match, const char *actions, const char *io_port,
-                 const char *ctrl_meter,
-                 const struct ovsdb_idl_row *stage_hint, const char *where)
-    OVS_EXCLUDED(fake_hash_mutex)
-{
-    struct ovs_mutex *hash_lock;
-    uint32_t hash;
-
-    ovs_assert(!od ||
-               ovn_stage_to_datapath_type(stage) == ovn_datapath_get_type(od));
-
-    hash = ovn_logical_flow_hash(ovn_stage_get_table(stage),
-                                 ovn_stage_get_pipeline(stage),
-                                 priority, match,
-                                 actions);
-
-    hash_lock = lflow_hash_lock(lflow_map, hash);
-    do_ovn_lflow_add(lflow_map, od, dp_bitmap, dp_bitmap_len, hash, stage,
-                     priority, match, actions, io_port, stage_hint, where,
-                     ctrl_meter);
-    lflow_hash_unlock(hash_lock);
-}
-
-static void
-__ovn_lflow_add_default_drop(struct hmap *lflow_map,
-                             struct ovn_datapath *od,
-                             enum ovn_stage stage,
-                             const char *where)
-{
-        ovn_lflow_add_at(lflow_map, od, NULL, 0, stage, 0, "1",
-                         debug_drop_action(),
-                         NULL, NULL, NULL, where );
-}
-
-/* Adds a row with the specified contents to the Logical_Flow table. */
-#define ovn_lflow_add_with_hint__(LFLOW_MAP, OD, STAGE, PRIORITY, MATCH, \
-                                  ACTIONS, IN_OUT_PORT, CTRL_METER, \
-                                  STAGE_HINT) \
-    ovn_lflow_add_at(LFLOW_MAP, OD, NULL, 0, STAGE, PRIORITY, MATCH, ACTIONS, \
-                     IN_OUT_PORT, CTRL_METER, STAGE_HINT, OVS_SOURCE_LOCATOR)
-
-#define ovn_lflow_add_with_hint(LFLOW_MAP, OD, STAGE, PRIORITY, MATCH, \
-                                ACTIONS, STAGE_HINT) \
-    ovn_lflow_add_at(LFLOW_MAP, OD, NULL, 0, STAGE, PRIORITY, MATCH, ACTIONS, \
-                     NULL, NULL, STAGE_HINT, OVS_SOURCE_LOCATOR)
-
-#define ovn_lflow_add_with_dp_group(LFLOW_MAP, DP_BITMAP, DP_BITMAP_LEN, \
-                                    STAGE, PRIORITY, MATCH, ACTIONS, \
-                                    STAGE_HINT) \
-    ovn_lflow_add_at(LFLOW_MAP, NULL, DP_BITMAP, DP_BITMAP_LEN, STAGE, \
-                     PRIORITY, MATCH, ACTIONS, NULL, NULL, STAGE_HINT, \
-                     OVS_SOURCE_LOCATOR)
-
-#define ovn_lflow_add_default_drop(LFLOW_MAP, OD, STAGE)                    \
-    __ovn_lflow_add_default_drop(LFLOW_MAP, OD, STAGE, OVS_SOURCE_LOCATOR)
-
-
-/* This macro is similar to ovn_lflow_add_with_hint, except that it requires
- * the IN_OUT_PORT argument, which tells the lport name that appears in the
- * MATCH, which helps ovn-controller to bypass lflows parsing when the lport is
- * not local to the chassis. The critiera of the lport to be added using this
- * argument:
- *
- * - For ingress pipeline, the lport that is used to match "inport".
- * - For egress pipeline, the lport that is used to match "outport".
- *
- * For now, only LS pipelines should use this macro.  */
-#define ovn_lflow_add_with_lport_and_hint(LFLOW_MAP, OD, STAGE, PRIORITY, \
-                                          MATCH, ACTIONS, IN_OUT_PORT, \
-                                          STAGE_HINT) \
-    ovn_lflow_add_at(LFLOW_MAP, OD, NULL, 0, STAGE, PRIORITY, MATCH, ACTIONS, \
-                     IN_OUT_PORT, NULL, STAGE_HINT, OVS_SOURCE_LOCATOR)
-
-#define ovn_lflow_add(LFLOW_MAP, OD, STAGE, PRIORITY, MATCH, ACTIONS) \
-    ovn_lflow_add_at(LFLOW_MAP, OD, NULL, 0, STAGE, PRIORITY, MATCH, ACTIONS, \
-                     NULL, NULL, NULL, OVS_SOURCE_LOCATOR)
-
-#define ovn_lflow_metered(LFLOW_MAP, OD, STAGE, PRIORITY, MATCH, ACTIONS, \
-                          CTRL_METER) \
-    ovn_lflow_add_with_hint__(LFLOW_MAP, OD, STAGE, PRIORITY, MATCH, \
-                              ACTIONS, NULL, CTRL_METER, NULL)
-
-static struct ovn_lflow *
-ovn_lflow_find(const struct hmap *lflows, const struct ovn_datapath *od,
-               enum ovn_stage stage, uint16_t priority,
-               const char *match, const char *actions, const char *ctrl_meter,
-               uint32_t hash)
-{
-    struct ovn_lflow *lflow;
-    HMAP_FOR_EACH_WITH_HASH (lflow, hmap_node, hash, lflows) {
-        if (ovn_lflow_equal(lflow, od, stage, priority, match, actions,
-                            ctrl_meter)) {
-            return lflow;
-        }
-    }
-    return NULL;
-}
-
-static void
-ovn_lflow_destroy(struct hmap *lflows, struct ovn_lflow *lflow)
-{
-    if (lflow) {
-        if (lflows) {
-            hmap_remove(lflows, &lflow->hmap_node);
-        }
-        bitmap_free(lflow->dpg_bitmap);
-        free(lflow->match);
-        free(lflow->actions);
-        free(lflow->io_port);
-        free(lflow->stage_hint);
-        free(lflow->ctrl_meter);
-        struct lflow_ref_node *l;
-        LIST_FOR_EACH_SAFE (l, ref_list_node, &lflow->referenced_by) {
-            ovs_list_remove(&l->lflow_list_node);
-            ovs_list_remove(&l->ref_list_node);
-            free(l);
-        }
-        free(lflow);
-    }
-}
-
-static void
-link_ovn_port_to_lflows(struct ovn_port *op, struct ovs_list *lflows)
-{
-    struct ovn_lflow *f;
-    LIST_FOR_EACH (f, list_node, lflows) {
-        struct lflow_ref_node *lfrn = xmalloc(sizeof *lfrn);
-        lfrn->lflow = f;
-        ovs_list_insert(&op->lflows, &lfrn->lflow_list_node);
-        ovs_list_insert(&f->referenced_by, &lfrn->ref_list_node);
-    }
-}
+thread_local size_t thread_lflow_counter = 0;
 
 static bool
 build_dhcpv4_action(struct ovn_port *op, ovs_be32 offer_ip,
@@ -6578,8 +5927,8 @@  build_dhcpv6_action(struct ovn_port *op, struct in6_addr *offer_ip,
  * build_lswitch_lflows_admission_control() handles the port security.
  */
 static void
-build_lswitch_port_sec_op(struct ovn_port *op, struct hmap *lflows,
-                                struct ds *actions, struct ds *match)
+build_lswitch_port_sec_op(struct ovn_port *op, struct lflow_table *lflows,
+                          struct ds *actions, struct ds *match)
 {
     ovs_assert(op->nbsp);
 
@@ -6595,13 +5944,13 @@  build_lswitch_port_sec_op(struct ovn_port *op, struct hmap *lflows,
         ovn_lflow_add_with_lport_and_hint(
             lflows, op->od, S_SWITCH_IN_CHECK_PORT_SEC,
             100, ds_cstr(match), REGBIT_PORT_SEC_DROP" = 1; next;",
-            op->key, &op->nbsp->header_);
+            op->key, &op->nbsp->header_, op->lflow_ref);
 
         ds_clear(match);
         ds_put_format(match, "outport == %s", op->json_key);
         ovn_lflow_add_with_lport_and_hint(
             lflows, op->od, S_SWITCH_IN_L2_UNKNOWN, 50, ds_cstr(match),
-            debug_drop_action(), op->key, &op->nbsp->header_);
+            debug_drop_action(), op->key, &op->nbsp->header_, op->lflow_ref);
         return;
     }
 
@@ -6617,14 +5966,16 @@  build_lswitch_port_sec_op(struct ovn_port *op, struct hmap *lflows,
         ovn_lflow_add_with_lport_and_hint(lflows, op->od,
                                           S_SWITCH_IN_CHECK_PORT_SEC, 70,
                                           ds_cstr(match), ds_cstr(actions),
-                                          op->key, &op->nbsp->header_);
+                                          op->key, &op->nbsp->header_,
+                                          op->lflow_ref);
     } else if (queue_id) {
         ds_put_cstr(actions,
                     REGBIT_PORT_SEC_DROP" = check_in_port_sec(); next;");
         ovn_lflow_add_with_lport_and_hint(lflows, op->od,
                                           S_SWITCH_IN_CHECK_PORT_SEC, 70,
                                           ds_cstr(match), ds_cstr(actions),
-                                          op->key, &op->nbsp->header_);
+                                          op->key, &op->nbsp->header_,
+                                          op->lflow_ref);
 
         if (!lsp_is_localnet(op->nbsp) && !op->od->n_localnet_ports) {
             return;
@@ -6639,7 +5990,8 @@  build_lswitch_port_sec_op(struct ovn_port *op, struct hmap *lflows,
             ovn_lflow_add_with_lport_and_hint(lflows, op->od,
                                               S_SWITCH_OUT_APPLY_PORT_SEC, 100,
                                               ds_cstr(match), ds_cstr(actions),
-                                              op->key, &op->nbsp->header_);
+                                              op->key, &op->nbsp->header_,
+                                              op->lflow_ref);
         } else if (op->od->n_localnet_ports) {
             ds_put_format(match, "outport == %s && inport == %s",
                           op->od->localnet_ports[0]->json_key,
@@ -6648,15 +6000,16 @@  build_lswitch_port_sec_op(struct ovn_port *op, struct hmap *lflows,
                     S_SWITCH_OUT_APPLY_PORT_SEC, 110,
                     ds_cstr(match), ds_cstr(actions),
                     op->od->localnet_ports[0]->key,
-                    &op->od->localnet_ports[0]->nbsp->header_);
+                    &op->od->localnet_ports[0]->nbsp->header_,
+                    op->lflow_ref);
         }
     }
 }
 
 static void
 build_lswitch_learn_fdb_op(
-        struct ovn_port *op, struct hmap *lflows,
-        struct ds *actions, struct ds *match)
+    struct ovn_port *op, struct lflow_table *lflows,
+    struct ds *actions, struct ds *match)
 {
     ovs_assert(op->nbsp);
 
@@ -6673,7 +6026,8 @@  build_lswitch_learn_fdb_op(
         ovn_lflow_add_with_lport_and_hint(lflows, op->od,
                                           S_SWITCH_IN_LOOKUP_FDB, 100,
                                           ds_cstr(match), ds_cstr(actions),
-                                          op->key, &op->nbsp->header_);
+                                          op->key, &op->nbsp->header_,
+                                          op->lflow_ref);
 
         ds_put_cstr(match, " && "REGBIT_LKUP_FDB" == 0");
         ds_clear(actions);
@@ -6681,13 +6035,14 @@  build_lswitch_learn_fdb_op(
         ovn_lflow_add_with_lport_and_hint(lflows, op->od, S_SWITCH_IN_PUT_FDB,
                                           100, ds_cstr(match),
                                           ds_cstr(actions), op->key,
-                                          &op->nbsp->header_);
+                                          &op->nbsp->header_,
+                                          op->lflow_ref);
     }
 }
 
 static void
 build_lswitch_learn_fdb_od(
-        struct ovn_datapath *od, struct hmap *lflows)
+    struct ovn_datapath *od, struct lflow_table *lflows)
 {
     ovs_assert(od->nbs);
     ovn_lflow_add(lflows, od, S_SWITCH_IN_LOOKUP_FDB, 0, "1", "next;");
@@ -6701,7 +6056,7 @@  build_lswitch_learn_fdb_od(
  *                 (priority 100). */
 static void
 build_lswitch_output_port_sec_od(struct ovn_datapath *od,
-                              struct hmap *lflows)
+                                 struct lflow_table *lflows)
 {
     ovs_assert(od->nbs);
     ovn_lflow_add(lflows, od, S_SWITCH_OUT_CHECK_PORT_SEC, 100,
@@ -6719,7 +6074,7 @@  static void
 skip_port_from_conntrack(const struct ovn_datapath *od, struct ovn_port *op,
                          bool has_stateful_acl, enum ovn_stage in_stage,
                          enum ovn_stage out_stage, uint16_t priority,
-                         struct hmap *lflows)
+                         struct lflow_table *lflows)
 {
     /* Can't use ct() for router ports. Consider the following configuration:
      * lp1(10.0.0.2) on hostA--ls1--lr0--ls2--lp2(10.0.1.2) on hostB, For a
@@ -6741,10 +6096,10 @@  skip_port_from_conntrack(const struct ovn_datapath *od, struct ovn_port *op,
 
     ovn_lflow_add_with_lport_and_hint(lflows, od, in_stage, priority,
                                       ingress_match, ingress_action,
-                                      op->key, &op->nbsp->header_);
+                                      op->key, &op->nbsp->header_, NULL);
     ovn_lflow_add_with_lport_and_hint(lflows, od, out_stage, priority,
                                       egress_match, egress_action,
-                                      op->key, &op->nbsp->header_);
+                                      op->key, &op->nbsp->header_, NULL);
 
     free(ingress_match);
     free(egress_match);
@@ -6753,7 +6108,7 @@  skip_port_from_conntrack(const struct ovn_datapath *od, struct ovn_port *op,
 static void
 build_stateless_filter(const struct ovn_datapath *od,
                        const struct nbrec_acl *acl,
-                       struct hmap *lflows)
+                       struct lflow_table *lflows)
 {
     const char *action = REGBIT_ACL_STATELESS" = 1; next;";
     if (!strcmp(acl->direction, "from-lport")) {
@@ -6774,7 +6129,7 @@  build_stateless_filter(const struct ovn_datapath *od,
 static void
 build_stateless_filters(const struct ovn_datapath *od,
                         const struct ls_port_group_table *ls_port_groups,
-                        struct hmap *lflows)
+                        struct lflow_table *lflows)
 {
     for (size_t i = 0; i < od->nbs->n_acls; i++) {
         const struct nbrec_acl *acl = od->nbs->acls[i];
@@ -6802,7 +6157,7 @@  build_stateless_filters(const struct ovn_datapath *od,
 }
 
 static void
-build_pre_acls(struct ovn_datapath *od, struct hmap *lflows)
+build_pre_acls(struct ovn_datapath *od, struct lflow_table *lflows)
 {
     /* Ingress and Egress Pre-ACL Table (Priority 0): Packets are
      * allowed by default. */
@@ -6821,7 +6176,7 @@  build_ls_stateful_rec_pre_acls(
     const struct ls_stateful_record *ls_stateful_rec,
     const struct ovn_datapath *od,
     const struct ls_port_group_table *ls_port_groups,
-    struct hmap *lflows)
+    struct lflow_table *lflows)
 {
     /* If there are any stateful ACL rules in this datapath, we may
      * send IP packets for some (allow) filters through the conntrack action,
@@ -6942,7 +6297,7 @@  build_empty_lb_event_flow(struct ovn_lb_vip *lb_vip,
 static void
 build_interconn_mcast_snoop_flows(struct ovn_datapath *od,
                                   const struct shash *meter_groups,
-                                  struct hmap *lflows)
+                                  struct lflow_table *lflows)
 {
     struct mcast_switch_info *mcast_sw_info = &od->mcast_info.sw;
     if (!mcast_sw_info->enabled
@@ -6976,7 +6331,7 @@  build_interconn_mcast_snoop_flows(struct ovn_datapath *od,
 
 static void
 build_pre_lb(struct ovn_datapath *od, const struct shash *meter_groups,
-             struct hmap *lflows)
+             struct lflow_table *lflows)
 {
     /* Handle IGMP/MLD packets crossing AZs. */
     build_interconn_mcast_snoop_flows(od, meter_groups, lflows);
@@ -7013,7 +6368,7 @@  build_pre_lb(struct ovn_datapath *od, const struct shash *meter_groups,
 static void
 build_ls_stateful_rec_pre_lb(const struct ls_stateful_record *ls_stateful_rec,
                              const struct ovn_datapath *od,
-                             struct hmap *lflows)
+                             struct lflow_table *lflows)
 {
     for (size_t i = 0; i < od->n_router_ports; i++) {
         skip_port_from_conntrack(od, od->router_ports[i],
@@ -7077,7 +6432,7 @@  build_ls_stateful_rec_pre_lb(const struct ls_stateful_record *ls_stateful_rec,
 static void
 build_pre_stateful(struct ovn_datapath *od,
                    const struct chassis_features *features,
-                   struct hmap *lflows)
+                   struct lflow_table *lflows)
 {
     /* Ingress and Egress pre-stateful Table (Priority 0): Packets are
      * allowed by default. */
@@ -7110,7 +6465,7 @@  static void
 build_acl_hints(const struct ls_stateful_record *ls_stateful_rec,
                 const struct ovn_datapath *od,
                 const struct chassis_features *features,
-                struct hmap *lflows)
+                struct lflow_table *lflows)
 {
     /* This stage builds hints for the IN/OUT_ACL stage. Based on various
      * combinations of ct flags packets may hit only a subset of the logical
@@ -7278,7 +6633,7 @@  build_acl_log(struct ds *actions, const struct nbrec_acl *acl,
 }
 
 static void
-consider_acl(struct hmap *lflows, const struct ovn_datapath *od,
+consider_acl(struct lflow_table *lflows, const struct ovn_datapath *od,
              const struct nbrec_acl *acl, bool has_stateful,
              bool ct_masked_mark, const struct shash *meter_groups,
              uint64_t max_acl_tier, struct ds *match, struct ds *actions)
@@ -7507,7 +6862,7 @@  ovn_update_ipv6_options(struct hmap *lr_ports)
 static void
 build_acl_action_lflows(const struct ls_stateful_record *ls_stateful_rec,
                         const struct ovn_datapath *od,
-                        struct hmap *lflows,
+                        struct lflow_table *lflows,
                         const char *default_acl_action,
                         const struct shash *meter_groups,
                         struct ds *match,
@@ -7582,7 +6937,8 @@  build_acl_action_lflows(const struct ls_stateful_record *ls_stateful_rec,
 }
 
 static void
-build_acl_log_related_flows(const struct ovn_datapath *od, struct hmap *lflows,
+build_acl_log_related_flows(const struct ovn_datapath *od,
+                            struct lflow_table *lflows,
                             const struct nbrec_acl *acl, bool has_stateful,
                             bool ct_masked_mark,
                             const struct shash *meter_groups,
@@ -7658,7 +7014,7 @@  static void
 build_acls(const struct ls_stateful_record *ls_stateful_rec,
            const struct ovn_datapath *od,
            const struct chassis_features *features,
-           struct hmap *lflows,
+           struct lflow_table *lflows,
            const struct ls_port_group_table *ls_port_groups,
            const struct shash *meter_groups)
 {
@@ -7902,7 +7258,7 @@  build_acls(const struct ls_stateful_record *ls_stateful_rec,
 }
 
 static void
-build_qos(struct ovn_datapath *od, struct hmap *lflows) {
+build_qos(struct ovn_datapath *od, struct lflow_table *lflows) {
     struct ds action = DS_EMPTY_INITIALIZER;
 
     ovn_lflow_add(lflows, od, S_SWITCH_IN_QOS_MARK, 0, "1", "next;");
@@ -7963,7 +7319,7 @@  build_qos(struct ovn_datapath *od, struct hmap *lflows) {
 }
 
 static void
-build_lb_rules_pre_stateful(struct hmap *lflows,
+build_lb_rules_pre_stateful(struct lflow_table *lflows,
                             struct ovn_lb_datapaths *lb_dps,
                             bool ct_lb_mark,
                             const struct ovn_datapaths *ls_datapaths,
@@ -8065,7 +7421,8 @@  build_lb_rules_pre_stateful(struct hmap *lflows,
  *
  */
 static void
-build_lb_affinity_lr_flows(struct hmap *lflows, const struct ovn_northd_lb *lb,
+build_lb_affinity_lr_flows(struct lflow_table *lflows,
+                           const struct ovn_northd_lb *lb,
                            struct ovn_lb_vip *lb_vip, char *new_lb_match,
                            char *lb_action, const unsigned long *dp_bitmap,
                            const struct ovn_datapaths *lr_datapaths)
@@ -8252,7 +7609,7 @@  build_lb_affinity_lr_flows(struct hmap *lflows, const struct ovn_northd_lb *lb,
  *
  */
 static void
-build_lb_affinity_ls_flows(struct hmap *lflows,
+build_lb_affinity_ls_flows(struct lflow_table *lflows,
                            struct ovn_lb_datapaths *lb_dps,
                            struct ovn_lb_vip *lb_vip,
                            const struct ovn_datapaths *ls_datapaths)
@@ -8396,7 +7753,7 @@  build_lb_affinity_ls_flows(struct hmap *lflows,
 
 static void
 build_lswitch_lb_affinity_default_flows(struct ovn_datapath *od,
-                                        struct hmap *lflows)
+                                        struct lflow_table *lflows)
 {
     ovs_assert(od->nbs);
     ovn_lflow_add(lflows, od, S_SWITCH_IN_LB_AFF_CHECK, 0, "1", "next;");
@@ -8405,7 +7762,7 @@  build_lswitch_lb_affinity_default_flows(struct ovn_datapath *od,
 
 static void
 build_lrouter_lb_affinity_default_flows(struct ovn_datapath *od,
-                                        struct hmap *lflows)
+                                        struct lflow_table *lflows)
 {
     ovs_assert(od->nbr);
     ovn_lflow_add(lflows, od, S_ROUTER_IN_LB_AFF_CHECK, 0, "1", "next;");
@@ -8413,7 +7770,7 @@  build_lrouter_lb_affinity_default_flows(struct ovn_datapath *od,
 }
 
 static void
-build_lb_rules(struct hmap *lflows, struct ovn_lb_datapaths *lb_dps,
+build_lb_rules(struct lflow_table *lflows, struct ovn_lb_datapaths *lb_dps,
                const struct ovn_datapaths *ls_datapaths,
                const struct chassis_features *features, struct ds *match,
                struct ds *action, const struct shash *meter_groups,
@@ -8493,7 +7850,7 @@  build_lb_rules(struct hmap *lflows, struct ovn_lb_datapaths *lb_dps,
 static void
 build_stateful(struct ovn_datapath *od,
                const struct chassis_features *features,
-               struct hmap *lflows)
+               struct lflow_table *lflows)
 {
     const char *ct_block_action = features->ct_no_masked_label
                                   ? "ct_mark.blocked"
@@ -8544,7 +7901,7 @@  build_stateful(struct ovn_datapath *od,
 static void
 build_lb_hairpin(const struct ls_stateful_record *ls_stateful_rec,
                  const struct ovn_datapath *od,
-                 struct hmap *lflows)
+                 struct lflow_table *lflows)
 {
     /* Ingress Pre-Hairpin/Nat-Hairpin/Hairpin tabled (Priority 0).
      * Packets that don't need hairpinning should continue processing.
@@ -8601,7 +7958,7 @@  build_lb_hairpin(const struct ls_stateful_record *ls_stateful_rec,
 }
 
 static void
-build_vtep_hairpin(struct ovn_datapath *od, struct hmap *lflows)
+build_vtep_hairpin(struct ovn_datapath *od, struct lflow_table *lflows)
 {
     if (!od->has_vtep_lports) {
         /* There is no need in these flows if datapath has no vtep lports. */
@@ -8649,7 +8006,7 @@  build_vtep_hairpin(struct ovn_datapath *od, struct hmap *lflows)
 
 /* Build logical flows for the forwarding groups */
 static void
-build_fwd_group_lflows(struct ovn_datapath *od, struct hmap *lflows)
+build_fwd_group_lflows(struct ovn_datapath *od, struct lflow_table *lflows)
 {
     ovs_assert(od->nbs);
     if (!od->nbs->n_forwarding_groups) {
@@ -8830,7 +8187,8 @@  build_lswitch_rport_arp_req_self_orig_flow(struct ovn_port *op,
                                         uint32_t priority,
                                         const struct ovn_datapath *od,
                                         const struct lr_nat_record *lrnat_rec,
-                                        struct hmap *lflows)
+                                        struct lflow_table *lflows,
+                                        struct lflow_ref *lflow_ref)
 {
     struct ds eth_src = DS_EMPTY_INITIALIZER;
     struct ds match = DS_EMPTY_INITIALIZER;
@@ -8854,8 +8212,10 @@  build_lswitch_rport_arp_req_self_orig_flow(struct ovn_port *op,
     ds_put_format(&match,
                   "eth.src == %s && (arp.op == 1 || rarp.op == 3 || nd_ns)",
                   ds_cstr(&eth_src));
-    ovn_lflow_add(lflows, od, S_SWITCH_IN_L2_LKUP, priority, ds_cstr(&match),
-                  "outport = \""MC_FLOOD_L2"\"; output;");
+    ovn_lflow_add_with_lflow_ref(lflows, od, S_SWITCH_IN_L2_LKUP, priority,
+                                 ds_cstr(&match),
+                                 "outport = \""MC_FLOOD_L2"\"; output;",
+                                 lflow_ref);
 
     ds_destroy(&eth_src);
     ds_destroy(&match);
@@ -8920,11 +8280,11 @@  lrouter_port_ipv6_reachable(const struct ovn_port *op,
  * switching domain as regular broadcast.
  */
 static void
-build_lswitch_rport_arp_req_flow(const char *ips, int addr_family,
-                                 struct ovn_port *patch_op,
-                                 const struct ovn_datapath *od,
-                                 uint32_t priority, struct hmap *lflows,
-                                 const struct ovsdb_idl_row *stage_hint)
+build_lswitch_rport_arp_req_flow(
+    const char *ips, int addr_family, struct ovn_port *patch_op,
+    const struct ovn_datapath *od, uint32_t priority,
+    struct lflow_table *lflows, const struct ovsdb_idl_row *stage_hint,
+    struct lflow_ref *lflow_ref)
 {
     struct ds match   = DS_EMPTY_INITIALIZER;
     struct ds actions = DS_EMPTY_INITIALIZER;
@@ -8938,14 +8298,17 @@  build_lswitch_rport_arp_req_flow(const char *ips, int addr_family,
         ds_put_format(&actions, "clone {outport = %s; output; }; "
                                 "outport = \""MC_FLOOD_L2"\"; output;",
                       patch_op->json_key);
-        ovn_lflow_add_with_hint(lflows, od, S_SWITCH_IN_L2_LKUP,
-                                priority, ds_cstr(&match),
-                                ds_cstr(&actions), stage_hint);
+        ovn_lflow_add_with_lflow_ref_hint(lflows, od, S_SWITCH_IN_L2_LKUP,
+                                          priority, ds_cstr(&match),
+                                          ds_cstr(&actions), stage_hint,
+                                          lflow_ref);
     } else {
         ds_put_format(&actions, "outport = %s; output;", patch_op->json_key);
-        ovn_lflow_add_with_hint(lflows, od, S_SWITCH_IN_L2_LKUP, priority,
-                                ds_cstr(&match), ds_cstr(&actions),
-                                stage_hint);
+        ovn_lflow_add_with_lflow_ref_hint(lflows, od, S_SWITCH_IN_L2_LKUP,
+                                          priority, ds_cstr(&match),
+                                          ds_cstr(&actions),
+                                          stage_hint,
+                                          lflow_ref);
     }
 
     ds_destroy(&match);
@@ -8963,7 +8326,7 @@  static void
 build_lswitch_rport_arp_req_flows(struct ovn_port *op,
                                   struct ovn_datapath *sw_od,
                                   struct ovn_port *sw_op,
-                                  struct hmap *lflows,
+                                  struct lflow_table *lflows,
                                   const struct ovsdb_idl_row *stage_hint)
 {
     if (!op || !op->nbrp) {
@@ -8981,12 +8344,12 @@  build_lswitch_rport_arp_req_flows(struct ovn_port *op,
     for (size_t i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) {
         build_lswitch_rport_arp_req_flow(
             op->lrp_networks.ipv4_addrs[i].addr_s, AF_INET, sw_op, sw_od, 80,
-            lflows, stage_hint);
+            lflows, stage_hint, sw_op->lflow_ref);
     }
     for (size_t i = 0; i < op->lrp_networks.n_ipv6_addrs; i++) {
         build_lswitch_rport_arp_req_flow(
             op->lrp_networks.ipv6_addrs[i].addr_s, AF_INET6, sw_op, sw_od, 80,
-            lflows, stage_hint);
+            lflows, stage_hint, sw_op->lflow_ref);
     }
 }
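
The pattern in the two loops above repeats throughout the switch pipeline: helpers
that generate flows on behalf of a logical switch port thread that port's
'struct lflow_ref' down into the lflow-mgr call, while call sites that do not need
per-port tracking pass NULL.  A minimal sketch of the new calling convention
(hypothetical stage, match and actions; only the signatures come from this patch):

    static void
    build_example_lsp_flow(struct ovn_port *sw_op, struct lflow_table *lflows)
    {
        /* Tracked: recorded against sw_op->lflow_ref so it can be cleared
         * and re-synced when only this port changes. */
        ovn_lflow_add_with_lflow_ref_hint(lflows, sw_op->od,
                                          S_SWITCH_IN_L2_LKUP, 80,
                                          "eth.dst == 00:00:00:00:00:01",
                                          "outport = \"lsp1\"; output;",
                                          &sw_op->nbsp->header_,
                                          sw_op->lflow_ref);

        /* Untracked: passing NULL keeps the old behaviour, as
         * skip_port_from_conntrack() does above. */
        ovn_lflow_add_with_lport_and_hint(lflows, sw_op->od,
                                          S_SWITCH_IN_ARP_ND_RSP, 100,
                                          "inport == \"lsp1\"", "next;",
                                          sw_op->key, &sw_op->nbsp->header_,
                                          NULL);
    }
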
 
@@ -9001,7 +8364,8 @@  static void
 build_lswitch_rport_arp_req_flows_for_lbnats(
     struct ovn_port *op, const struct lr_stateful_record *lr_stateful_rec,
     const struct ovn_datapath *sw_od, struct ovn_port *sw_op,
-    struct hmap *lflows, const struct ovsdb_idl_row *stage_hint)
+    struct lflow_table *lflows, const struct ovsdb_idl_row *stage_hint,
+    struct lflow_ref *lflow_ref)
 {
     if (!op || !op->nbrp) {
         return;
@@ -9030,7 +8394,7 @@  build_lswitch_rport_arp_req_flows_for_lbnats(
                 lrouter_port_ipv4_reachable(op, ipv4_addr)) {
                 build_lswitch_rport_arp_req_flow(
                     ip_addr, AF_INET, sw_op, sw_od, 80, lflows,
-                    stage_hint);
+                    stage_hint, lflow_ref);
             }
         }
         SSET_FOR_EACH (ip_addr, &lr_stateful_rec->lb_ips->ips_v6_reachable) {
@@ -9043,7 +8407,7 @@  build_lswitch_rport_arp_req_flows_for_lbnats(
                 lrouter_port_ipv6_reachable(op, &ipv6_addr)) {
                 build_lswitch_rport_arp_req_flow(
                     ip_addr, AF_INET6, sw_op, sw_od, 80, lflows,
-                    stage_hint);
+                    stage_hint, lflow_ref);
             }
         }
     }
@@ -9058,7 +8422,7 @@  build_lswitch_rport_arp_req_flows_for_lbnats(
     if (sw_od->n_router_ports != sw_od->nbs->n_ports) {
         build_lswitch_rport_arp_req_self_orig_flow(op, 75, sw_od,
                                                    lr_stateful_rec->lrnat_rec,
-                                                   lflows);
+                                                   lflows, lflow_ref);
     }
 
     for (size_t i = 0; i < lr_stateful_rec->lrnat_rec->n_nat_entries; i++) {
@@ -9082,14 +8446,14 @@  build_lswitch_rport_arp_req_flows_for_lbnats(
                                nat->external_ip)) {
                 build_lswitch_rport_arp_req_flow(
                     nat->external_ip, AF_INET6, sw_op, sw_od, 80, lflows,
-                    stage_hint);
+                    stage_hint, lflow_ref);
             }
         } else {
             if (!sset_contains(&lr_stateful_rec->lb_ips->ips_v4,
                                nat->external_ip)) {
                 build_lswitch_rport_arp_req_flow(
                     nat->external_ip, AF_INET, sw_op, sw_od, 80, lflows,
-                    stage_hint);
+                    stage_hint, lflow_ref);
             }
         }
     }
@@ -9116,7 +8480,7 @@  build_lswitch_rport_arp_req_flows_for_lbnats(
                                nat->external_ip)) {
                 build_lswitch_rport_arp_req_flow(
                     nat->external_ip, AF_INET6, sw_op, sw_od, 80, lflows,
-                    stage_hint);
+                    stage_hint, lflow_ref);
             }
         } else {
             if (!lr_stateful_rec ||
@@ -9124,7 +8488,7 @@  build_lswitch_rport_arp_req_flows_for_lbnats(
                                nat->external_ip)) {
                 build_lswitch_rport_arp_req_flow(
                     nat->external_ip, AF_INET, sw_op, sw_od, 80, lflows,
-                    stage_hint);
+                    stage_hint, lflow_ref);
             }
         }
     }
@@ -9135,7 +8499,7 @@  build_dhcpv4_options_flows(struct ovn_port *op,
                            struct lport_addresses *lsp_addrs,
                            struct ovn_port *inport, bool is_external,
                            const struct shash *meter_groups,
-                           struct hmap *lflows)
+                           struct lflow_table *lflows)
 {
     struct ds match = DS_EMPTY_INITIALIZER;
 
@@ -9166,7 +8530,7 @@  build_dhcpv4_options_flows(struct ovn_port *op,
                               op->json_key);
             }
 
-            ovn_lflow_add_with_hint__(lflows, op->od,
+            ovn_lflow_add_with_lflow_ref_hint__(lflows, op->od,
                                       S_SWITCH_IN_DHCP_OPTIONS, 100,
                                       ds_cstr(&match),
                                       ds_cstr(&options_action),
@@ -9174,7 +8538,8 @@  build_dhcpv4_options_flows(struct ovn_port *op,
                                       copp_meter_get(COPP_DHCPV4_OPTS,
                                                      op->od->nbs->copp,
                                                      meter_groups),
-                                      &op->nbsp->dhcpv4_options->header_);
+                                      &op->nbsp->dhcpv4_options->header_,
+                                      op->lflow_ref);
             ds_clear(&match);
 
             /* If REGBIT_DHCP_OPTS_RESULT is set, it means the
@@ -9193,7 +8558,8 @@  build_dhcpv4_options_flows(struct ovn_port *op,
             ovn_lflow_add_with_lport_and_hint(
                 lflows, op->od, S_SWITCH_IN_DHCP_RESPONSE, 100,
                 ds_cstr(&match), ds_cstr(&response_action), inport->key,
-                &op->nbsp->dhcpv4_options->header_);
+                &op->nbsp->dhcpv4_options->header_,
+                op->lflow_ref);
             ds_destroy(&options_action);
             ds_destroy(&response_action);
             ds_destroy(&ipv4_addr_match);
@@ -9220,7 +8586,8 @@  build_dhcpv4_options_flows(struct ovn_port *op,
                 ovn_lflow_add_with_lport_and_hint(
                     lflows, op->od, S_SWITCH_OUT_ACL_EVAL, 34000,
                     ds_cstr(&match),dhcp_actions, op->key,
-                    &op->nbsp->dhcpv4_options->header_);
+                    &op->nbsp->dhcpv4_options->header_,
+                    op->lflow_ref);
             }
             break;
         }
@@ -9233,7 +8600,7 @@  build_dhcpv6_options_flows(struct ovn_port *op,
                            struct lport_addresses *lsp_addrs,
                            struct ovn_port *inport, bool is_external,
                            const struct shash *meter_groups,
-                           struct hmap *lflows)
+                           struct lflow_table *lflows)
 {
     struct ds match = DS_EMPTY_INITIALIZER;
 
@@ -9255,7 +8622,7 @@  build_dhcpv6_options_flows(struct ovn_port *op,
                               op->json_key);
             }
 
-            ovn_lflow_add_with_hint__(lflows, op->od,
+            ovn_lflow_add_with_lflow_ref_hint__(lflows, op->od,
                                       S_SWITCH_IN_DHCP_OPTIONS, 100,
                                       ds_cstr(&match),
                                       ds_cstr(&options_action),
@@ -9263,7 +8630,8 @@  build_dhcpv6_options_flows(struct ovn_port *op,
                                       copp_meter_get(COPP_DHCPV6_OPTS,
                                                      op->od->nbs->copp,
                                                      meter_groups),
-                                      &op->nbsp->dhcpv6_options->header_);
+                                      &op->nbsp->dhcpv6_options->header_,
+                                      op->lflow_ref);
 
             /* If REGBIT_DHCP_OPTS_RESULT is set to 1, it means the
              * put_dhcpv6_opts action is successful */
@@ -9271,7 +8639,7 @@  build_dhcpv6_options_flows(struct ovn_port *op,
             ovn_lflow_add_with_lport_and_hint(
                 lflows, op->od, S_SWITCH_IN_DHCP_RESPONSE, 100,
                 ds_cstr(&match), ds_cstr(&response_action), inport->key,
-                &op->nbsp->dhcpv6_options->header_);
+                &op->nbsp->dhcpv6_options->header_, op->lflow_ref);
             ds_destroy(&options_action);
             ds_destroy(&response_action);
 
@@ -9303,7 +8671,8 @@  build_dhcpv6_options_flows(struct ovn_port *op,
                 ovn_lflow_add_with_lport_and_hint(
                     lflows, op->od, S_SWITCH_OUT_ACL_EVAL, 34000,
                     ds_cstr(&match),dhcp6_actions, op->key,
-                    &op->nbsp->dhcpv6_options->header_);
+                    &op->nbsp->dhcpv6_options->header_,
+                    op->lflow_ref);
             }
             break;
         }
@@ -9314,7 +8683,7 @@  build_dhcpv6_options_flows(struct ovn_port *op,
 static void
 build_drop_arp_nd_flows_for_unbound_router_ports(struct ovn_port *op,
                                                  const struct ovn_port *port,
-                                                 struct hmap *lflows)
+                                                 struct lflow_table *lflows)
 {
     struct ds match = DS_EMPTY_INITIALIZER;
 
@@ -9334,7 +8703,7 @@  build_drop_arp_nd_flows_for_unbound_router_ports(struct ovn_port *op,
                     ovn_lflow_add_with_lport_and_hint(
                         lflows, op->od, S_SWITCH_IN_EXTERNAL_PORT, 100,
                         ds_cstr(&match),  debug_drop_action(), port->key,
-                        &op->nbsp->header_);
+                        &op->nbsp->header_, op->lflow_ref);
                 }
                 for (size_t l = 0; l < rp->lsp_addrs[k].n_ipv6_addrs; l++) {
                     ds_clear(&match);
@@ -9350,7 +8719,7 @@  build_drop_arp_nd_flows_for_unbound_router_ports(struct ovn_port *op,
                     ovn_lflow_add_with_lport_and_hint(
                         lflows, op->od, S_SWITCH_IN_EXTERNAL_PORT, 100,
                         ds_cstr(&match), debug_drop_action(), port->key,
-                        &op->nbsp->header_);
+                        &op->nbsp->header_, op->lflow_ref);
                 }
 
                 ds_clear(&match);
@@ -9366,7 +8735,8 @@  build_drop_arp_nd_flows_for_unbound_router_ports(struct ovn_port *op,
                                                   100, ds_cstr(&match),
                                                   debug_drop_action(),
                                                   port->key,
-                                                  &op->nbsp->header_);
+                                                  &op->nbsp->header_,
+                                                  op->lflow_ref);
             }
         }
     }
@@ -9381,7 +8751,7 @@  is_vlan_transparent(const struct ovn_datapath *od)
 
 static void
 build_lswitch_lflows_l2_unknown(struct ovn_datapath *od,
-                                struct hmap *lflows)
+                                struct lflow_table *lflows)
 {
     /* Ingress table 25/26: Destination lookup for unknown MACs. */
     if (od->has_unknown) {
@@ -9402,7 +8772,7 @@  static void
 build_lswitch_lflows_pre_acl_and_acl(
     struct ovn_datapath *od,
     const struct chassis_features *features,
-    struct hmap *lflows,
+    struct lflow_table *lflows,
     const struct shash *meter_groups)
 {
     ovs_assert(od->nbs);
@@ -9418,7 +8788,7 @@  build_lswitch_lflows_pre_acl_and_acl(
  * 100). */
 static void
 build_lswitch_lflows_admission_control(struct ovn_datapath *od,
-                                       struct hmap *lflows)
+                                       struct lflow_table *lflows)
 {
     ovs_assert(od->nbs);
 
@@ -9453,7 +8823,7 @@  build_lswitch_lflows_admission_control(struct ovn_datapath *od,
 
 static void
 build_lswitch_arp_nd_responder_skip_local(struct ovn_port *op,
-                                          struct hmap *lflows,
+                                          struct lflow_table *lflows,
                                           struct ds *match)
 {
     ovs_assert(op->nbsp);
@@ -9465,14 +8835,14 @@  build_lswitch_arp_nd_responder_skip_local(struct ovn_port *op,
     ovn_lflow_add_with_lport_and_hint(lflows, op->od,
                                       S_SWITCH_IN_ARP_ND_RSP, 100,
                                       ds_cstr(match), "next;", op->key,
-                                      &op->nbsp->header_);
+                                      &op->nbsp->header_, op->lflow_ref);
 }
 
 /* Ingress table 19: ARP/ND responder, reply for known IPs.
  * (priority 50). */
 static void
 build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
-                                         struct hmap *lflows,
+                                         struct lflow_table *lflows,
                                          const struct hmap *ls_ports,
                                          const struct shash *meter_groups,
                                          struct ds *actions,
@@ -9557,7 +8927,8 @@  build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
                                               S_SWITCH_IN_ARP_ND_RSP, 100,
                                               ds_cstr(match),
                                               ds_cstr(actions), vparent,
-                                              &vp->nbsp->header_);
+                                              &vp->nbsp->header_,
+                                              op->lflow_ref);
         }
 
         free(tokstr);
@@ -9601,11 +8972,12 @@  build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
                     "output;",
                     op->lsp_addrs[i].ea_s, op->lsp_addrs[i].ea_s,
                     op->lsp_addrs[i].ipv4_addrs[j].addr_s);
-                ovn_lflow_add_with_hint(lflows, op->od,
-                                        S_SWITCH_IN_ARP_ND_RSP, 50,
-                                        ds_cstr(match),
-                                        ds_cstr(actions),
-                                        &op->nbsp->header_);
+                ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
+                                                  S_SWITCH_IN_ARP_ND_RSP, 50,
+                                                  ds_cstr(match),
+                                                  ds_cstr(actions),
+                                                  &op->nbsp->header_,
+                                                  op->lflow_ref);
 
                 /* Do not reply to an ARP request from the port that owns
                  * the address (otherwise a DHCP client that ARPs to check
@@ -9624,7 +8996,8 @@  build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
                                                   S_SWITCH_IN_ARP_ND_RSP,
                                                   100, ds_cstr(match),
                                                   "next;", op->key,
-                                                  &op->nbsp->header_);
+                                                  &op->nbsp->header_,
+                                                  op->lflow_ref);
             }
 
             /* For ND solicitations, we need to listen for both the
@@ -9654,15 +9027,16 @@  build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
                         op->lsp_addrs[i].ipv6_addrs[j].addr_s,
                         op->lsp_addrs[i].ipv6_addrs[j].addr_s,
                         op->lsp_addrs[i].ea_s);
-                ovn_lflow_add_with_hint__(lflows, op->od,
-                                          S_SWITCH_IN_ARP_ND_RSP, 50,
-                                          ds_cstr(match),
-                                          ds_cstr(actions),
-                                          NULL,
-                                          copp_meter_get(COPP_ND_NA,
-                                              op->od->nbs->copp,
-                                              meter_groups),
-                                          &op->nbsp->header_);
+                ovn_lflow_add_with_lflow_ref_hint__(lflows, op->od,
+                                                    S_SWITCH_IN_ARP_ND_RSP, 50,
+                                                    ds_cstr(match),
+                                                    ds_cstr(actions),
+                                                    NULL,
+                                                    copp_meter_get(COPP_ND_NA,
+                                                        op->od->nbs->copp,
+                                                        meter_groups),
+                                                    &op->nbsp->header_,
+                                                    op->lflow_ref);
 
                 /* Do not reply to a solicitation from the port that owns
                  * the address (otherwise DAD detection will fail). */
@@ -9671,7 +9045,8 @@  build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
                                                   S_SWITCH_IN_ARP_ND_RSP,
                                                   100, ds_cstr(match),
                                                   "next;", op->key,
-                                                  &op->nbsp->header_);
+                                                  &op->nbsp->header_,
+                                                  op->lflow_ref);
             }
         }
     }
@@ -9717,8 +9092,12 @@  build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
                 ea_s,
                 ea_s);
 
-            ovn_lflow_add_with_hint(lflows, op->od, S_SWITCH_IN_ARP_ND_RSP,
-                30, ds_cstr(match), ds_cstr(actions), &op->nbsp->header_);
+            ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
+                                              S_SWITCH_IN_ARP_ND_RSP,
+                                              30, ds_cstr(match),
+                                              ds_cstr(actions),
+                                              &op->nbsp->header_,
+                                              op->lflow_ref);
         }
 
         /* Add IPv6 NDP responses.
@@ -9761,15 +9140,16 @@  build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
                     lsp_is_router(op->nbsp) ? "nd_na_router" : "nd_na",
                     ea_s,
                     ea_s);
-            ovn_lflow_add_with_hint__(lflows, op->od,
-                                      S_SWITCH_IN_ARP_ND_RSP, 30,
-                                      ds_cstr(match),
-                                      ds_cstr(actions),
-                                      NULL,
-                                      copp_meter_get(COPP_ND_NA,
-                                          op->od->nbs->copp,
-                                          meter_groups),
-                                      &op->nbsp->header_);
+            ovn_lflow_add_with_lflow_ref_hint__(lflows, op->od,
+                                                S_SWITCH_IN_ARP_ND_RSP, 30,
+                                                ds_cstr(match),
+                                                ds_cstr(actions),
+                                                NULL,
+                                                copp_meter_get(COPP_ND_NA,
+                                                    op->od->nbs->copp,
+                                                    meter_groups),
+                                                &op->nbsp->header_,
+                                                op->lflow_ref);
             ds_destroy(&ip6_dst_match);
             ds_destroy(&nd_target_match);
         }
@@ -9780,7 +9160,7 @@  build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
  * (priority 0)*/
 static void
 build_lswitch_arp_nd_responder_default(struct ovn_datapath *od,
-                                       struct hmap *lflows)
+                                       struct lflow_table *lflows)
 {
     ovs_assert(od->nbs);
     ovn_lflow_add(lflows, od, S_SWITCH_IN_ARP_ND_RSP, 0, "1", "next;");
@@ -9791,7 +9171,7 @@  build_lswitch_arp_nd_responder_default(struct ovn_datapath *od,
 static void
 build_lswitch_arp_nd_service_monitor(const struct ovn_northd_lb *lb,
                                      const struct hmap *ls_ports,
-                                     struct hmap *lflows,
+                                     struct lflow_table *lflows,
                                      struct ds *actions,
                                      struct ds *match)
 {
@@ -9867,7 +9247,7 @@  build_lswitch_arp_nd_service_monitor(const struct ovn_northd_lb *lb,
  * priority 100 flows. */
 static void
 build_lswitch_dhcp_options_and_response(struct ovn_port *op,
-                                        struct hmap *lflows,
+                                        struct lflow_table *lflows,
                                         const struct shash *meter_groups)
 {
     ovs_assert(op->nbsp);
@@ -9922,7 +9302,7 @@  build_lswitch_dhcp_options_and_response(struct ovn_port *op,
  * (priority 0). */
 static void
 build_lswitch_dhcp_and_dns_defaults(struct ovn_datapath *od,
-                                        struct hmap *lflows)
+                                        struct lflow_table *lflows)
 {
     ovs_assert(od->nbs);
     ovn_lflow_add(lflows, od, S_SWITCH_IN_DHCP_OPTIONS, 0, "1", "next;");
@@ -9937,7 +9317,7 @@  build_lswitch_dhcp_and_dns_defaults(struct ovn_datapath *od,
 */
 static void
 build_lswitch_dns_lookup_and_response(struct ovn_datapath *od,
-                                      struct hmap *lflows,
+                                      struct lflow_table *lflows,
                                       const struct shash *meter_groups)
 {
     ovs_assert(od->nbs);
@@ -9968,7 +9348,7 @@  build_lswitch_dns_lookup_and_response(struct ovn_datapath *od,
  * binding the external ports. */
 static void
 build_lswitch_external_port(struct ovn_port *op,
-                            struct hmap *lflows)
+                            struct lflow_table *lflows)
 {
     ovs_assert(op->nbsp);
     if (!lsp_is_external(op->nbsp)) {
@@ -9984,7 +9364,7 @@  build_lswitch_external_port(struct ovn_port *op,
  * (priority 70 - 100). */
 static void
 build_lswitch_destination_lookup_bmcast(struct ovn_datapath *od,
-                                        struct hmap *lflows,
+                                        struct lflow_table *lflows,
                                         struct ds *actions,
                                         const struct shash *meter_groups)
 {
@@ -10077,7 +9457,7 @@  build_lswitch_destination_lookup_bmcast(struct ovn_datapath *od,
  * (priority 90). */
 static void
 build_lswitch_ip_mcast_igmp_mld(struct ovn_igmp_group *igmp_group,
-                                struct hmap *lflows,
+                                struct lflow_table *lflows,
                                 struct ds *actions,
                                 struct ds *match)
 {
@@ -10157,7 +9537,8 @@  build_lswitch_ip_mcast_igmp_mld(struct ovn_igmp_group *igmp_group,
 
 /* Ingress table 25: Destination lookup, unicast handling (priority 50), */
 static void
-build_lswitch_ip_unicast_lookup(struct ovn_port *op, struct hmap *lflows,
+build_lswitch_ip_unicast_lookup(struct ovn_port *op,
+                                struct lflow_table *lflows,
                                 struct ds *actions, struct ds *match)
 {
     ovs_assert(op->nbsp);
@@ -10190,10 +9571,12 @@  build_lswitch_ip_unicast_lookup(struct ovn_port *op, struct hmap *lflows,
 
             ds_clear(actions);
             ds_put_format(actions, action, op->json_key);
-            ovn_lflow_add_with_hint(lflows, op->od, S_SWITCH_IN_L2_LKUP,
-                                    50, ds_cstr(match),
-                                    ds_cstr(actions),
-                                    &op->nbsp->header_);
+            ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
+                                              S_SWITCH_IN_L2_LKUP,
+                                              50, ds_cstr(match),
+                                              ds_cstr(actions),
+                                              &op->nbsp->header_,
+                                              op->lflow_ref);
         } else if (!strcmp(op->nbsp->addresses[i], "unknown")) {
             continue;
         } else if (is_dynamic_lsp_address(op->nbsp->addresses[i])) {
@@ -10208,10 +9591,12 @@  build_lswitch_ip_unicast_lookup(struct ovn_port *op, struct hmap *lflows,
 
             ds_clear(actions);
             ds_put_format(actions, action, op->json_key);
-            ovn_lflow_add_with_hint(lflows, op->od, S_SWITCH_IN_L2_LKUP,
-                                    50, ds_cstr(match),
-                                    ds_cstr(actions),
-                                    &op->nbsp->header_);
+            ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
+                                              S_SWITCH_IN_L2_LKUP,
+                                              50, ds_cstr(match),
+                                              ds_cstr(actions),
+                                              &op->nbsp->header_,
+                                              op->lflow_ref);
         } else if (!strcmp(op->nbsp->addresses[i], "router")) {
             if (!op->peer || !op->peer->nbrp
                 || !ovs_scan(op->peer->nbrp->mac,
@@ -10263,10 +9648,11 @@  build_lswitch_ip_unicast_lookup(struct ovn_port *op, struct hmap *lflows,
 
             ds_clear(actions);
             ds_put_format(actions, action, op->json_key);
-            ovn_lflow_add_with_hint(lflows, op->od,
-                                    S_SWITCH_IN_L2_LKUP, 50,
-                                    ds_cstr(match), ds_cstr(actions),
-                                    &op->nbsp->header_);
+            ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
+                                              S_SWITCH_IN_L2_LKUP, 50,
+                                              ds_cstr(match), ds_cstr(actions),
+                                              &op->nbsp->header_,
+                                              op->lflow_ref);
         } else {
             static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1);
 
@@ -10281,7 +9667,8 @@  build_lswitch_ip_unicast_lookup(struct ovn_port *op, struct hmap *lflows,
 static void
 build_lswitch_ip_unicast_lookup_for_nats(
     struct ovn_port *op, const struct lr_stateful_record *lr_stateful_rec,
-    struct hmap *lflows, struct ds *match, struct ds *actions)
+    struct lflow_table *lflows, struct ds *match, struct ds *actions,
+    struct lflow_ref *lflow_ref)
 {
     ovs_assert(op->nbsp);
 
@@ -10317,11 +9704,12 @@  build_lswitch_ip_unicast_lookup_for_nats(
 
             ds_clear(actions);
             ds_put_format(actions, action, op->json_key);
-            ovn_lflow_add_with_hint(lflows, op->od,
+            ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
                                     S_SWITCH_IN_L2_LKUP, 50,
                                     ds_cstr(match),
                                     ds_cstr(actions),
-                                    &op->nbsp->header_);
+                                    &op->nbsp->header_,
+                                    lflow_ref);
         }
     }
 }
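
Unlike build_lswitch_ip_unicast_lookup() above, which always records its flows
against op->lflow_ref, the NAT variant takes the reference as an explicit
parameter, leaving it to the caller to decide which lflow_ref should track the
LB/NAT-dependent flows.  Roughly (hypothetical caller; 'nat_lflow_ref' is a
placeholder for whatever reference the caller owns):

    struct ds match = DS_EMPTY_INITIALIZER;
    struct ds actions = DS_EMPTY_INITIALIZER;

    /* MAC/IP lookup flows: tracked implicitly via op->lflow_ref. */
    build_lswitch_ip_unicast_lookup(op, lflows, &actions, &match);

    /* NAT external IP lookup flows: tracked via the reference passed in. */
    build_lswitch_ip_unicast_lookup_for_nats(op, lr_stateful_rec, lflows,
                                             &match, &actions, nat_lflow_ref);

    ds_destroy(&match);
    ds_destroy(&actions);
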
@@ -10561,7 +9949,7 @@  get_outport_for_routing_policy_nexthop(struct ovn_datapath *od,
 }
 
 static void
-build_routing_policy_flow(struct hmap *lflows, struct ovn_datapath *od,
+build_routing_policy_flow(struct lflow_table *lflows, struct ovn_datapath *od,
                           const struct hmap *lr_ports,
                           const struct nbrec_logical_router_policy *rule,
                           const struct ovsdb_idl_row *stage_hint)
@@ -10626,7 +10014,8 @@  build_routing_policy_flow(struct hmap *lflows, struct ovn_datapath *od,
 }
 
 static void
-build_ecmp_routing_policy_flows(struct hmap *lflows, struct ovn_datapath *od,
+build_ecmp_routing_policy_flows(struct lflow_table *lflows,
+                                struct ovn_datapath *od,
                                 const struct hmap *lr_ports,
                                 const struct nbrec_logical_router_policy *rule,
                                 uint16_t ecmp_group_id)
@@ -10762,7 +10151,7 @@  get_route_table_id(struct simap *route_tables, const char *route_table_name)
 }
 
 static void
-build_route_table_lflow(struct ovn_datapath *od, struct hmap *lflows,
+build_route_table_lflow(struct ovn_datapath *od, struct lflow_table *lflows,
                         struct nbrec_logical_router_port *lrp,
                         struct simap *route_tables)
 {
@@ -11173,7 +10562,7 @@  find_static_route_outport(struct ovn_datapath *od, const struct hmap *lr_ports,
 }
 
 static void
-add_ecmp_symmetric_reply_flows(struct hmap *lflows,
+add_ecmp_symmetric_reply_flows(struct lflow_table *lflows,
                                struct ovn_datapath *od,
                                bool ct_masked_mark,
                                const char *port_ip,
@@ -11338,7 +10727,7 @@  add_ecmp_symmetric_reply_flows(struct hmap *lflows,
 }
 
 static void
-build_ecmp_route_flow(struct hmap *lflows, struct ovn_datapath *od,
+build_ecmp_route_flow(struct lflow_table *lflows, struct ovn_datapath *od,
                       bool ct_masked_mark, const struct hmap *lr_ports,
                       struct ecmp_groups_node *eg)
 
@@ -11425,12 +10814,12 @@  build_ecmp_route_flow(struct hmap *lflows, struct ovn_datapath *od,
 }
 
 static void
-add_route(struct hmap *lflows, struct ovn_datapath *od,
+add_route(struct lflow_table *lflows, struct ovn_datapath *od,
           const struct ovn_port *op, const char *lrp_addr_s,
           const char *network_s, int plen, const char *gateway,
           bool is_src_route, const uint32_t rtb_id,
           const struct ovsdb_idl_row *stage_hint, bool is_discard_route,
-          int ofs)
+          int ofs, struct lflow_ref *lflow_ref)
 {
     bool is_ipv4 = strchr(network_s, '.') ? true : false;
     struct ds match = DS_EMPTY_INITIALIZER;
@@ -11473,14 +10862,17 @@  add_route(struct hmap *lflows, struct ovn_datapath *od,
         ds_put_format(&actions, "ip.ttl--; %s", ds_cstr(&common_actions));
     }
 
-    ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_IP_ROUTING, priority,
-                            ds_cstr(&match), ds_cstr(&actions),
-                            stage_hint);
+    ovn_lflow_add_with_lflow_ref_hint(lflows, od, S_ROUTER_IN_IP_ROUTING,
+                                      priority, ds_cstr(&match),
+                                      ds_cstr(&actions), stage_hint,
+                                      lflow_ref);
     if (op && op->has_bfd) {
         ds_put_format(&match, " && udp.dst == 3784");
-        ovn_lflow_add_with_hint(lflows, op->od, S_ROUTER_IN_IP_ROUTING,
-                                priority + 1, ds_cstr(&match),
-                                ds_cstr(&common_actions), stage_hint);
+        ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
+                                          S_ROUTER_IN_IP_ROUTING,
+                                          priority + 1, ds_cstr(&match),
+                                          ds_cstr(&common_actions),
+                                          stage_hint, lflow_ref);
     }
     ds_destroy(&match);
     ds_destroy(&common_actions);
@@ -11488,7 +10880,7 @@  add_route(struct hmap *lflows, struct ovn_datapath *od,
 }
 
 static void
-build_static_route_flow(struct hmap *lflows, struct ovn_datapath *od,
+build_static_route_flow(struct lflow_table *lflows, struct ovn_datapath *od,
                         const struct hmap *lr_ports,
                         const struct parsed_route *route_)
 {
@@ -11514,7 +10906,7 @@  build_static_route_flow(struct hmap *lflows, struct ovn_datapath *od,
     add_route(lflows, route_->is_discard_route ? od : out_port->od, out_port,
               lrp_addr_s, prefix_s, route_->plen, route->nexthop,
               route_->is_src_route, route_->route_table_id, &route->header_,
-              route_->is_discard_route, ofs);
+              route_->is_discard_route, ofs, NULL);
 
     free(prefix_s);
 }
@@ -11577,7 +10969,7 @@  struct lrouter_nat_lb_flows_ctx {
 
     int prio;
 
-    struct hmap *lflows;
+    struct lflow_table *lflows;
     const struct shash *meter_groups;
 };
 
@@ -11709,7 +11101,7 @@  build_lrouter_nat_flows_for_lb(
     struct ovn_northd_lb_vip *vips_nb,
     const struct ovn_datapaths *lr_datapaths,
     const struct lr_stateful_table *lr_stateful_table,
-    struct hmap *lflows,
+    struct lflow_table *lflows,
     struct ds *match, struct ds *action,
     const struct shash *meter_groups,
     const struct chassis_features *features,
@@ -11878,7 +11270,7 @@  build_lrouter_nat_flows_for_lb(
 
 static void
 build_lswitch_flows_for_lb(struct ovn_lb_datapaths *lb_dps,
-                           struct hmap *lflows,
+                           struct lflow_table *lflows,
                            const struct shash *meter_groups,
                            const struct ovn_datapaths *ls_datapaths,
                            const struct chassis_features *features,
@@ -11939,7 +11331,7 @@  build_lswitch_flows_for_lb(struct ovn_lb_datapaths *lb_dps,
  */
 static void
 build_lrouter_defrag_flows_for_lb(struct ovn_lb_datapaths *lb_dps,
-                                  struct hmap *lflows,
+                                  struct lflow_table *lflows,
                                   const struct ovn_datapaths *lr_datapaths,
                                   struct ds *match)
 {
@@ -11965,7 +11357,7 @@  build_lrouter_defrag_flows_for_lb(struct ovn_lb_datapaths *lb_dps,
 
 static void
 build_lrouter_flows_for_lb(struct ovn_lb_datapaths *lb_dps,
-                           struct hmap *lflows,
+                           struct lflow_table *lflows,
                            const struct shash *meter_groups,
                            const struct ovn_datapaths *lr_datapaths,
                            const struct lr_stateful_table *lr_stateful_table,
@@ -12123,7 +11515,7 @@  lrouter_dnat_and_snat_is_stateless(const struct nbrec_nat *nat)
  */
 static inline void
 lrouter_nat_add_ext_ip_match(const struct ovn_datapath *od,
-                             struct hmap *lflows, struct ds *match,
+                             struct lflow_table *lflows, struct ds *match,
                              const struct nbrec_nat *nat,
                              bool is_v6, bool is_src, int cidr_bits)
 {
@@ -12190,7 +11582,7 @@  build_lrouter_arp_flow(const struct ovn_datapath *od, struct ovn_port *op,
                        const char *ip_address, const char *eth_addr,
                        struct ds *extra_match, bool drop, uint16_t priority,
                        const struct ovsdb_idl_row *hint,
-                       struct hmap *lflows)
+                       struct lflow_table *lflows)
 {
     struct ds match = DS_EMPTY_INITIALIZER;
     struct ds actions = DS_EMPTY_INITIALIZER;
@@ -12240,7 +11632,8 @@  build_lrouter_nd_flow(const struct ovn_datapath *od, struct ovn_port *op,
                       const char *sn_ip_address, const char *eth_addr,
                       struct ds *extra_match, bool drop, uint16_t priority,
                       const struct ovsdb_idl_row *hint,
-                      struct hmap *lflows, const struct shash *meter_groups)
+                      struct lflow_table *lflows,
+                      const struct shash *meter_groups)
 {
     struct ds match = DS_EMPTY_INITIALIZER;
     struct ds actions = DS_EMPTY_INITIALIZER;
@@ -12291,7 +11684,7 @@  build_lrouter_nd_flow(const struct ovn_datapath *od, struct ovn_port *op,
 static void
 build_lrouter_nat_arp_nd_flow(const struct ovn_datapath *od,
                               struct ovn_nat *nat_entry,
-                              struct hmap *lflows,
+                              struct lflow_table *lflows,
                               const struct shash *meter_groups)
 {
     struct lport_addresses *ext_addrs = &nat_entry->ext_addrs;
@@ -12314,7 +11707,7 @@  build_lrouter_nat_arp_nd_flow(const struct ovn_datapath *od,
 static void
 build_lrouter_port_nat_arp_nd_flow(struct ovn_port *op,
                                    struct ovn_nat *nat_entry,
-                                   struct hmap *lflows,
+                                   struct lflow_table *lflows,
                                    const struct shash *meter_groups)
 {
     struct lport_addresses *ext_addrs = &nat_entry->ext_addrs;
@@ -12388,7 +11781,7 @@  build_lrouter_drop_own_dest(struct ovn_port *op,
                             const struct lr_stateful_record *lr_stateful_rec,
                             enum ovn_stage stage,
                             uint16_t priority, bool drop_snat_ip,
-                            struct hmap *lflows)
+                            struct lflow_table *lflows)
 {
     struct ds match_ips = DS_EMPTY_INITIALIZER;
 
@@ -12453,7 +11846,7 @@  build_lrouter_drop_own_dest(struct ovn_port *op,
 }
 
 static void
-build_lrouter_force_snat_flows(struct hmap *lflows,
+build_lrouter_force_snat_flows(struct lflow_table *lflows,
                                const struct ovn_datapath *od,
                                const char *ip_version, const char *ip_addr,
                                const char *context)
@@ -12484,7 +11877,7 @@  build_lrouter_force_snat_flows(struct hmap *lflows,
  */
 static void
 build_lrouter_icmp_packet_toobig_admin_flows(
-        struct ovn_port *op, struct hmap *lflows,
+        struct ovn_port *op, struct lflow_table *lflows,
         struct ds *match, struct ds *actions)
 {
     ovs_assert(op->nbrp);
@@ -12509,7 +11902,7 @@  build_lrouter_icmp_packet_toobig_admin_flows(
 
 static void
 build_lswitch_icmp_packet_toobig_admin_flows(
-        struct ovn_port *op, struct hmap *lflows,
+        struct ovn_port *op, struct lflow_table *lflows,
         struct ds *match, struct ds *actions)
 {
     ovs_assert(op->nbsp);
@@ -12548,7 +11941,7 @@  build_lswitch_icmp_packet_toobig_admin_flows(
 static void
 build_lrouter_force_snat_flows_op(struct ovn_port *op,
                                   const struct lr_nat_record *lrnat_rec,
-                                  struct hmap *lflows,
+                                  struct lflow_table *lflows,
                                   struct ds *match, struct ds *actions)
 {
     ovs_assert(op->nbrp);
@@ -12620,7 +12013,7 @@  build_lrouter_force_snat_flows_op(struct ovn_port *op,
 }
 
 static void
-build_lrouter_bfd_flows(struct hmap *lflows, struct ovn_port *op,
+build_lrouter_bfd_flows(struct lflow_table *lflows, struct ovn_port *op,
                         const struct shash *meter_groups)
 {
     if (!op->has_bfd) {
@@ -12675,7 +12068,7 @@  build_lrouter_bfd_flows(struct hmap *lflows, struct ovn_port *op,
  */
 static void
 build_adm_ctrl_flows_for_lrouter(
-        struct ovn_datapath *od, struct hmap *lflows)
+        struct ovn_datapath *od, struct lflow_table *lflows)
 {
     ovs_assert(od->nbr);
 
@@ -12726,7 +12119,7 @@  build_gateway_get_l2_hdr_size(struct ovn_port *op)
  * function.
  */
 static void OVS_PRINTF_FORMAT(9, 10)
-build_gateway_mtu_flow(struct hmap *lflows, struct ovn_port *op,
+build_gateway_mtu_flow(struct lflow_table *lflows, struct ovn_port *op,
                        enum ovn_stage stage, uint16_t prio_low,
                        uint16_t prio_high, struct ds *match,
                        struct ds *actions, const struct ovsdb_idl_row *hint,
@@ -12787,7 +12180,7 @@  consider_l3dgw_port_is_centralized(struct ovn_port *op)
  */
 static void
 build_adm_ctrl_flows_for_lrouter_port(
-        struct ovn_port *op, struct hmap *lflows,
+        struct ovn_port *op, struct lflow_table *lflows,
         struct ds *match, struct ds *actions)
 {
     ovs_assert(op->nbrp);
@@ -12841,7 +12234,7 @@  build_adm_ctrl_flows_for_lrouter_port(
  * lflows for logical routers. */
 static void
 build_neigh_learning_flows_for_lrouter(
-        struct ovn_datapath *od, struct hmap *lflows,
+        struct ovn_datapath *od, struct lflow_table *lflows,
         struct ds *match, struct ds *actions,
         const struct shash *meter_groups)
 {
@@ -12972,7 +12365,7 @@  build_neigh_learning_flows_for_lrouter(
  * for logical router ports. */
 static void
 build_neigh_learning_flows_for_lrouter_port(
-        struct ovn_port *op, struct hmap *lflows,
+        struct ovn_port *op, struct lflow_table *lflows,
         struct ds *match, struct ds *actions)
 {
     ovs_assert(op->nbrp);
@@ -13034,7 +12427,7 @@  build_neigh_learning_flows_for_lrouter_port(
  * Adv (RA) options and response. */
 static void
 build_ND_RA_flows_for_lrouter_port(
-        struct ovn_port *op, struct hmap *lflows,
+        struct ovn_port *op, struct lflow_table *lflows,
         struct ds *match, struct ds *actions,
         const struct shash *meter_groups)
 {
@@ -13149,7 +12542,8 @@  build_ND_RA_flows_for_lrouter_port(
 /* Logical router ingress table ND_RA_OPTIONS & ND_RA_RESPONSE: RS
  * responder, by default goto next. (priority 0). */
 static void
-build_ND_RA_flows_for_lrouter(struct ovn_datapath *od, struct hmap *lflows)
+build_ND_RA_flows_for_lrouter(struct ovn_datapath *od,
+                              struct lflow_table *lflows)
 {
     ovs_assert(od->nbr);
     ovn_lflow_add(lflows, od, S_ROUTER_IN_ND_RA_OPTIONS, 0, "1", "next;");
@@ -13160,7 +12554,7 @@  build_ND_RA_flows_for_lrouter(struct ovn_datapath *od, struct hmap *lflows)
  * by default goto next. (priority 0). */
 static void
 build_ip_routing_pre_flows_for_lrouter(struct ovn_datapath *od,
-                                       struct hmap *lflows)
+                                       struct lflow_table *lflows)
 {
     ovs_assert(od->nbr);
     ovn_lflow_add(lflows, od, S_ROUTER_IN_IP_ROUTING_PRE, 0, "1",
@@ -13188,21 +12582,23 @@  build_ip_routing_pre_flows_for_lrouter(struct ovn_datapath *od,
  */
 static void
 build_ip_routing_flows_for_lrp(
-        struct ovn_port *op, struct hmap *lflows)
+        struct ovn_port *op, struct lflow_table *lflows)
 {
     ovs_assert(op->nbrp);
     for (int i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) {
         add_route(lflows, op->od, op, op->lrp_networks.ipv4_addrs[i].addr_s,
                   op->lrp_networks.ipv4_addrs[i].network_s,
                   op->lrp_networks.ipv4_addrs[i].plen, NULL, false, 0,
-                  &op->nbrp->header_, false, ROUTE_PRIO_OFFSET_CONNECTED);
+                  &op->nbrp->header_, false, ROUTE_PRIO_OFFSET_CONNECTED,
+                  NULL);
     }
 
     for (int i = 0; i < op->lrp_networks.n_ipv6_addrs; i++) {
         add_route(lflows, op->od, op, op->lrp_networks.ipv6_addrs[i].addr_s,
                   op->lrp_networks.ipv6_addrs[i].network_s,
                   op->lrp_networks.ipv6_addrs[i].plen, NULL, false, 0,
-                  &op->nbrp->header_, false, ROUTE_PRIO_OFFSET_CONNECTED);
+                  &op->nbrp->header_, false, ROUTE_PRIO_OFFSET_CONNECTED,
+                  NULL);
     }
 }
 
@@ -13215,8 +12611,9 @@  build_ip_routing_flows_for_lrp(
  */
 static void
 build_ip_routing_flows_for_router_type_lsp(
-        struct ovn_port *op, const struct lr_stateful_table *lr_stateful_table,
-        const struct hmap *lr_ports, struct hmap *lflows)
+    struct ovn_port *op, const struct lr_stateful_table *lr_stateful_table,
+    const struct hmap *lr_ports, struct lflow_table *lflows,
+    struct lflow_ref *lflow_ref)
 {
     ovs_assert(op->nbsp);
     if (!lsp_is_router(op->nbsp)) {
@@ -13252,7 +12649,8 @@  build_ip_routing_flows_for_router_type_lsp(
                             laddrs->ipv4_addrs[k].network_s,
                             laddrs->ipv4_addrs[k].plen, NULL, false, 0,
                             &peer->nbrp->header_, false,
-                            ROUTE_PRIO_OFFSET_CONNECTED);
+                            ROUTE_PRIO_OFFSET_CONNECTED,
+                            lflow_ref);
                 }
             }
             destroy_routable_addresses(&ra);
@@ -13263,7 +12661,7 @@  build_ip_routing_flows_for_router_type_lsp(
 static void
 build_static_route_flows_for_lrouter(
         struct ovn_datapath *od, const struct chassis_features *features,
-        struct hmap *lflows, const struct hmap *lr_ports,
+        struct lflow_table *lflows, const struct hmap *lr_ports,
         const struct hmap *bfd_connections)
 {
     ovs_assert(od->nbr);
@@ -13327,7 +12725,7 @@  build_static_route_flows_for_lrouter(
  */
 static void
 build_mcast_lookup_flows_for_lrouter(
-        struct ovn_datapath *od, struct hmap *lflows,
+        struct ovn_datapath *od, struct lflow_table *lflows,
         struct ds *match, struct ds *actions)
 {
     ovs_assert(od->nbr);
@@ -13428,7 +12826,7 @@  build_mcast_lookup_flows_for_lrouter(
  * advances to the next table for ARP/ND resolution. */
 static void
 build_ingress_policy_flows_for_lrouter(
-        struct ovn_datapath *od, struct hmap *lflows,
+        struct ovn_datapath *od, struct lflow_table *lflows,
         const struct hmap *lr_ports)
 {
     ovs_assert(od->nbr);
@@ -13462,7 +12860,7 @@  build_ingress_policy_flows_for_lrouter(
 /* Local router ingress table ARP_RESOLVE: ARP Resolution. */
 static void
 build_arp_resolve_flows_for_lrouter(
-        struct ovn_datapath *od, struct hmap *lflows)
+        struct ovn_datapath *od, struct lflow_table *lflows)
 {
     ovs_assert(od->nbr);
     /* Multicast packets already have the outport set so just advance to
@@ -13480,10 +12878,12 @@  build_arp_resolve_flows_for_lrouter(
 }
 
 static void
-routable_addresses_to_lflows(struct hmap *lflows, struct ovn_port *router_port,
+routable_addresses_to_lflows(struct lflow_table *lflows,
+                             struct ovn_port *router_port,
                              struct ovn_port *peer,
                              const struct lr_stateful_record *lr_stateful_rec,
-                             struct ds *match, struct ds *actions)
+                             struct ds *match, struct ds *actions,
+                             struct lflow_ref *lflow_ref)
 {
     struct ovn_port_routable_addresses ra =
         get_op_routable_addresses(router_port, lr_stateful_rec);
@@ -13507,8 +12907,9 @@  routable_addresses_to_lflows(struct hmap *lflows, struct ovn_port *router_port,
 
         ds_clear(actions);
         ds_put_format(actions, "eth.dst = %s; next;", ra.laddrs[i].ea_s);
-        ovn_lflow_add(lflows, peer->od, S_ROUTER_IN_ARP_RESOLVE, 100,
-                      ds_cstr(match), ds_cstr(actions));
+        ovn_lflow_add_with_lflow_ref(lflows, peer->od, S_ROUTER_IN_ARP_RESOLVE,
+                                     100, ds_cstr(match), ds_cstr(actions),
+                                     lflow_ref);
     }
     destroy_routable_addresses(&ra);
 }
@@ -13525,7 +12926,8 @@  routable_addresses_to_lflows(struct hmap *lflows, struct ovn_port *router_port,
 
 /* This function adds ARP resolve flows related to a LRP. */
 static void
-build_arp_resolve_flows_for_lrp(struct ovn_port *op, struct hmap *lflows,
+build_arp_resolve_flows_for_lrp(struct ovn_port *op,
+                                struct lflow_table *lflows,
                                 struct ds *match, struct ds *actions)
 {
     ovs_assert(op->nbrp);
@@ -13600,7 +13002,7 @@  build_arp_resolve_flows_for_lrp(struct ovn_port *op, struct hmap *lflows,
 /* This function adds ARP resolve flows related to a LSP. */
 static void
 build_arp_resolve_flows_for_lsp(
-        struct ovn_port *op, struct hmap *lflows,
+        struct ovn_port *op, struct lflow_table *lflows,
         const struct hmap *lr_ports,
         struct ds *match, struct ds *actions)
 {
@@ -13642,11 +13044,12 @@  build_arp_resolve_flows_for_lsp(
 
                     ds_clear(actions);
                     ds_put_format(actions, "eth.dst = %s; next;", ea_s);
-                    ovn_lflow_add_with_hint(lflows, peer->od,
+                    ovn_lflow_add_with_lflow_ref_hint(lflows, peer->od,
                                             S_ROUTER_IN_ARP_RESOLVE, 100,
                                             ds_cstr(match),
                                             ds_cstr(actions),
-                                            &op->nbsp->header_);
+                                            &op->nbsp->header_,
+                                            op->lflow_ref);
                 }
             }
 
@@ -13673,11 +13076,12 @@  build_arp_resolve_flows_for_lsp(
 
                     ds_clear(actions);
                     ds_put_format(actions, "eth.dst = %s; next;", ea_s);
-                    ovn_lflow_add_with_hint(lflows, peer->od,
+                    ovn_lflow_add_with_lflow_ref_hint(lflows, peer->od,
                                             S_ROUTER_IN_ARP_RESOLVE, 100,
                                             ds_cstr(match),
                                             ds_cstr(actions),
-                                            &op->nbsp->header_);
+                                            &op->nbsp->header_,
+                                            op->lflow_ref);
                 }
             }
         }
@@ -13721,10 +13125,11 @@  build_arp_resolve_flows_for_lsp(
                 ds_clear(actions);
                 ds_put_format(actions, "eth.dst = %s; next;",
                                           router_port->lrp_networks.ea_s);
-                ovn_lflow_add_with_hint(lflows, peer->od,
+                ovn_lflow_add_with_lflow_ref_hint(lflows, peer->od,
                                         S_ROUTER_IN_ARP_RESOLVE, 100,
                                         ds_cstr(match), ds_cstr(actions),
-                                        &op->nbsp->header_);
+                                        &op->nbsp->header_,
+                                        op->lflow_ref);
             }
 
             if (router_port->lrp_networks.n_ipv6_addrs) {
@@ -13737,10 +13142,11 @@  build_arp_resolve_flows_for_lsp(
                 ds_clear(actions);
                 ds_put_format(actions, "eth.dst = %s; next;",
                               router_port->lrp_networks.ea_s);
-                ovn_lflow_add_with_hint(lflows, peer->od,
+                ovn_lflow_add_with_lflow_ref_hint(lflows, peer->od,
                                         S_ROUTER_IN_ARP_RESOLVE, 100,
                                         ds_cstr(match), ds_cstr(actions),
-                                        &op->nbsp->header_);
+                                        &op->nbsp->header_,
+                                        op->lflow_ref);
             }
         }
     }
@@ -13748,10 +13154,11 @@  build_arp_resolve_flows_for_lsp(
 
 static void
 build_arp_resolve_flows_for_lsp_routable_addresses(
-        struct ovn_port *op, struct hmap *lflows,
-        const struct hmap *lr_ports,
-        const struct lr_stateful_table *lr_stateful_table,
-        struct ds *match, struct ds *actions)
+    struct ovn_port *op, struct lflow_table *lflows,
+    const struct hmap *lr_ports,
+    const struct lr_stateful_table *lr_stateful_table,
+    struct ds *match, struct ds *actions,
+    struct lflow_ref *lflow_ref)
 {
     if (!lsp_is_router(op->nbsp)) {
         return;
@@ -13785,13 +13192,15 @@  build_arp_resolve_flows_for_lsp_routable_addresses(
             lr_stateful_rec = lr_stateful_table_find_by_index(
                 lr_stateful_table, router_port->od->index);
             routable_addresses_to_lflows(lflows, router_port, peer,
-                                         lr_stateful_rec, match, actions);
+                                         lr_stateful_rec, match, actions,
+                                         lflow_ref);
         }
     }
 }
 
 static void
-build_icmperr_pkt_big_flows(struct ovn_port *op, int mtu, struct hmap *lflows,
+build_icmperr_pkt_big_flows(struct ovn_port *op, int mtu,
+                            struct lflow_table *lflows,
                             const struct shash *meter_groups, struct ds *match,
                             struct ds *actions, enum ovn_stage stage,
                             struct ovn_port *outport)
@@ -13884,7 +13293,7 @@  build_icmperr_pkt_big_flows(struct ovn_port *op, int mtu, struct hmap *lflows,
 
 static void
 build_check_pkt_len_flows_for_lrp(struct ovn_port *op,
-                                  struct hmap *lflows,
+                                  struct lflow_table *lflows,
                                   const struct hmap *lr_ports,
                                   const struct shash *meter_groups,
                                   struct ds *match,
@@ -13934,7 +13343,7 @@  build_check_pkt_len_flows_for_lrp(struct ovn_port *op,
  * */
 static void
 build_check_pkt_len_flows_for_lrouter(
-        struct ovn_datapath *od, struct hmap *lflows,
+        struct ovn_datapath *od, struct lflow_table *lflows,
         const struct hmap *lr_ports,
         struct ds *match, struct ds *actions,
         const struct shash *meter_groups)
@@ -13961,7 +13370,7 @@  build_check_pkt_len_flows_for_lrouter(
 /* Logical router ingress table GW_REDIRECT: Gateway redirect. */
 static void
 build_gateway_redirect_flows_for_lrouter(
-        struct ovn_datapath *od, struct hmap *lflows,
+        struct ovn_datapath *od, struct lflow_table *lflows,
         struct ds *match, struct ds *actions)
 {
     ovs_assert(od->nbr);
@@ -14005,8 +13414,8 @@  build_gateway_redirect_flows_for_lrouter(
 /* Logical router ingress table GW_REDIRECT: Gateway redirect. */
 static void
 build_lr_gateway_redirect_flows_for_nats(
-    const struct ovn_datapath *od, const struct lr_nat_record *lrnat_rec,
-    struct hmap *lflows, struct ds *match, struct ds *actions)
+        const struct ovn_datapath *od, const struct lr_nat_record *lrnat_rec,
+        struct lflow_table *lflows, struct ds *match, struct ds *actions)
 {
     ovs_assert(od->nbr);
     for (size_t i = 0; i < od->n_l3dgw_ports; i++) {
@@ -14075,7 +13484,7 @@  build_lr_gateway_redirect_flows_for_nats(
  * and sends an ARP/IPv6 NA request (priority 100). */
 static void
 build_arp_request_flows_for_lrouter(
-        struct ovn_datapath *od, struct hmap *lflows,
+        struct ovn_datapath *od, struct lflow_table *lflows,
         struct ds *match, struct ds *actions,
         const struct shash *meter_groups)
 {
@@ -14153,7 +13562,7 @@  build_arp_request_flows_for_lrouter(
  */
 static void
 build_egress_delivery_flows_for_lrouter_port(
-        struct ovn_port *op, struct hmap *lflows,
+        struct ovn_port *op, struct lflow_table *lflows,
         struct ds *match, struct ds *actions)
 {
     ovs_assert(op->nbrp);
@@ -14195,7 +13604,7 @@  build_egress_delivery_flows_for_lrouter_port(
 
 static void
 build_misc_local_traffic_drop_flows_for_lrouter(
-        struct ovn_datapath *od, struct hmap *lflows)
+        struct ovn_datapath *od, struct lflow_table *lflows)
 {
     ovs_assert(od->nbr);
     /* Allow IGMP and MLD packets (with TTL = 1) if the router is
@@ -14277,7 +13686,7 @@  build_misc_local_traffic_drop_flows_for_lrouter(
 
 static void
 build_dhcpv6_reply_flows_for_lrouter_port(
-        struct ovn_port *op, struct hmap *lflows,
+        struct ovn_port *op, struct lflow_table *lflows,
         struct ds *match)
 {
     ovs_assert(op->nbrp);
@@ -14297,7 +13706,7 @@  build_dhcpv6_reply_flows_for_lrouter_port(
 
 static void
 build_ipv6_input_flows_for_lrouter_port(
-        struct ovn_port *op, struct hmap *lflows,
+        struct ovn_port *op, struct lflow_table *lflows,
         struct ds *match, struct ds *actions,
         const struct shash *meter_groups)
 {
@@ -14466,7 +13875,7 @@  build_ipv6_input_flows_for_lrouter_port(
 static void
 build_lrouter_arp_nd_for_datapath(const struct ovn_datapath *od,
                                   const struct lr_nat_record *lrnat_rec,
-                                  struct hmap *lflows,
+                                  struct lflow_table *lflows,
                                   const struct shash *meter_groups)
 {
     ovs_assert(od->nbr);
@@ -14518,7 +13927,7 @@  build_lrouter_arp_nd_for_datapath(const struct ovn_datapath *od,
 /* Logical router ingress table 3: IP Input for IPv4. */
 static void
 build_lrouter_ipv4_ip_input(struct ovn_port *op,
-                            struct hmap *lflows,
+                            struct lflow_table *lflows,
                             struct ds *match, struct ds *actions,
                             const struct shash *meter_groups)
 {
@@ -14722,7 +14131,7 @@  build_lrouter_ipv4_ip_input(struct ovn_port *op,
 /* Logical router ingress table 3: IP Input for IPv4. */
 static void
 build_lrouter_ipv4_ip_input_for_lbnats(
-    struct ovn_port *op, struct hmap *lflows,
+    struct ovn_port *op, struct lflow_table *lflows,
     const struct lr_stateful_record *lr_stateful_rec,
     struct ds *match, const struct shash *meter_groups)
 {
@@ -14842,7 +14251,7 @@  build_lrouter_in_unsnat_match(const struct ovn_datapath *od,
 }
 
 static void
-build_lrouter_in_unsnat_stateless_flow(struct hmap *lflows,
+build_lrouter_in_unsnat_stateless_flow(struct lflow_table *lflows,
                                        const struct ovn_datapath *od,
                                        const struct nbrec_nat *nat,
                                        struct ds *match,
@@ -14864,7 +14273,7 @@  build_lrouter_in_unsnat_stateless_flow(struct hmap *lflows,
 }
 
 static void
-build_lrouter_in_unsnat_in_czone_flow(struct hmap *lflows,
+build_lrouter_in_unsnat_in_czone_flow(struct lflow_table *lflows,
                                       const struct ovn_datapath *od,
                                       const struct nbrec_nat *nat,
                                       struct ds *match, bool distributed_nat,
@@ -14898,7 +14307,7 @@  build_lrouter_in_unsnat_in_czone_flow(struct hmap *lflows,
 }
 
 static void
-build_lrouter_in_unsnat_flow(struct hmap *lflows,
+build_lrouter_in_unsnat_flow(struct lflow_table *lflows,
                              const struct ovn_datapath *od,
                              const struct nbrec_nat *nat, struct ds *match,
                              bool distributed_nat, bool is_v6,
@@ -14920,7 +14329,7 @@  build_lrouter_in_unsnat_flow(struct hmap *lflows,
 }
 
 static void
-build_lrouter_in_dnat_flow(struct hmap *lflows,
+build_lrouter_in_dnat_flow(struct lflow_table *lflows,
                            const struct ovn_datapath *od,
                            const struct lr_nat_record *lrnat_rec,
                            const struct nbrec_nat *nat, struct ds *match,
@@ -14992,7 +14401,7 @@  build_lrouter_in_dnat_flow(struct hmap *lflows,
 }
 
 static void
-build_lrouter_out_undnat_flow(struct hmap *lflows,
+build_lrouter_out_undnat_flow(struct lflow_table *lflows,
                               const struct ovn_datapath *od,
                               const struct nbrec_nat *nat, struct ds *match,
                               struct ds *actions, bool distributed_nat,
@@ -15043,7 +14452,7 @@  build_lrouter_out_undnat_flow(struct hmap *lflows,
 }
 
 static void
-build_lrouter_out_is_dnat_local(struct hmap *lflows,
+build_lrouter_out_is_dnat_local(struct lflow_table *lflows,
                                 const struct ovn_datapath *od,
                                 const struct nbrec_nat *nat, struct ds *match,
                                 struct ds *actions, bool distributed_nat,
@@ -15074,7 +14483,7 @@  build_lrouter_out_is_dnat_local(struct hmap *lflows,
 }
 
 static void
-build_lrouter_out_snat_match(struct hmap *lflows,
+build_lrouter_out_snat_match(struct lflow_table *lflows,
                              const struct ovn_datapath *od,
                              const struct nbrec_nat *nat, struct ds *match,
                              bool distributed_nat, int cidr_bits, bool is_v6,
@@ -15103,7 +14512,7 @@  build_lrouter_out_snat_match(struct hmap *lflows,
 }
 
 static void
-build_lrouter_out_snat_stateless_flow(struct hmap *lflows,
+build_lrouter_out_snat_stateless_flow(struct lflow_table *lflows,
                                       const struct ovn_datapath *od,
                                       const struct nbrec_nat *nat,
                                       struct ds *match, struct ds *actions,
@@ -15146,7 +14555,7 @@  build_lrouter_out_snat_stateless_flow(struct hmap *lflows,
 }
 
 static void
-build_lrouter_out_snat_in_czone_flow(struct hmap *lflows,
+build_lrouter_out_snat_in_czone_flow(struct lflow_table *lflows,
                                      const struct ovn_datapath *od,
                                      const struct nbrec_nat *nat,
                                      struct ds *match,
@@ -15208,7 +14617,7 @@  build_lrouter_out_snat_in_czone_flow(struct hmap *lflows,
 }
 
 static void
-build_lrouter_out_snat_flow(struct hmap *lflows,
+build_lrouter_out_snat_flow(struct lflow_table *lflows,
                             const struct ovn_datapath *od,
                             const struct nbrec_nat *nat, struct ds *match,
                             struct ds *actions, bool distributed_nat,
@@ -15254,7 +14663,7 @@  build_lrouter_out_snat_flow(struct hmap *lflows,
 }
 
 static void
-build_lrouter_ingress_nat_check_pkt_len(struct hmap *lflows,
+build_lrouter_ingress_nat_check_pkt_len(struct lflow_table *lflows,
                                         const struct nbrec_nat *nat,
                                         const struct ovn_datapath *od,
                                         bool is_v6, struct ds *match,
@@ -15326,7 +14735,7 @@  build_lrouter_ingress_nat_check_pkt_len(struct hmap *lflows,
 }
 
 static void
-build_lrouter_ingress_flow(struct hmap *lflows,
+build_lrouter_ingress_flow(struct lflow_table *lflows,
                            const struct ovn_datapath *od,
                            const struct nbrec_nat *nat, struct ds *match,
                            struct ds *actions, struct eth_addr mac,
@@ -15506,7 +14915,7 @@  lrouter_check_nat_entry(const struct ovn_datapath *od,
 
 /* NAT, Defrag and load balancing. */
 static void build_lr_nat_defrag_and_lb_default_flows(struct ovn_datapath *od,
-                                                     struct hmap *lflows)
+                                                struct lflow_table *lflows)
 {
     ovs_assert(od->nbr);
 
@@ -15532,7 +14941,7 @@  static void build_lr_nat_defrag_and_lb_default_flows(struct ovn_datapath *od,
 static void
 build_lrouter_nat_defrag_and_lb(
     const struct lr_stateful_record *lr_stateful_rec,
-    const struct ovn_datapath *od, struct hmap *lflows,
+    const struct ovn_datapath *od, struct lflow_table *lflows,
     const struct hmap *ls_ports, const struct hmap *lr_ports,
     struct ds *match, struct ds *actions,
     const struct shash *meter_groups,
@@ -15911,31 +15320,30 @@  build_lsp_lflows_for_lbnats(struct ovn_port *lsp,
                             const struct lr_stateful_record *lr_stateful_rec,
                             const struct lr_stateful_table *lr_stateful_table,
                             const struct hmap *lr_ports,
-                            struct hmap *lflows,
+                            struct lflow_table *lflows,
                             struct ds *match,
-                            struct ds *actions)
+                            struct ds *actions,
+                            struct lflow_ref *lflow_ref)
 {
     ovs_assert(lsp->nbsp);
     ovs_assert(lsp->peer);
-    start_collecting_lflows();
     build_lswitch_rport_arp_req_flows_for_lbnats(
         lsp->peer, lr_stateful_rec, lsp->od, lsp,
-        lflows, &lsp->nbsp->header_);
+        lflows, &lsp->nbsp->header_, lflow_ref);
     build_ip_routing_flows_for_router_type_lsp(lsp, lr_stateful_table,
-                                               lr_ports, lflows);
+                                               lr_ports, lflows,
+                                               lflow_ref);
     build_arp_resolve_flows_for_lsp_routable_addresses(
-        lsp, lflows, lr_ports, lr_stateful_table, match, actions);
+        lsp, lflows, lr_ports, lr_stateful_table, match, actions, lflow_ref);
     build_lswitch_ip_unicast_lookup_for_nats(lsp, lr_stateful_rec, lflows,
-                                             match, actions);
-    link_ovn_port_to_lflows(lsp, &collected_lflows);
-    end_collecting_lflows();
+                                             match, actions, lflow_ref);
 }
 
 static void
 build_lbnat_lflows_iterate_by_lsp(
     struct ovn_port *op, const struct lr_stateful_table *lr_stateful_table,
     const struct hmap *lr_ports, struct ds *match, struct ds *actions,
-    struct hmap *lflows)
+    struct lflow_table *lflows)
 {
     ovs_assert(op->nbsp);
 
@@ -15948,8 +15356,9 @@  build_lbnat_lflows_iterate_by_lsp(
                                                       op->peer->od->index);
     ovs_assert(lr_stateful_rec);
 
-    build_lsp_lflows_for_lbnats(op, lr_stateful_rec, lr_stateful_table,
-                                lr_ports, lflows, match, actions);
+    build_lsp_lflows_for_lbnats(op, lr_stateful_rec,
+                                lr_stateful_table, lr_ports, lflows,
+                                match, actions, op->stateful_lflow_ref);
 }
 
 static void
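The hunk above threads an explicit 'struct lflow_ref *' through the helpers
that need incremental tracking, while untracked call sites keep the short
ovn_lflow_add() form.  A minimal sketch of how the two entry points are
assumed to relate; the real definitions live in the new lflow-mgr module,
which is not part of this hunk, so the body below is illustrative only:

    /* Illustrative sketch only -- not the actual lflow-mgr implementation. */
    static inline void
    ovn_lflow_add_untracked_sketch(struct lflow_table *lflows,
                                   const struct ovn_datapath *od,
                                   enum ovn_stage stage, uint16_t priority,
                                   const char *match, const char *actions)
    {
        /* Assumption: a NULL lflow_ref means "do not track this lflow for
         * incremental processing"; a non-NULL ref links the lflow into that
         * ref so it can later be cleared and re-synced by the caller. */
        ovn_lflow_add_with_lflow_ref(lflows, od, stage, priority,
                                     match, actions, NULL);
    }
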
@@ -15957,7 +15366,7 @@  build_lrp_lflows_for_lbnats(struct ovn_port *op,
                             const struct lr_stateful_record *lr_stateful_rec,
                             const struct shash *meter_groups,
                             struct ds *match, struct ds *actions,
-                            struct hmap *lflows)
+                            struct lflow_table *lflows)
 {
     /* Drop IP traffic destined to router owned IPs except if the IP is
      * also a SNAT IP. Those are dropped later, in stage
@@ -15992,7 +15401,7 @@  static void
 build_lbnat_lflows_iterate_by_lrp(
     struct ovn_port *op, const struct lr_stateful_table *lr_stateful_table,
     const struct shash *meter_groups, struct ds *match,
-    struct ds *actions, struct hmap *lflows)
+    struct ds *actions, struct lflow_table *lflows)
 {
     ovs_assert(op->nbrp);
 
@@ -16008,7 +15417,7 @@  build_lbnat_lflows_iterate_by_lrp(
 static void
 build_lr_stateful_flows(const struct lr_stateful_record *lr_stateful_rec,
                         const struct ovn_datapaths *lr_datapaths,
-                        struct hmap *lflows,
+                        struct lflow_table *lflows,
                         const struct hmap *ls_ports,
                         const struct hmap *lr_ports,
                         struct ds *match,
@@ -16036,7 +15445,7 @@  build_ls_stateful_flows(const struct ls_stateful_record *ls_stateful_rec,
                         const struct ls_port_group_table *ls_pgs,
                         const struct chassis_features *features,
                         const struct shash *meter_groups,
-                        struct hmap *lflows)
+                        struct lflow_table *lflows)
 {
     build_ls_stateful_rec_pre_acls(ls_stateful_rec, od, ls_pgs, lflows);
     build_ls_stateful_rec_pre_lb(ls_stateful_rec, od, lflows);
@@ -16053,7 +15462,7 @@  struct lswitch_flow_build_info {
     const struct ls_port_group_table *ls_port_groups;
     const struct lr_stateful_table *lr_stateful_table;
     const struct ls_stateful_table *ls_stateful_table;
-    struct hmap *lflows;
+    struct lflow_table *lflows;
     struct hmap *igmp_groups;
     const struct shash *meter_groups;
     const struct hmap *lb_dps_map;
@@ -16136,10 +15545,9 @@  build_lswitch_and_lrouter_iterate_by_lsp(struct ovn_port *op,
                                          const struct shash *meter_groups,
                                          struct ds *match,
                                          struct ds *actions,
-                                         struct hmap *lflows)
+                                         struct lflow_table *lflows)
 {
     ovs_assert(op->nbsp);
-    start_collecting_lflows();
 
     /* Build Logical Switch Flows. */
     build_lswitch_port_sec_op(op, lflows, actions, match);
@@ -16155,9 +15563,6 @@  build_lswitch_and_lrouter_iterate_by_lsp(struct ovn_port *op,
 
     /* Build Logical Router Flows. */
     build_arp_resolve_flows_for_lsp(op, lflows, lr_ports, match, actions);
-
-    link_ovn_port_to_lflows(op, &collected_lflows);
-    end_collecting_lflows();
 }
 
 /* Helper function to combine all lflow generation which is iterated by logical
@@ -16203,6 +15608,8 @@  build_lflows_thread(void *arg)
     struct ovn_port *op;
     int bnum;
 
+    /* Note:  lflow_ref is not thread safe.  Ensure that op->lflow_ref
+     * is not accessed by multiple threads at the same time. */
     while (!stop_parallel_processing()) {
         wait_for_work(control);
         lsi = (struct lswitch_flow_build_info *) control->data;
@@ -16372,7 +15779,7 @@  noop_callback(struct worker_pool *pool OVS_UNUSED,
     /* Do nothing */
 }
 
-/* Fixes the hmap size (hmap->n) after parallel building the lflow_map when
+/* Fixes the hmap size (hmap->n) after parallel building the lflow_table when
  * dp-groups is enabled, because in that case all threads are updating the
  * global lflow hmap. Although the lflow_hash_lock prevents currently inserting
  * to the same hash bucket, the hmap->n is updated currently by all threads and
@@ -16382,7 +15789,7 @@  noop_callback(struct worker_pool *pool OVS_UNUSED,
  * after the worker threads complete the tasks in each iteration before any
  * future operations on the lflow map. */
 static void
-fix_flow_map_size(struct hmap *lflow_map,
+fix_flow_table_size(struct lflow_table *lflow_table,
                   struct lswitch_flow_build_info *lsiv,
                   size_t n_lsiv)
 {
@@ -16390,7 +15797,7 @@  fix_flow_map_size(struct hmap *lflow_map,
     for (size_t i = 0; i < n_lsiv; i++) {
         total += lsiv[i].thread_lflow_counter;
     }
-    lflow_map->n = total;
+    lflow_table_set_size(lflow_table, total);
 }
 
 static void
@@ -16402,7 +15809,7 @@  build_lswitch_and_lrouter_flows(
     const struct ls_port_group_table *ls_pgs,
     const struct lr_stateful_table *lr_stateful_table,
     const struct ls_stateful_table *ls_stateful_table,
-    struct hmap *lflows,
+    struct lflow_table *lflows,
     struct hmap *igmp_groups,
     const struct shash *meter_groups,
     const struct hmap *lb_dps_map,
@@ -16449,7 +15856,7 @@  build_lswitch_and_lrouter_flows(
 
         /* Run thread pool. */
         run_pool_callback(build_lflows_pool, NULL, NULL, noop_callback);
-        fix_flow_map_size(lflows, lsiv, build_lflows_pool->size);
+        fix_flow_table_size(lflows, lsiv, build_lflows_pool->size);
 
         for (index = 0; index < build_lflows_pool->size; index++) {
             ds_destroy(&lsiv[index].match);
@@ -16570,24 +15977,6 @@  build_lswitch_and_lrouter_flows(
     free(svc_check_match);
 }
 
-static ssize_t max_seen_lflow_size = 128;
-
-void
-lflow_data_init(struct lflow_data *data)
-{
-    fast_hmap_size_for(&data->lflows, max_seen_lflow_size);
-}
-
-void
-lflow_data_destroy(struct lflow_data *data)
-{
-    struct ovn_lflow *lflow;
-    HMAP_FOR_EACH_SAFE (lflow, hmap_node, &data->lflows) {
-        ovn_lflow_destroy(&data->lflows, lflow);
-    }
-    hmap_destroy(&data->lflows);
-}
-
 void run_update_worker_pool(int n_threads)
 {
     /* If number of threads has been updated (or initially set),
@@ -16633,7 +16022,7 @@  create_sb_multicast_group(struct ovsdb_idl_txn *ovnsb_txn,
  * constructing their contents based on the OVN_NB database. */
 void build_lflows(struct ovsdb_idl_txn *ovnsb_txn,
                   struct lflow_input *input_data,
-                  struct hmap *lflows)
+                  struct lflow_table *lflows)
 {
     struct hmap mcast_groups;
     struct hmap igmp_groups;
@@ -16664,281 +16053,26 @@  void build_lflows(struct ovsdb_idl_txn *ovnsb_txn,
     }
 
     /* Parallel build may result in a suboptimal hash. Resize the
-     * hash to a correct size before doing lookups */
-
-    hmap_expand(lflows);
-
-    if (hmap_count(lflows) > max_seen_lflow_size) {
-        max_seen_lflow_size = hmap_count(lflows);
-    }
-
-    stopwatch_start(LFLOWS_DP_GROUPS_STOPWATCH_NAME, time_msec());
-    /* Collecting all unique datapath groups. */
-    struct hmap ls_dp_groups = HMAP_INITIALIZER(&ls_dp_groups);
-    struct hmap lr_dp_groups = HMAP_INITIALIZER(&lr_dp_groups);
-    struct hmap single_dp_lflows;
-
-    /* Single dp_flows will never grow bigger than lflows,
-     * thus the two hmaps will remain the same size regardless
-     * of how many elements we remove from lflows and add to
-     * single_dp_lflows.
-     * Note - lflows is always sized for at least 128 flows.
-     */
-    fast_hmap_size_for(&single_dp_lflows, max_seen_lflow_size);
-
-    struct ovn_lflow *lflow;
-    HMAP_FOR_EACH_SAFE (lflow, hmap_node, lflows) {
-        struct ovn_datapath **datapaths_array;
-        size_t n_datapaths;
-
-        if (ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH) {
-            n_datapaths = ods_size(input_data->ls_datapaths);
-            datapaths_array = input_data->ls_datapaths->array;
-        } else {
-            n_datapaths = ods_size(input_data->lr_datapaths);
-            datapaths_array = input_data->lr_datapaths->array;
-        }
-
-        lflow->n_ods = bitmap_count1(lflow->dpg_bitmap, n_datapaths);
-
-        ovs_assert(lflow->n_ods);
-
-        if (lflow->n_ods == 1) {
-            /* There is only one datapath, so it should be moved out of the
-             * group to a single 'od'. */
-            size_t index = bitmap_scan(lflow->dpg_bitmap, true, 0,
-                                       n_datapaths);
-
-            bitmap_set0(lflow->dpg_bitmap, index);
-            lflow->od = datapaths_array[index];
-
-            /* Logical flow should be re-hashed to allow lookups. */
-            uint32_t hash = hmap_node_hash(&lflow->hmap_node);
-            /* Remove from lflows. */
-            hmap_remove(lflows, &lflow->hmap_node);
-            hash = ovn_logical_flow_hash_datapath(&lflow->od->sb->header_.uuid,
-                                                  hash);
-            /* Add to single_dp_lflows. */
-            hmap_insert_fast(&single_dp_lflows, &lflow->hmap_node, hash);
-        }
-    }
-
-    /* Merge multiple and single dp hashes. */
-
-    fast_hmap_merge(lflows, &single_dp_lflows);
-
-    hmap_destroy(&single_dp_lflows);
-
-    stopwatch_stop(LFLOWS_DP_GROUPS_STOPWATCH_NAME, time_msec());
+     * lflow map to a correct size before doing lookups */
+    lflow_table_expand(lflows);
+
     stopwatch_start(LFLOWS_TO_SB_STOPWATCH_NAME, time_msec());
-
-    struct hmap lflows_temp = HMAP_INITIALIZER(&lflows_temp);
-    /* Push changes to the Logical_Flow table to database. */
-    const struct sbrec_logical_flow *sbflow;
-    SBREC_LOGICAL_FLOW_TABLE_FOR_EACH_SAFE (sbflow,
-                                     input_data->sbrec_logical_flow_table) {
-        struct sbrec_logical_dp_group *dp_group = sbflow->logical_dp_group;
-        struct ovn_datapath *logical_datapath_od = NULL;
-        size_t i;
-
-        /* Find one valid datapath to get the datapath type. */
-        struct sbrec_datapath_binding *dp = sbflow->logical_datapath;
-        if (dp) {
-            logical_datapath_od = ovn_datapath_from_sbrec(
-                                        &input_data->ls_datapaths->datapaths,
-                                        &input_data->lr_datapaths->datapaths,
-                                        dp);
-            if (logical_datapath_od
-                && ovn_datapath_is_stale(logical_datapath_od)) {
-                logical_datapath_od = NULL;
-            }
-        }
-        for (i = 0; dp_group && i < dp_group->n_datapaths; i++) {
-            logical_datapath_od = ovn_datapath_from_sbrec(
-                                        &input_data->ls_datapaths->datapaths,
-                                        &input_data->lr_datapaths->datapaths,
-                                        dp_group->datapaths[i]);
-            if (logical_datapath_od
-                && !ovn_datapath_is_stale(logical_datapath_od)) {
-                break;
-            }
-            logical_datapath_od = NULL;
-        }
-
-        if (!logical_datapath_od) {
-            /* This lflow has no valid logical datapaths. */
-            sbrec_logical_flow_delete(sbflow);
-            continue;
-        }
-
-        enum ovn_pipeline pipeline
-            = !strcmp(sbflow->pipeline, "ingress") ? P_IN : P_OUT;
-
-        lflow = ovn_lflow_find(
-            lflows, dp_group ? NULL : logical_datapath_od,
-            ovn_stage_build(ovn_datapath_get_type(logical_datapath_od),
-                            pipeline, sbflow->table_id),
-            sbflow->priority, sbflow->match, sbflow->actions,
-            sbflow->controller_meter, sbflow->hash);
-        if (lflow) {
-            struct hmap *dp_groups;
-            size_t n_datapaths;
-            bool is_switch;
-
-            lflow->sb_uuid = sbflow->header_.uuid;
-            is_switch = ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH;
-            if (is_switch) {
-                n_datapaths = ods_size(input_data->ls_datapaths);
-                dp_groups = &ls_dp_groups;
-            } else {
-                n_datapaths = ods_size(input_data->lr_datapaths);
-                dp_groups = &lr_dp_groups;
-            }
-            if (input_data->ovn_internal_version_changed) {
-                const char *stage_name = smap_get_def(&sbflow->external_ids,
-                                                  "stage-name", "");
-                const char *stage_hint = smap_get_def(&sbflow->external_ids,
-                                                  "stage-hint", "");
-                const char *source = smap_get_def(&sbflow->external_ids,
-                                                  "source", "");
-
-                if (strcmp(stage_name, ovn_stage_to_str(lflow->stage))) {
-                    sbrec_logical_flow_update_external_ids_setkey(sbflow,
-                     "stage-name", ovn_stage_to_str(lflow->stage));
-                }
-                if (lflow->stage_hint) {
-                    if (strcmp(stage_hint, lflow->stage_hint)) {
-                        sbrec_logical_flow_update_external_ids_setkey(sbflow,
-                        "stage-hint", lflow->stage_hint);
-                    }
-                }
-                if (lflow->where) {
-                    if (strcmp(source, lflow->where)) {
-                        sbrec_logical_flow_update_external_ids_setkey(sbflow,
-                        "source", lflow->where);
-                    }
-                }
-            }
-
-            if (lflow->od) {
-                sbrec_logical_flow_set_logical_datapath(sbflow, lflow->od->sb);
-                sbrec_logical_flow_set_logical_dp_group(sbflow, NULL);
-            } else {
-                lflow->dpg = ovn_dp_group_get_or_create(
-                                ovnsb_txn, dp_groups, dp_group,
-                                lflow->n_ods, lflow->dpg_bitmap,
-                                n_datapaths, is_switch,
-                                input_data->ls_datapaths,
-                                input_data->lr_datapaths);
-
-                sbrec_logical_flow_set_logical_datapath(sbflow, NULL);
-                sbrec_logical_flow_set_logical_dp_group(sbflow,
-                                                        lflow->dpg->dp_group);
-            }
-
-            /* This lflow updated.  Not needed anymore. */
-            hmap_remove(lflows, &lflow->hmap_node);
-            hmap_insert(&lflows_temp, &lflow->hmap_node,
-                        hmap_node_hash(&lflow->hmap_node));
-        } else {
-            sbrec_logical_flow_delete(sbflow);
-        }
-    }
-
-    HMAP_FOR_EACH_SAFE (lflow, hmap_node, lflows) {
-        const char *pipeline = ovn_stage_get_pipeline_name(lflow->stage);
-        uint8_t table = ovn_stage_get_table(lflow->stage);
-        struct hmap *dp_groups;
-        size_t n_datapaths;
-        bool is_switch;
-
-        is_switch = ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH;
-        if (is_switch) {
-            n_datapaths = ods_size(input_data->ls_datapaths);
-            dp_groups = &ls_dp_groups;
-        } else {
-            n_datapaths = ods_size(input_data->lr_datapaths);
-            dp_groups = &lr_dp_groups;
-        }
-
-        lflow->sb_uuid = uuid_random();
-        sbflow = sbrec_logical_flow_insert_persist_uuid(ovnsb_txn,
-                                                        &lflow->sb_uuid);
-        if (lflow->od) {
-            sbrec_logical_flow_set_logical_datapath(sbflow, lflow->od->sb);
-        } else {
-            lflow->dpg = ovn_dp_group_get_or_create(
-                                ovnsb_txn, dp_groups, NULL,
-                                lflow->n_ods, lflow->dpg_bitmap,
-                                n_datapaths, is_switch,
-                                input_data->ls_datapaths,
-                                input_data->lr_datapaths);
-
-            sbrec_logical_flow_set_logical_dp_group(sbflow,
-                                                    lflow->dpg->dp_group);
-        }
-
-        sbrec_logical_flow_set_pipeline(sbflow, pipeline);
-        sbrec_logical_flow_set_table_id(sbflow, table);
-        sbrec_logical_flow_set_priority(sbflow, lflow->priority);
-        sbrec_logical_flow_set_match(sbflow, lflow->match);
-        sbrec_logical_flow_set_actions(sbflow, lflow->actions);
-        if (lflow->io_port) {
-            struct smap tags = SMAP_INITIALIZER(&tags);
-            smap_add(&tags, "in_out_port", lflow->io_port);
-            sbrec_logical_flow_set_tags(sbflow, &tags);
-            smap_destroy(&tags);
-        }
-        sbrec_logical_flow_set_controller_meter(sbflow, lflow->ctrl_meter);
-
-        /* Trim the source locator lflow->where, which looks something like
-         * "ovn/northd/northd.c:1234", down to just the part following the
-         * last slash, e.g. "northd.c:1234". */
-        const char *slash = strrchr(lflow->where, '/');
-#if _WIN32
-        const char *backslash = strrchr(lflow->where, '\\');
-        if (!slash || backslash > slash) {
-            slash = backslash;
-        }
-#endif
-        const char *where = slash ? slash + 1 : lflow->where;
-
-        struct smap ids = SMAP_INITIALIZER(&ids);
-        smap_add(&ids, "stage-name", ovn_stage_to_str(lflow->stage));
-        smap_add(&ids, "source", where);
-        if (lflow->stage_hint) {
-            smap_add(&ids, "stage-hint", lflow->stage_hint);
-        }
-        sbrec_logical_flow_set_external_ids(sbflow, &ids);
-        smap_destroy(&ids);
-        hmap_remove(lflows, &lflow->hmap_node);
-        hmap_insert(&lflows_temp, &lflow->hmap_node,
-                    hmap_node_hash(&lflow->hmap_node));
-    }
-    hmap_swap(lflows, &lflows_temp);
-    hmap_destroy(&lflows_temp);
+    lflow_table_sync_to_sb(lflows, ovnsb_txn, input_data->ls_datapaths,
+                           input_data->lr_datapaths,
+                           input_data->ovn_internal_version_changed,
+                           input_data->sbrec_logical_flow_table,
+                           input_data->sbrec_logical_dp_group_table);
 
     stopwatch_stop(LFLOWS_TO_SB_STOPWATCH_NAME, time_msec());
-    struct ovn_dp_group *dpg;
-    HMAP_FOR_EACH_POP (dpg, node, &ls_dp_groups) {
-        bitmap_free(dpg->bitmap);
-        free(dpg);
-    }
-    hmap_destroy(&ls_dp_groups);
-    HMAP_FOR_EACH_POP (dpg, node, &lr_dp_groups) {
-        bitmap_free(dpg->bitmap);
-        free(dpg);
-    }
-    hmap_destroy(&lr_dp_groups);
 
     /* Push changes to the Multicast_Group table to database. */
     const struct sbrec_multicast_group *sbmc;
-    SBREC_MULTICAST_GROUP_TABLE_FOR_EACH_SAFE (sbmc,
-                                input_data->sbrec_multicast_group_table) {
+    SBREC_MULTICAST_GROUP_TABLE_FOR_EACH_SAFE (
+            sbmc, input_data->sbrec_multicast_group_table) {
         struct ovn_datapath *od = ovn_datapath_from_sbrec(
-                                       &input_data->ls_datapaths->datapaths,
-                                       &input_data->lr_datapaths->datapaths,
-                                       sbmc->datapath);
+            &input_data->ls_datapaths->datapaths,
+            &input_data->lr_datapaths->datapaths,
+            sbmc->datapath);
 
         if (!od || ovn_datapath_is_stale(od)) {
             sbrec_multicast_group_delete(sbmc);
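For reference while reviewing the new sync call earlier in this hunk: the
prototype of lflow_table_sync_to_sb() as inferred from its single call site
in build_lflows().  The authoritative declaration is in the new lflow-mgr.h,
which this patch section does not show, so the parameter types here are an
educated guess from how the call site uses them:

    void lflow_table_sync_to_sb(struct lflow_table *lflows,
                                struct ovsdb_idl_txn *ovnsb_txn,
                                const struct ovn_datapaths *ls_datapaths,
                                const struct ovn_datapaths *lr_datapaths,
                                bool ovn_internal_version_changed,
                                const struct sbrec_logical_flow_table *,
                                const struct sbrec_logical_dp_group_table *);
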
@@ -16978,120 +16112,22 @@  void build_lflows(struct ovsdb_idl_txn *ovnsb_txn,
     hmap_destroy(&mcast_groups);
 }
 
-static void
-sync_lsp_lflows_to_sb(struct ovsdb_idl_txn *ovnsb_txn,
-                      struct lflow_input *lflow_input,
-                      struct hmap *lflows,
-                      struct ovn_lflow *lflow)
-{
-    size_t n_datapaths;
-    struct ovn_datapath **datapaths_array;
-    if (ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH) {
-        n_datapaths = ods_size(lflow_input->ls_datapaths);
-        datapaths_array = lflow_input->ls_datapaths->array;
-    } else {
-        n_datapaths = ods_size(lflow_input->lr_datapaths);
-        datapaths_array = lflow_input->lr_datapaths->array;
-    }
-    uint32_t n_ods = bitmap_count1(lflow->dpg_bitmap, n_datapaths);
-    ovs_assert(n_ods == 1);
-    /* There is only one datapath, so it should be moved out of the
-     * group to a single 'od'. */
-    size_t index = bitmap_scan(lflow->dpg_bitmap, true, 0,
-                               n_datapaths);
-
-    bitmap_set0(lflow->dpg_bitmap, index);
-    lflow->od = datapaths_array[index];
-
-    /* Logical flow should be re-hashed to allow lookups. */
-    uint32_t hash = hmap_node_hash(&lflow->hmap_node);
-    /* Remove from lflows. */
-    hmap_remove(lflows, &lflow->hmap_node);
-    hash = ovn_logical_flow_hash_datapath(&lflow->od->sb->header_.uuid,
-                                          hash);
-    /* Add back. */
-    hmap_insert(lflows, &lflow->hmap_node, hash);
-
-    /* Sync to SB. */
-    const struct sbrec_logical_flow *sbflow;
-    /* Note: uuid_random acquires a global mutex. If we parallelize the sync to
-     * SB this may become a bottleneck. */
-    lflow->sb_uuid = uuid_random();
-    sbflow = sbrec_logical_flow_insert_persist_uuid(ovnsb_txn,
-                                                    &lflow->sb_uuid);
-    const char *pipeline = ovn_stage_get_pipeline_name(lflow->stage);
-    uint8_t table = ovn_stage_get_table(lflow->stage);
-    sbrec_logical_flow_set_logical_datapath(sbflow, lflow->od->sb);
-    sbrec_logical_flow_set_logical_dp_group(sbflow, NULL);
-    sbrec_logical_flow_set_pipeline(sbflow, pipeline);
-    sbrec_logical_flow_set_table_id(sbflow, table);
-    sbrec_logical_flow_set_priority(sbflow, lflow->priority);
-    sbrec_logical_flow_set_match(sbflow, lflow->match);
-    sbrec_logical_flow_set_actions(sbflow, lflow->actions);
-    if (lflow->io_port) {
-        struct smap tags = SMAP_INITIALIZER(&tags);
-        smap_add(&tags, "in_out_port", lflow->io_port);
-        sbrec_logical_flow_set_tags(sbflow, &tags);
-        smap_destroy(&tags);
-    }
-    sbrec_logical_flow_set_controller_meter(sbflow, lflow->ctrl_meter);
-    /* Trim the source locator lflow->where, which looks something like
-     * "ovn/northd/northd.c:1234", down to just the part following the
-     * last slash, e.g. "northd.c:1234". */
-    const char *slash = strrchr(lflow->where, '/');
-#if _WIN32
-    const char *backslash = strrchr(lflow->where, '\\');
-    if (!slash || backslash > slash) {
-        slash = backslash;
-    }
-#endif
-    const char *where = slash ? slash + 1 : lflow->where;
-
-    struct smap ids = SMAP_INITIALIZER(&ids);
-    smap_add(&ids, "stage-name", ovn_stage_to_str(lflow->stage));
-    smap_add(&ids, "source", where);
-    if (lflow->stage_hint) {
-        smap_add(&ids, "stage-hint", lflow->stage_hint);
-    }
-    sbrec_logical_flow_set_external_ids(sbflow, &ids);
-    smap_destroy(&ids);
-}
-
-static bool
-delete_lflow_for_lsp(struct ovn_port *op, bool is_update,
-                     const struct sbrec_logical_flow_table *sb_lflow_table,
-                     struct hmap *lflows)
-{
-    struct lflow_ref_node *lfrn;
-    const char *operation = is_update ? "updated" : "deleted";
-    LIST_FOR_EACH_SAFE (lfrn, lflow_list_node, &op->lflows) {
-        VLOG_DBG("Deleting SB lflow "UUID_FMT" for %s port %s",
-                 UUID_ARGS(&lfrn->lflow->sb_uuid), operation, op->key);
-
-        const struct sbrec_logical_flow *sblflow =
-            sbrec_logical_flow_table_get_for_uuid(sb_lflow_table,
-                                              &lfrn->lflow->sb_uuid);
-        if (sblflow) {
-            sbrec_logical_flow_delete(sblflow);
-        } else {
-            static struct vlog_rate_limit rl =
-                VLOG_RATE_LIMIT_INIT(1, 1);
-            VLOG_WARN_RL(&rl, "SB lflow "UUID_FMT" not found when handling "
-                         "%s port %s. Recompute.",
-                         UUID_ARGS(&lfrn->lflow->sb_uuid), operation, op->key);
-            return false;
-        }
+void
+lflow_reset_northd_refs(struct lflow_input *lflow_input)
+{
+    struct ovn_port *op;
 
-        ovn_lflow_destroy(lflows, lfrn->lflow);
+    HMAP_FOR_EACH (op, key_node, lflow_input->ls_ports) {
+        lflow_ref_clear(op->lflow_ref);
+        lflow_ref_clear(op->stateful_lflow_ref);
     }
-    return true;
 }
 
 bool
 lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
                                  struct tracked_ovn_ports *trk_lsps,
                                  struct lflow_input *lflow_input,
-                                 struct hmap *lflows)
+                                 struct lflow_table *lflows)
 {
     struct hmapx_node *hmapx_node;
     struct ovn_port *op;
@@ -17100,13 +16136,15 @@  lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
         op = hmapx_node->data;
         /* Make sure 'op' is an lsp and not lrp. */
         ovs_assert(op->nbsp);
-
-        if (!delete_lflow_for_lsp(op, false,
-                                  lflow_input->sbrec_logical_flow_table,
-                                  lflows)) {
-                return false;
-            }
-
+        bool handled = lflow_ref_resync_flows(
+            op->lflow_ref, lflows, ovnsb_txn, lflow_input->ls_datapaths,
+            lflow_input->lr_datapaths,
+            lflow_input->ovn_internal_version_changed,
+            lflow_input->sbrec_logical_flow_table,
+            lflow_input->sbrec_logical_dp_group_table);
+        if (!handled) {
+            return false;
+        }
         /* No need to update SB multicast groups, thanks to weak
          * references. */
     }
@@ -17115,13 +16153,8 @@  lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
         op = hmapx_node->data;
         /* Make sure 'op' is an lsp and not lrp. */
         ovs_assert(op->nbsp);
-
-        /* Delete old lflows. */
-        if (!delete_lflow_for_lsp(op, true,
-                                  lflow_input->sbrec_logical_flow_table,
-                                  lflows)) {
-            return false;
-        }
+        /* Clear old lflows. */
+        lflow_ref_unlink_lflows(op->lflow_ref);
 
         /* Generate new lflows. */
         struct ds match = DS_EMPTY_INITIALIZER;
@@ -17131,21 +16164,39 @@  lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
                                                  lflow_input->meter_groups,
                                                  &match, &actions,
                                                  lflows);
-        build_lbnat_lflows_iterate_by_lsp(op, lflow_input->lr_stateful_table,
-                                          lflow_input->lr_ports, &match,
-                                          &actions, lflows);
+        /* Sync the new flows to SB. */
+        bool handled = lflow_ref_sync_lflows(
+            op->lflow_ref, lflows, ovnsb_txn, lflow_input->ls_datapaths,
+            lflow_input->lr_datapaths,
+            lflow_input->ovn_internal_version_changed,
+            lflow_input->sbrec_logical_flow_table,
+            lflow_input->sbrec_logical_dp_group_table);
+        if (handled) {
+            /* Now regenerate the stateful lflows for 'op'. */
+            /* Clear old lflows. */
+            lflow_ref_unlink_lflows(op->stateful_lflow_ref);
+            build_lbnat_lflows_iterate_by_lsp(op,
+                                              lflow_input->lr_stateful_table,
+                                              lflow_input->lr_ports, &match,
+                                              &actions, lflows);
+            handled = lflow_ref_sync_lflows(
+                op->stateful_lflow_ref, lflows, ovnsb_txn,
+                lflow_input->ls_datapaths,
+                lflow_input->lr_datapaths,
+                lflow_input->ovn_internal_version_changed,
+                lflow_input->sbrec_logical_flow_table,
+                lflow_input->sbrec_logical_dp_group_table);
+        }
+
         ds_destroy(&match);
         ds_destroy(&actions);
 
+        if (!handled) {
+            return false;
+        }
+
         /* SB port_binding is not deleted, so don't update SB multicast
          * groups. */
-
-        /* Sync the new flows to SB. */
-        struct lflow_ref_node *lfrn;
-        LIST_FOR_EACH (lfrn, lflow_list_node, &op->lflows) {
-            sync_lsp_lflows_to_sb(ovnsb_txn, lflow_input, lflows,
-                                  lfrn->lflow);
-        }
     }
 
     HMAPX_FOR_EACH (hmapx_node, &trk_lsps->created) {
@@ -17170,12 +16221,35 @@  lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
                                                  lflow_input->meter_groups,
                                                  &match, &actions, lflows);
 
-        build_lbnat_lflows_iterate_by_lsp(op, lflow_input->lr_stateful_table,
-                                          lflow_input->lr_ports, &match,
-                                          &actions, lflows);
+        /* Sync the newly added flows to SB. */
+        bool handled = lflow_ref_sync_lflows(
+            op->lflow_ref, lflows, ovnsb_txn, lflow_input->ls_datapaths,
+            lflow_input->lr_datapaths,
+            lflow_input->ovn_internal_version_changed,
+            lflow_input->sbrec_logical_flow_table,
+            lflow_input->sbrec_logical_dp_group_table);
+        if (handled) {
+            /* Now generate the stateful lflows for 'op'. */
+            build_lbnat_lflows_iterate_by_lsp(op,
+                                              lflow_input->lr_stateful_table,
+                                              lflow_input->lr_ports, &match,
+                                              &actions, lflows);
+            handled = lflow_ref_sync_lflows(
+                op->stateful_lflow_ref, lflows, ovnsb_txn,
+                lflow_input->ls_datapaths,
+                lflow_input->lr_datapaths,
+                lflow_input->ovn_internal_version_changed,
+                lflow_input->sbrec_logical_flow_table,
+                lflow_input->sbrec_logical_dp_group_table);
+        }
+
         ds_destroy(&match);
         ds_destroy(&actions);
 
+        if (!handled) {
+            return false;
+        }
+
         /* Update SB multicast groups for the new port. */
         if (!sbmc_flood) {
             sbmc_flood = create_sb_multicast_group(ovnsb_txn,
@@ -17199,13 +16273,6 @@  lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
             sbrec_multicast_group_update_ports_addvalue(sbmc_unknown,
                                                         op->sb);
         }
-
-        /* Sync the newly added flows to SB. */
-        struct lflow_ref_node *lfrn;
-        LIST_FOR_EACH (lfrn, lflow_list_node, &op->lflows) {
-            sync_lsp_lflows_to_sb(ovnsb_txn, lflow_input, lflows,
-                                    lfrn->lflow);
-        }
     }
 
     return true;
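The boolean contract used throughout the rewritten handler above follows the
usual northd incremental-processing convention: returning false means the
change could not be handled incrementally (for example, a referenced SB
Logical_Flow row was not found) and a full recompute is needed.  A
hypothetical caller, sketched only to make that contract explicit; the
wrapper's name and the way it obtains its arguments are illustrative and not
taken from this patch:

    /* Hypothetical wrapper -- illustrative only. */
    static bool
    handle_tracked_lsp_changes(struct ovsdb_idl_txn *ovnsb_txn,
                               struct tracked_ovn_ports *trk_lsps,
                               struct lflow_input *lflow_input,
                               struct lflow_table *lflows)
    {
        if (!lflow_handle_northd_port_changes(ovnsb_txn, trk_lsps,
                                              lflow_input, lflows)) {
            /* Could not resolve everything incrementally; fall back to a
             * full recompute of the lflow table. */
            return false;
        }
        return true;
    }
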
diff --git a/northd/northd.h b/northd/northd.h
index 404abbe5b5..f9370be955 100644
--- a/northd/northd.h
+++ b/northd/northd.h
@@ -23,6 +23,7 @@ 
 #include "northd/en-port-group.h"
 #include "northd/ipam.h"
 #include "openvswitch/hmap.h"
+#include "ovs-thread.h"
 
 struct northd_input {
     /* Northbound table references */
@@ -164,13 +165,6 @@  struct northd_data {
     struct northd_tracked_data trk_data;
 };
 
-struct lflow_data {
-    struct hmap lflows;
-};
-
-void lflow_data_init(struct lflow_data *);
-void lflow_data_destroy(struct lflow_data *);
-
 struct lr_nat_table;
 
 struct lflow_input {
@@ -182,6 +176,7 @@  struct lflow_input {
     const struct sbrec_logical_flow_table *sbrec_logical_flow_table;
     const struct sbrec_multicast_group_table *sbrec_multicast_group_table;
     const struct sbrec_igmp_group_table *sbrec_igmp_group_table;
+    const struct sbrec_logical_dp_group_table *sbrec_logical_dp_group_table;
 
     /* Indexes */
     struct ovsdb_idl_index *sbrec_mcast_group_by_name_dp;
@@ -201,6 +196,15 @@  struct lflow_input {
     bool ovn_internal_version_changed;
 };
 
+extern int parallelization_state;
+enum {
+    STATE_NULL,               /* parallelization is off */
+    STATE_INIT_HASH_SIZES,    /* parallelization is on; hashes sizing needed */
+    STATE_USE_PARALLELIZATION /* parallelization is on */
+};
+
+extern thread_local size_t thread_lflow_counter;
+
 /*
  * Multicast snooping and querier per datapath configuration.
  */
@@ -351,6 +355,179 @@  ovn_datapaths_find_by_index(const struct ovn_datapaths *ovn_datapaths,
     return ovn_datapaths->array[od_index];
 }
 
+struct ovn_datapath *ovn_datapath_from_sbrec(
+    const struct hmap *ls_datapaths, const struct hmap *lr_datapaths,
+    const struct sbrec_datapath_binding *);
+
+static inline bool
+ovn_datapath_is_stale(const struct ovn_datapath *od)
+{
+    return !od->nbr && !od->nbs;
+}
+
+/* Pipeline stages. */
+
+/* The two purposes for which ovn-northd uses OVN logical datapaths. */
+enum ovn_datapath_type {
+    DP_SWITCH,                  /* OVN logical switch. */
+    DP_ROUTER                   /* OVN logical router. */
+};
+
+/* Returns an "enum ovn_stage" built from the arguments.
+ *
+ * (It's better to use ovn_stage_build() for type-safety reasons, but inline
+ * functions can't be used in enums or switch cases.) */
+#define OVN_STAGE_BUILD(DP_TYPE, PIPELINE, TABLE) \
+    (((DP_TYPE) << 9) | ((PIPELINE) << 8) | (TABLE))
+
+/* A stage within an OVN logical switch or router.
+ *
+ * An "enum ovn_stage" indicates whether the stage is part of a logical switch
+ * or router, whether the stage is part of the ingress or egress pipeline, and
+ * the table within that pipeline.  The first three components are combined to
+ * form the stage's full name, e.g. S_SWITCH_IN_PORT_SEC_L2,
+ * S_ROUTER_OUT_DELIVERY. */
+enum ovn_stage {
+#define PIPELINE_STAGES                                                   \
+    /* Logical switch ingress stages. */                                  \
+    PIPELINE_STAGE(SWITCH, IN,  CHECK_PORT_SEC, 0, "ls_in_check_port_sec")   \
+    PIPELINE_STAGE(SWITCH, IN,  APPLY_PORT_SEC, 1, "ls_in_apply_port_sec")   \
+    PIPELINE_STAGE(SWITCH, IN,  LOOKUP_FDB,     2, "ls_in_lookup_fdb")    \
+    PIPELINE_STAGE(SWITCH, IN,  PUT_FDB,        3, "ls_in_put_fdb")       \
+    PIPELINE_STAGE(SWITCH, IN,  PRE_ACL,        4, "ls_in_pre_acl")       \
+    PIPELINE_STAGE(SWITCH, IN,  PRE_LB,         5, "ls_in_pre_lb")        \
+    PIPELINE_STAGE(SWITCH, IN,  PRE_STATEFUL,   6, "ls_in_pre_stateful")  \
+    PIPELINE_STAGE(SWITCH, IN,  ACL_HINT,       7, "ls_in_acl_hint")      \
+    PIPELINE_STAGE(SWITCH, IN,  ACL_EVAL,       8, "ls_in_acl_eval")      \
+    PIPELINE_STAGE(SWITCH, IN,  ACL_ACTION,     9, "ls_in_acl_action")    \
+    PIPELINE_STAGE(SWITCH, IN,  QOS_MARK,      10, "ls_in_qos_mark")      \
+    PIPELINE_STAGE(SWITCH, IN,  QOS_METER,     11, "ls_in_qos_meter")     \
+    PIPELINE_STAGE(SWITCH, IN,  LB_AFF_CHECK,  12, "ls_in_lb_aff_check")  \
+    PIPELINE_STAGE(SWITCH, IN,  LB,            13, "ls_in_lb")            \
+    PIPELINE_STAGE(SWITCH, IN,  LB_AFF_LEARN,  14, "ls_in_lb_aff_learn")  \
+    PIPELINE_STAGE(SWITCH, IN,  PRE_HAIRPIN,   15, "ls_in_pre_hairpin")   \
+    PIPELINE_STAGE(SWITCH, IN,  NAT_HAIRPIN,   16, "ls_in_nat_hairpin")   \
+    PIPELINE_STAGE(SWITCH, IN,  HAIRPIN,       17, "ls_in_hairpin")       \
+    PIPELINE_STAGE(SWITCH, IN,  ACL_AFTER_LB_EVAL,  18, \
+                   "ls_in_acl_after_lb_eval")  \
+    PIPELINE_STAGE(SWITCH, IN,  ACL_AFTER_LB_ACTION,  19, \
+                   "ls_in_acl_after_lb_action")  \
+    PIPELINE_STAGE(SWITCH, IN,  STATEFUL,      20, "ls_in_stateful")      \
+    PIPELINE_STAGE(SWITCH, IN,  ARP_ND_RSP,    21, "ls_in_arp_rsp")       \
+    PIPELINE_STAGE(SWITCH, IN,  DHCP_OPTIONS,  22, "ls_in_dhcp_options")  \
+    PIPELINE_STAGE(SWITCH, IN,  DHCP_RESPONSE, 23, "ls_in_dhcp_response") \
+    PIPELINE_STAGE(SWITCH, IN,  DNS_LOOKUP,    24, "ls_in_dns_lookup")    \
+    PIPELINE_STAGE(SWITCH, IN,  DNS_RESPONSE,  25, "ls_in_dns_response")  \
+    PIPELINE_STAGE(SWITCH, IN,  EXTERNAL_PORT, 26, "ls_in_external_port") \
+    PIPELINE_STAGE(SWITCH, IN,  L2_LKUP,       27, "ls_in_l2_lkup")       \
+    PIPELINE_STAGE(SWITCH, IN,  L2_UNKNOWN,    28, "ls_in_l2_unknown")    \
+                                                                          \
+    /* Logical switch egress stages. */                                   \
+    PIPELINE_STAGE(SWITCH, OUT, PRE_ACL,      0, "ls_out_pre_acl")        \
+    PIPELINE_STAGE(SWITCH, OUT, PRE_LB,       1, "ls_out_pre_lb")         \
+    PIPELINE_STAGE(SWITCH, OUT, PRE_STATEFUL, 2, "ls_out_pre_stateful")   \
+    PIPELINE_STAGE(SWITCH, OUT, ACL_HINT,     3, "ls_out_acl_hint")       \
+    PIPELINE_STAGE(SWITCH, OUT, ACL_EVAL,     4, "ls_out_acl_eval")       \
+    PIPELINE_STAGE(SWITCH, OUT, ACL_ACTION,   5, "ls_out_acl_action")     \
+    PIPELINE_STAGE(SWITCH, OUT, QOS_MARK,     6, "ls_out_qos_mark")       \
+    PIPELINE_STAGE(SWITCH, OUT, QOS_METER,    7, "ls_out_qos_meter")      \
+    PIPELINE_STAGE(SWITCH, OUT, STATEFUL,     8, "ls_out_stateful")       \
+    PIPELINE_STAGE(SWITCH, OUT, CHECK_PORT_SEC,  9, "ls_out_check_port_sec") \
+    PIPELINE_STAGE(SWITCH, OUT, APPLY_PORT_SEC, 10, "ls_out_apply_port_sec") \
+                                                                      \
+    /* Logical router ingress stages. */                              \
+    PIPELINE_STAGE(ROUTER, IN,  ADMISSION,       0, "lr_in_admission")    \
+    PIPELINE_STAGE(ROUTER, IN,  LOOKUP_NEIGHBOR, 1, "lr_in_lookup_neighbor") \
+    PIPELINE_STAGE(ROUTER, IN,  LEARN_NEIGHBOR,  2, "lr_in_learn_neighbor") \
+    PIPELINE_STAGE(ROUTER, IN,  IP_INPUT,        3, "lr_in_ip_input")     \
+    PIPELINE_STAGE(ROUTER, IN,  UNSNAT,          4, "lr_in_unsnat")       \
+    PIPELINE_STAGE(ROUTER, IN,  DEFRAG,          5, "lr_in_defrag")       \
+    PIPELINE_STAGE(ROUTER, IN,  LB_AFF_CHECK,    6, "lr_in_lb_aff_check") \
+    PIPELINE_STAGE(ROUTER, IN,  DNAT,            7, "lr_in_dnat")         \
+    PIPELINE_STAGE(ROUTER, IN,  LB_AFF_LEARN,    8, "lr_in_lb_aff_learn") \
+    PIPELINE_STAGE(ROUTER, IN,  ECMP_STATEFUL,   9, "lr_in_ecmp_stateful") \
+    PIPELINE_STAGE(ROUTER, IN,  ND_RA_OPTIONS,   10, "lr_in_nd_ra_options") \
+    PIPELINE_STAGE(ROUTER, IN,  ND_RA_RESPONSE,  11, "lr_in_nd_ra_response") \
+    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_PRE,  12, "lr_in_ip_routing_pre")  \
+    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING,      13, "lr_in_ip_routing")      \
+    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_ECMP, 14, "lr_in_ip_routing_ecmp") \
+    PIPELINE_STAGE(ROUTER, IN,  POLICY,          15, "lr_in_policy")          \
+    PIPELINE_STAGE(ROUTER, IN,  POLICY_ECMP,     16, "lr_in_policy_ecmp")     \
+    PIPELINE_STAGE(ROUTER, IN,  ARP_RESOLVE,     17, "lr_in_arp_resolve")     \
+    PIPELINE_STAGE(ROUTER, IN,  CHK_PKT_LEN,     18, "lr_in_chk_pkt_len")     \
+    PIPELINE_STAGE(ROUTER, IN,  LARGER_PKTS,     19, "lr_in_larger_pkts")     \
+    PIPELINE_STAGE(ROUTER, IN,  GW_REDIRECT,     20, "lr_in_gw_redirect")     \
+    PIPELINE_STAGE(ROUTER, IN,  ARP_REQUEST,     21, "lr_in_arp_request")     \
+                                                                      \
+    /* Logical router egress stages. */                               \
+    PIPELINE_STAGE(ROUTER, OUT, CHECK_DNAT_LOCAL,   0,                       \
+                   "lr_out_chk_dnat_local")                                  \
+    PIPELINE_STAGE(ROUTER, OUT, UNDNAT,             1, "lr_out_undnat")      \
+    PIPELINE_STAGE(ROUTER, OUT, POST_UNDNAT,        2, "lr_out_post_undnat") \
+    PIPELINE_STAGE(ROUTER, OUT, SNAT,               3, "lr_out_snat")        \
+    PIPELINE_STAGE(ROUTER, OUT, POST_SNAT,          4, "lr_out_post_snat")   \
+    PIPELINE_STAGE(ROUTER, OUT, EGR_LOOP,           5, "lr_out_egr_loop")    \
+    PIPELINE_STAGE(ROUTER, OUT, DELIVERY,           6, "lr_out_delivery")
+
+#define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)   \
+    S_##DP_TYPE##_##PIPELINE##_##STAGE                          \
+        = OVN_STAGE_BUILD(DP_##DP_TYPE, P_##PIPELINE, TABLE),
+    PIPELINE_STAGES
+#undef PIPELINE_STAGE
+};
+
+enum ovn_datapath_type ovn_stage_to_datapath_type(enum ovn_stage stage);
+
+
+/* Returns 'od''s datapath type. */
+static inline enum ovn_datapath_type
+ovn_datapath_get_type(const struct ovn_datapath *od)
+{
+    return od->nbs ? DP_SWITCH : DP_ROUTER;
+}
+
+/* Returns an "enum ovn_stage" built from the arguments. */
+static inline enum ovn_stage
+ovn_stage_build(enum ovn_datapath_type dp_type, enum ovn_pipeline pipeline,
+                uint8_t table)
+{
+    return OVN_STAGE_BUILD(dp_type, pipeline, table);
+}
+
+/* Returns the pipeline to which 'stage' belongs. */
+static inline enum ovn_pipeline
+ovn_stage_get_pipeline(enum ovn_stage stage)
+{
+    return (stage >> 8) & 1;
+}
+
+/* Returns the pipeline name to which 'stage' belongs. */
+static inline const char *
+ovn_stage_get_pipeline_name(enum ovn_stage stage)
+{
+    return ovn_stage_get_pipeline(stage) == P_IN ? "ingress" : "egress";
+}
+
+/* Returns the table to which 'stage' belongs. */
+static inline uint8_t
+ovn_stage_get_table(enum ovn_stage stage)
+{
+    return stage & 0xff;
+}
+
+/* Returns a string name for 'stage'. */
+static inline const char *
+ovn_stage_to_str(enum ovn_stage stage)
+{
+    switch (stage) {
+#define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)       \
+        case S_##DP_TYPE##_##PIPELINE##_##STAGE: return NAME;
+    PIPELINE_STAGES
+#undef PIPELINE_STAGE
+        default: return "<unknown>";
+    }
+}
+
 /* A logical switch port or logical router port.
  *
  * In steady state, an ovn_port points to a northbound Logical_Switch_Port
@@ -441,8 +618,10 @@  struct ovn_port {
     /* Temporarily used for traversing a list (or hmap) of ports. */
     bool visited;
 
-    /* List of struct lflow_ref_node that points to the lflows generated by
-     * this ovn_port.
+    /* Only used for router-type LSPs whose peer is an l3dgw_port. */
+    bool enable_router_port_acl;
+
+    /* References to the lflows generated for this ovn_port.
      *
      * This data is initialized and destroyed by the en_northd node, but
      * populated and used only by the en_lflow node. Ideally this data should
@@ -460,11 +639,19 @@  struct ovn_port {
      * Adding the list here is more straightforward. The drawback is that we
      * need to keep in mind that this data belongs to en_lflow node, so never
      * access it from any other nodes.
+     *
+     * 'lflow_ref' is used to reference the generic logical flows generated
+     * for this ovn_port.
+     *
+     * 'stateful_lflow_ref' is used for logical switch ports of type
+     * 'patch'/'router' to reference the logical flows generated for this
+     * ovn_port from the 'lr_stateful' record of the peer port's datapath.
+     *
+     * Note: lflow_ref is not thread-safe.  Only one thread should access
+     * an ovn_port's lflow_ref at any given time.
      */
-    struct ovs_list lflows;
-
-    /* Only used for the router type LSP whose peer is l3dgw_port */
-    bool enable_router_port_acl;
+    struct lflow_ref *lflow_ref;
+    struct lflow_ref *stateful_lflow_ref;
 };
 
 void ovnnb_db_run(struct northd_input *input_data,
@@ -487,13 +674,17 @@  void northd_destroy(struct northd_data *data);
 void northd_init(struct northd_data *data);
 void northd_indices_create(struct northd_data *data,
                            struct ovsdb_idl *ovnsb_idl);
+
+struct lflow_table;
 void build_lflows(struct ovsdb_idl_txn *ovnsb_txn,
                   struct lflow_input *input_data,
-                  struct hmap *lflows);
+                  struct lflow_table *);
+void lflow_reset_northd_refs(struct lflow_input *);
+
 bool lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
                                       struct tracked_ovn_ports *,
                                       struct lflow_input *,
-                                      struct hmap *lflows);
+                                      struct lflow_table *lflows);
 bool northd_handle_sb_port_binding_changes(
     const struct sbrec_port_binding_table *, struct hmap *ls_ports,
     struct hmap *lr_ports);
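
For reviewers, here is a minimal, self-contained sketch of the ovn_stage bit
layout introduced above: the datapath type sits in bit 9, the pipeline in
bit 8, and the table id in bits 0-7.  The SKETCH_* names, and the assumption
that P_IN/P_OUT encode to 0/1 (which is what the accessor helpers rely on),
are illustrative only and are not part of the patch:

    #include <assert.h>
    #include <stdint.h>

    enum sketch_dp_type  { SKETCH_DP_SWITCH, SKETCH_DP_ROUTER };
    enum sketch_pipeline { SKETCH_P_IN, SKETCH_P_OUT };

    /* Mirrors the OVN_STAGE_BUILD() bit packing shown above. */
    #define SKETCH_STAGE_BUILD(DP, PIPE, TABLE) \
        (((DP) << 9) | ((PIPE) << 8) | (TABLE))

    int
    main(void)
    {
        uint16_t stage = SKETCH_STAGE_BUILD(SKETCH_DP_ROUTER, SKETCH_P_IN, 13);

        assert(((stage >> 9) & 1) == SKETCH_DP_ROUTER); /* datapath type */
        assert(((stage >> 8) & 1) == SKETCH_P_IN);      /* pipeline */
        assert((stage & 0xff) == 13);                   /* table id */

        return 0;
    }

Because the table id occupies the low 8 bits, ovn_stage_get_table() reduces
to 'stage & 0xff', and the whole stage value fits in 10 bits.
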
diff --git a/northd/ovn-northd.c b/northd/ovn-northd.c
index deb3194cbd..0c0c00ca6d 100644
--- a/northd/ovn-northd.c
+++ b/northd/ovn-northd.c
@@ -856,6 +856,10 @@  main(int argc, char *argv[])
         ovsdb_idl_omit_alert(ovnsb_idl_loop.idl,
                              &sbrec_port_group_columns[i]);
     }
+    for (size_t i = 0; i < SBREC_LOGICAL_DP_GROUP_N_COLUMNS; i++) {
+        ovsdb_idl_omit_alert(ovnsb_idl_loop.idl,
+                             &sbrec_logical_dp_group_columns[i]);
+    }
 
     unixctl_command_register("sb-connection-status", "", 0, 0,
                              ovn_conn_show, ovnsb_idl_loop.idl);
diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at
index f5cf4f25c9..25e45506b7 100644
--- a/tests/ovn-northd.at
+++ b/tests/ovn-northd.at
@@ -11352,6 +11352,222 @@  CHECK_NO_CHANGE_AFTER_RECOMPUTE
 AT_CLEANUP
 ])
 
+OVN_FOR_EACH_NORTHD_NO_HV([
+AT_SETUP([Load balancer incremental processing for multiple LBs with same VIPs])
+ovn_start
+
+check ovn-nbctl ls-add sw0
+check ovn-nbctl ls-add sw1
+check ovn-nbctl lb-add lb1 10.0.0.10:80 10.0.0.3:80
+check ovn-nbctl --wait=sb lb-add lb2 10.0.0.10:80 10.0.0.3:80
+
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
+check ovn-nbctl --wait=sb ls-lb-add sw0 lb1
+check_engine_stats lflow recompute nocompute
+CHECK_NO_CHANGE_AFTER_RECOMPUTE
+
+lb_lflow_uuid=$(fetch_column Logical_flow _uuid match='"ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80"')
+sw0_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw0)
+
+lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dp" = "$sw0_uuid"])
+
+lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dpgrp" = ""])
+
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
+check ovn-nbctl --wait=sb ls-lb-add sw1 lb2
+check_engine_stats lflow recompute nocompute
+CHECK_NO_CHANGE_AFTER_RECOMPUTE
+
+lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dp" = ""])
+
+lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dpgrp" != ""])
+
+# Clear the SB:Logical_Flow.logical_dp_group column of all the
+# logical flows and then modify the NB:Load_Balancer.  ovn-northd
+# should resync the logical flows.
+for l in $(ovn-sbctl --bare --columns _uuid list logical_flow)
+do
+    ovn-sbctl clear logical_flow $l logical_dp_group
+done
+
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
+check ovn-nbctl --wait=sb set load_balancer lb1 options:foo=bar
+check_engine_stats lflow recompute nocompute
+CHECK_NO_CHANGE_AFTER_RECOMPUTE
+
+lb_lflow_uuid=$(fetch_column Logical_flow _uuid match='"ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80"')
+
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
+check ovn-nbctl --wait=sb clear load_balancer lb2 vips
+check_engine_stats lflow recompute nocompute
+CHECK_NO_CHANGE_AFTER_RECOMPUTE
+
+lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dp" = "$sw0_uuid"])
+
+lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dpgrp" = ""])
+
+# Add back the vip to lb2.
+check ovn-nbctl lb-add lb2 10.0.0.10:80 10.0.0.3:80
+
+# Create additional logical switches and associate lb1 with sw0, sw1 and sw2,
+# and lb2 with sw3, sw4 and sw5.
+check ovn-nbctl ls-add sw2
+check ovn-nbctl ls-add sw3
+check ovn-nbctl ls-add sw4
+check ovn-nbctl ls-add sw5
+check ovn-nbctl --wait=sb ls-lb-del sw1 lb2
+check ovn-nbctl ls-lb-add sw1 lb1
+check ovn-nbctl ls-lb-add sw2 lb1
+check ovn-nbctl ls-lb-add sw3 lb2
+check ovn-nbctl ls-lb-add sw4 lb2
+
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
+check ovn-nbctl --wait=sb ls-lb-add sw5 lb2
+check_engine_stats lflow recompute nocompute
+CHECK_NO_CHANGE_AFTER_RECOMPUTE
+
+lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dp" = ""])
+
+lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dpgrp" != ""])
+
+sw1_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw1)
+sw2_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw2)
+sw3_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw3)
+sw4_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw4)
+sw5_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw5)
+
+dpgrp_dps=$(ovn-sbctl --bare --columns datapaths list logical_dp_group $lb_lflow_dpgrp)
+
+AT_CHECK([echo $dpgrp_dps | grep $sw0_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw1_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw2_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw3_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw4_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw5_uuid], [0], [ignore])
+
+echo "dpgrp_dps - $dpgrp_dps"
+
+# Clear the vips for lb2.  The lb logical flow's dp group should then contain
+# only the sw0, sw1 and sw2 uuids.
+
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
+check ovn-nbctl --wait=sb clear load_balancer lb2 vips
+check_engine_stats lflow recompute nocompute
+CHECK_NO_CHANGE_AFTER_RECOMPUTE
+
+lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dpgrp" != ""])
+
+dpgrp_dps=$(ovn-sbctl --bare --columns datapaths list logical_dp_group $lb_lflow_dpgrp)
+
+AT_CHECK([echo $dpgrp_dps | grep $sw0_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw1_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw2_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw3_uuid], [1], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw4_uuid], [1], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw5_uuid], [1], [ignore])
+
+# Clear the vips for lb1.  The logical flow should be deleted.
+check ovn-nbctl --wait=sb clear load_balancer lb1 vips
+
+AT_CHECK([ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid], [1], [ignore], [ignore])
+
+lb_lflow_uuid=$(fetch_column Logical_flow _uuid match='"ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80"')
+AT_CHECK([test "$lb_lflow_uuid" = ""])
+
+
+# Now add back the vips, create another lb with the same vips and associate
+# it with sw0 and sw1.
+check ovn-nbctl lb-add lb1 10.0.0.10:80 10.0.0.3:80
+check ovn-nbctl lb-add lb2 10.0.0.10:80 10.0.0.3:80
+check ovn-nbctl --wait=sb lb-add lb3 10.0.0.10:80 10.0.0.3:80
+
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
+
+check ovn-nbctl ls-lb-add sw0 lb3
+check ovn-nbctl --wait=sb ls-lb-add sw1 lb3
+check_engine_stats lflow recompute nocompute
+CHECK_NO_CHANGE_AFTER_RECOMPUTE
+
+lb_lflow_uuid=$(fetch_column Logical_flow _uuid match='"ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80"')
+
+lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dp" = ""])
+
+lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dpgrp" != ""])
+
+dpgrp_dps=$(ovn-sbctl --bare --columns datapaths list logical_dp_group $lb_lflow_dpgrp)
+
+AT_CHECK([echo $dpgrp_dps | grep $sw0_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw1_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw2_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw3_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw4_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw5_uuid], [0], [ignore])
+
+# Now clear lb1 vips.
+# Since lb3 is associated with sw0 and sw1, the logical flow's dp group
+# should reference sw0 and sw1, but not sw2.
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
+check ovn-nbctl --wait=sb clear load_balancer lb1 vips
+check_engine_stats lflow recompute nocompute
+CHECK_NO_CHANGE_AFTER_RECOMPUTE
+
+lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dp" = ""])
+
+lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dpgrp" != ""])
+
+dpgrp_dps=$(ovn-sbctl --bare --columns datapaths list logical_dp_group $lb_lflow_dpgrp)
+
+echo "dpgrp dps - $dpgrp_dps"
+
+AT_CHECK([echo $dpgrp_dps | grep $sw0_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw1_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw2_uuid], [1], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw3_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw4_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw5_uuid], [0], [ignore])
+
+# Now clear lb3 vips.  The logical flow's dp group
+# should reference only sw3, sw4 and sw5 because lb2 is
+# associated with them.
+
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
+check ovn-nbctl --wait=sb clear load_balancer lb3 vips
+check_engine_stats lflow recompute nocompute
+CHECK_NO_CHANGE_AFTER_RECOMPUTE
+
+lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dp" = ""])
+
+lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dpgrp" != ""])
+
+dpgrp_dps=$(ovn-sbctl --bare --columns datapaths list logical_dp_group $lb_lflow_dpgrp)
+
+echo "dpgrp dps - $dpgrp_dps"
+
+AT_CHECK([echo $dpgrp_dps | grep $sw0_uuid], [1], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw1_uuid], [1], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw2_uuid], [1], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw3_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw4_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw5_uuid], [0], [ignore])
+
+AT_CLEANUP
+])
+
 OVN_FOR_EACH_NORTHD_NO_HV([
 AT_SETUP([Logical router incremental processing for NAT])