
[ovs-dev,1/3] Don't save original dst IP and Port to avoid megaflow unwildcarding.

Message ID 20220705224112.2830175-1-hzhou@ovn.org
State Deferred
Series [ovs-dev,1/3] Don't save original dst IP and Port to avoid megaflow unwildcarding.

Checks

Context Check Description
ovsrobot/apply-robot success apply and check: success
ovsrobot/github-robot-_ovn-kubernetes success github build: passed
ovsrobot/github-robot-_Build_and_Test fail github build: failed

Commit Message

Han Zhou July 5, 2022, 10:41 p.m. UTC
The ls_in_pre_stateful priority-120 flow that saves the dst IP and port
to registers causes a critical dataplane performance impact for
short-lived connections, because it unwildcards megaflows with an exact
match on the dst IP and L4 ports. Any new connection with a different
client-side L4 port encounters a datapath flow miss and an upcall to
ovs-vswitchd.
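
To illustrate the effect (a hypothetical, simplified datapath flow dump;
the addresses, ports, and exact field layout are made up for
illustration, not taken from a real run):

```
# With the destination tuple unwildcarded, each new client-side source
# port ends up with its own megaflow, so every short-lived connection
# first misses the megaflow cache and upcalls to ovs-vswitchd:
ipv4(dst=10.0.0.10),tcp(src=34512,dst=80), actions:...
ipv4(dst=10.0.0.10),tcp(src=34513,dst=80), actions:...
```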

These fields (dst IP and port) were saved to registers to solve a
problem in the LB hairpin use case when different VIPs share
overlapping backend+port [0]. The change [0] may not have had as wide a
performance impact at the time, because the match condition
"REGBIT_CONNTRACK_NAT == 1" was then set only for established and
NATted traffic. The impact is more obvious now because
REGBIT_CONNTRACK_NAT is set for all IP traffic (if any VIP is
configured on the LS) since commit [1], after several other indirectly
related optimizations and refactors.

The changes that introduced the performance problem have value of their
own (they fix problems or optimize performance), so we don't want to
revert any of them (and it is also not straightforward to revert any of
them, because there have been many changes and refactors on top).

Change [0] itself added an alternative way to solve the overlapping
backends problem, which uses ct fields instead of saving the dst IP and
port to registers. This patch switches entirely to that approach and
removes the flows/actions that save the dst IP and port, avoiding the
dataplane performance problem for short-lived connections.

(With this approach, matching on the DNAT ct_state is not HW-offload
friendly, so those flows may not be offloaded; this is expected to be
addressed in a follow-up patch.)

[0] ce0ef8d59850 ("Properly handle hairpin traffic for VIPs with shared backends.")
[1] 0038579d1928 ("northd: Optimize ct nat for load balancer traffic.")

Signed-off-by: Han Zhou <hzhou@ovn.org>
---
 controller/lflow.c           |  74 ++----------
 include/ovn/logical-fields.h |   5 -
 lib/lb.c                     |   3 -
 lib/lb.h                     |   3 -
 northd/northd.c              |  95 ++++-----------
 northd/ovn-northd.8.xml      |  30 +----
 tests/ovn-northd.at          |  78 ++++---------
 tests/ovn.at                 | 218 +++++++++++++----------------------
 8 files changed, 141 insertions(+), 365 deletions(-)

Comments

Dumitru Ceara July 6, 2022, 3:45 p.m. UTC | #1
Hi Han,

On 7/6/22 00:41, Han Zhou wrote:
> The ls_in_pre_stateful priority-120 flow that saves the dst IP and port
> to registers causes a critical dataplane performance impact for
> short-lived connections, because it unwildcards megaflows with an exact
> match on the dst IP and L4 ports. Any new connection with a different
> client-side L4 port encounters a datapath flow miss and an upcall to
> ovs-vswitchd.
> 
> These fields (dst IP and port) were saved to registers to solve a
> problem in the LB hairpin use case when different VIPs share
> overlapping backend+port [0]. The change [0] may not have had as wide a
> performance impact at the time, because the match condition
> "REGBIT_CONNTRACK_NAT == 1" was then set only for established and
> NATted traffic. The impact is more obvious now because
> REGBIT_CONNTRACK_NAT is set for all IP traffic (if any VIP is
> configured on the LS) since commit [1], after several other indirectly
> related optimizations and refactors.
> 
> The changes that introduced the performance problem have value of their
> own (they fix problems or optimize performance), so we don't want to
> revert any of them (and it is also not straightforward to revert any of
> them, because there have been many changes and refactors on top).
> 
> Change [0] itself added an alternative way to solve the overlapping
> backends problem, which uses ct fields instead of saving the dst IP and
> port to registers. This patch switches entirely to that approach and
> removes the flows/actions that save the dst IP and port, avoiding the
> dataplane performance problem for short-lived connections.
> 
> (With this approach, matching on the DNAT ct_state is not HW-offload
> friendly, so those flows may not be offloaded; this is expected to be
> addressed in a follow-up patch.)
> 
> [0] ce0ef8d59850 ("Properly handle hairpin traffic for VIPs with shared backends.")
> [1] 0038579d1928 ("northd: Optimize ct nat for load balancer traffic.")
> 
> Signed-off-by: Han Zhou <hzhou@ovn.org>
> ---

I think the main concern I have is that this forces us to choose between:
a. non-HW-offload-friendly flows (reduced performance)
b. less functionality (with the knob in patch 3/3 set to false).

Change [0] was added to address the case when a service in kubernetes is
exposed via two different k8s services objects that share the same
endpoints.  That translates in ovn-k8s to two different OVN load
balancer VIPs that share the same backends.  For such cases, if the
service is being accessed by one of its own backends we need to be able
to differentiate based on the VIP address it used to connect to.
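
As a sketch of that case (hypothetical names and addresses, purely for
illustration of the scenario described above):

```shell
# Two k8s services exposing the same endpoint translate to two OVN load
# balancer VIPs sharing one backend:
ovn-nbctl lb-add lb-svc1 10.96.0.10:80 10.244.0.5:8080 tcp
ovn-nbctl lb-add lb-svc2 10.96.0.20:80 10.244.0.5:8080 tcp
ovn-nbctl ls-lb-add ls0 lb-svc1
ovn-nbctl ls-lb-add ls0 lb-svc2
# If the backend pod 10.244.0.5 connects to either VIP, the traffic must
# be hairpinned.  After DNAT both connections look identical
# (10.244.0.5 -> 10.244.0.5:8080), so the VIP originally used can only be
# recovered from the ct original-direction tuple
# (ct_nw_dst()/ct_tp_dst()) -- or, before this patch, from the registers.
```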

CC: Tim Rozet, Dan Williams for some more input from the ovn-k8s side on
how common it is that an OVN-networked pod accesses two (or more)
services that might have the pod itself as a backend.

If this turns out to be mandatory I guess we might want to also look
into alternatives like:
- getting help from the HW to offload matches like ct_tuple()
- limiting the impact of "a." only to some load balancers (e.g., would
it help to use different hairpin lookup tables for such load balancers?)

Thanks,
Dumitru

>  controller/lflow.c           |  74 ++----------
>  include/ovn/logical-fields.h |   5 -
>  lib/lb.c                     |   3 -
>  lib/lb.h                     |   3 -
>  northd/northd.c              |  95 ++++-----------
>  northd/ovn-northd.8.xml      |  30 +----
>  tests/ovn-northd.at          |  78 ++++---------
>  tests/ovn.at                 | 218 +++++++++++++----------------------
>  8 files changed, 141 insertions(+), 365 deletions(-)
> 
> diff --git a/controller/lflow.c b/controller/lflow.c
> index 934b23698..a44f6d056 100644
> --- a/controller/lflow.c
> +++ b/controller/lflow.c
> @@ -1932,10 +1932,6 @@ add_lb_vip_hairpin_reply_action(struct in6_addr *vip6, ovs_be32 vip,
>  }
>  
>  /* Adds flows to detect hairpin sessions.
> - *
> - * For backwards compatibilty with older ovn-northd versions, uses
> - * ct_nw_dst(), ct_ipv6_dst(), ct_tp_dst(), otherwise uses the
> - * original destination tuple stored by ovn-northd.
>   */
>  static void
>  add_lb_vip_hairpin_flows(struct ovn_controller_lb *lb,
> @@ -1956,10 +1952,8 @@ add_lb_vip_hairpin_flows(struct ovn_controller_lb *lb,
>      /* Matching on ct_nw_dst()/ct_ipv6_dst()/ct_tp_dst() requires matching
>       * on ct_state first.
>       */
> -    if (!lb->hairpin_orig_tuple) {
> -        uint32_t ct_state = OVS_CS_F_TRACKED | OVS_CS_F_DST_NAT;
> -        match_set_ct_state_masked(&hairpin_match, ct_state, ct_state);
> -    }
> +    uint32_t ct_state = OVS_CS_F_TRACKED | OVS_CS_F_DST_NAT;
> +    match_set_ct_state_masked(&hairpin_match, ct_state, ct_state);
>  
>      if (IN6_IS_ADDR_V4MAPPED(&lb_vip->vip)) {
>          ovs_be32 bip4 = in6_addr_get_mapped_ipv4(&lb_backend->ip);
> @@ -1971,14 +1965,7 @@ add_lb_vip_hairpin_flows(struct ovn_controller_lb *lb,
>          match_set_dl_type(&hairpin_match, htons(ETH_TYPE_IP));
>          match_set_nw_src(&hairpin_match, bip4);
>          match_set_nw_dst(&hairpin_match, bip4);
> -
> -        if (!lb->hairpin_orig_tuple) {
> -            match_set_ct_nw_dst(&hairpin_match, vip4);
> -        } else {
> -            match_set_reg(&hairpin_match,
> -                          MFF_LOG_LB_ORIG_DIP_IPV4 - MFF_LOG_REG0,
> -                          ntohl(vip4));
> -        }
> +        match_set_ct_nw_dst(&hairpin_match, vip4);
>  
>          add_lb_vip_hairpin_reply_action(NULL, snat_vip4, lb_proto,
>                                          lb_backend->port,
> @@ -1993,17 +1980,7 @@ add_lb_vip_hairpin_flows(struct ovn_controller_lb *lb,
>          match_set_dl_type(&hairpin_match, htons(ETH_TYPE_IPV6));
>          match_set_ipv6_src(&hairpin_match, bip6);
>          match_set_ipv6_dst(&hairpin_match, bip6);
> -
> -        if (!lb->hairpin_orig_tuple) {
> -            match_set_ct_ipv6_dst(&hairpin_match, &lb_vip->vip);
> -        } else {
> -            ovs_be128 vip6_value;
> -
> -            memcpy(&vip6_value, &lb_vip->vip, sizeof vip6_value);
> -            match_set_xxreg(&hairpin_match,
> -                            MFF_LOG_LB_ORIG_DIP_IPV6 - MFF_LOG_XXREG0,
> -                            ntoh128(vip6_value));
> -        }
> +        match_set_ct_ipv6_dst(&hairpin_match, &lb_vip->vip);
>  
>          add_lb_vip_hairpin_reply_action(snat_vip6, 0, lb_proto,
>                                          lb_backend->port,
> @@ -2014,14 +1991,8 @@ add_lb_vip_hairpin_flows(struct ovn_controller_lb *lb,
>      if (lb_backend->port) {
>          match_set_nw_proto(&hairpin_match, lb_proto);
>          match_set_tp_dst(&hairpin_match, htons(lb_backend->port));
> -        if (!lb->hairpin_orig_tuple) {
> -            match_set_ct_nw_proto(&hairpin_match, lb_proto);
> -            match_set_ct_tp_dst(&hairpin_match, htons(lb_vip->vip_port));
> -        } else {
> -            match_set_reg_masked(&hairpin_match,
> -                                 MFF_LOG_LB_ORIG_TP_DPORT - MFF_REG0,
> -                                 lb_vip->vip_port, UINT16_MAX);
> -        }
> +        match_set_ct_nw_proto(&hairpin_match, lb_proto);
> +        match_set_ct_tp_dst(&hairpin_match, htons(lb_vip->vip_port));
>      }
>  
>      /* In the original direction, only match on traffic that was already
> @@ -2218,44 +2189,23 @@ add_lb_ct_snat_hairpin_vip_flow(struct ovn_controller_lb *lb,
>      /* Matching on ct_nw_dst()/ct_ipv6_dst()/ct_tp_dst() requires matching
>       * on ct_state first.
>       */
> -    if (!lb->hairpin_orig_tuple) {
> -        uint32_t ct_state = OVS_CS_F_TRACKED | OVS_CS_F_DST_NAT;
> -        match_set_ct_state_masked(&match, ct_state, ct_state);
> -    }
> +    uint32_t ct_state = OVS_CS_F_TRACKED | OVS_CS_F_DST_NAT;
> +    match_set_ct_state_masked(&match, ct_state, ct_state);
>  
>      if (address_family == AF_INET) {
>          ovs_be32 vip4 = in6_addr_get_mapped_ipv4(&lb_vip->vip);
>  
>          match_set_dl_type(&match, htons(ETH_TYPE_IP));
> -
> -        if (!lb->hairpin_orig_tuple) {
> -            match_set_ct_nw_dst(&match, vip4);
> -        } else {
> -            match_set_reg(&match, MFF_LOG_LB_ORIG_DIP_IPV4 - MFF_LOG_REG0,
> -                          ntohl(vip4));
> -        }
> +        match_set_ct_nw_dst(&match, vip4);
>      } else {
>          match_set_dl_type(&match, htons(ETH_TYPE_IPV6));
> -        if (!lb->hairpin_orig_tuple) {
> -            match_set_ct_ipv6_dst(&match, &lb_vip->vip);
> -        } else {
> -            ovs_be128 vip6_value;
> -
> -            memcpy(&vip6_value, &lb_vip->vip, sizeof vip6_value);
> -            match_set_xxreg(&match, MFF_LOG_LB_ORIG_DIP_IPV6 - MFF_LOG_XXREG0,
> -                            ntoh128(vip6_value));
> -        }
> +        match_set_ct_ipv6_dst(&match, &lb_vip->vip);
>      }
>  
>      match_set_nw_proto(&match, lb_proto);
>      if (lb_vip->vip_port) {
> -        if (!lb->hairpin_orig_tuple) {
> -            match_set_ct_nw_proto(&match, lb_proto);
> -            match_set_ct_tp_dst(&match, htons(lb_vip->vip_port));
> -        } else {
> -            match_set_reg_masked(&match, MFF_LOG_LB_ORIG_TP_DPORT - MFF_REG0,
> -                                 lb_vip->vip_port, UINT16_MAX);
> -        }
> +        match_set_ct_nw_proto(&match, lb_proto);
> +        match_set_ct_tp_dst(&match, htons(lb_vip->vip_port));
>      }
>  
>      /* We need to "add_or_append" flows because this match may form part
> diff --git a/include/ovn/logical-fields.h b/include/ovn/logical-fields.h
> index bfb07ebef..1d5d4fbe3 100644
> --- a/include/ovn/logical-fields.h
> +++ b/include/ovn/logical-fields.h
> @@ -45,11 +45,6 @@ enum ovn_controller_event {
>   *
>   * Make sure these don't overlap with the logical fields! */
>  #define MFF_LOG_REG0             MFF_REG0
> -#define MFF_LOG_LB_ORIG_DIP_IPV4 MFF_REG1
> -#define MFF_LOG_LB_ORIG_TP_DPORT MFF_REG2
> -
> -#define MFF_LOG_XXREG0           MFF_XXREG0
> -#define MFF_LOG_LB_ORIG_DIP_IPV6 MFF_XXREG1
>  
>  #define MFF_N_LOG_REGS 10
>  
> diff --git a/lib/lb.c b/lib/lb.c
> index 7b0ed1abe..63eb5cf3d 100644
> --- a/lib/lb.c
> +++ b/lib/lb.c
> @@ -301,9 +301,6 @@ ovn_controller_lb_create(const struct sbrec_load_balancer *sbrec_lb)
>       */
>      lb->n_vips = n_vips;
>  
> -    lb->hairpin_orig_tuple = smap_get_bool(&sbrec_lb->options,
> -                                           "hairpin_orig_tuple",
> -                                           false);
>      ovn_lb_get_hairpin_snat_ip(&sbrec_lb->header_.uuid, &sbrec_lb->options,
>                                 &lb->hairpin_snat_ips);
>      return lb;
> diff --git a/lib/lb.h b/lib/lb.h
> index 832ed31fb..424dd789e 100644
> --- a/lib/lb.h
> +++ b/lib/lb.h
> @@ -98,9 +98,6 @@ struct ovn_controller_lb {
>  
>      struct ovn_lb_vip *vips;
>      size_t n_vips;
> -    bool hairpin_orig_tuple; /* True if ovn-northd stores the original
> -                              * destination tuple in registers.
> -                              */
>  
>      struct lport_addresses hairpin_snat_ips; /* IP (v4 and/or v6) to be used
>                                                * as source for hairpinned
> diff --git a/northd/northd.c b/northd/northd.c
> index 6997c280c..79fcd0aaa 100644
> --- a/northd/northd.c
> +++ b/northd/northd.c
> @@ -211,10 +211,6 @@ enum ovn_stage {
>  #define REGBIT_FROM_RAMP          "reg0[14]"
>  #define REGBIT_PORT_SEC_DROP      "reg0[15]"
>  
> -#define REG_ORIG_DIP_IPV4         "reg1"
> -#define REG_ORIG_DIP_IPV6         "xxreg1"
> -#define REG_ORIG_TP_DPORT         "reg2[0..15]"
> -
>  /* Register definitions for switches and routers. */
>  
>  /* Indicate that this packet has been recirculated using egress
> @@ -266,26 +262,26 @@ enum ovn_stage {
>   * OVS register usage:
>   *
>   * Logical Switch pipeline:
> - * +----+----------------------------------------------+---+------------------+
> - * | R0 |     REGBIT_{CONNTRACK/DHCP/DNS}              |   |                  |
> - * |    |     REGBIT_{HAIRPIN/HAIRPIN_REPLY}           |   |                  |
> - * |    | REGBIT_ACL_HINT_{ALLOW_NEW/ALLOW/DROP/BLOCK} |   |                  |
> - * |    |     REGBIT_ACL_LABEL                         | X |                  |
> - * +----+----------------------------------------------+ X |                  |
> - * | R1 |         ORIG_DIP_IPV4 (>= IN_STATEFUL)       | R |                  |
> - * +----+----------------------------------------------+ E |                  |
> - * | R2 |         ORIG_TP_DPORT (>= IN_STATEFUL)       | G |                  |
> - * +----+----------------------------------------------+ 0 |                  |
> - * | R3 |                  ACL LABEL                   |   |                  |
> - * +----+----------------------------------------------+---+------------------+
> - * | R4 |                   UNUSED                     |   |                  |
> - * +----+----------------------------------------------+ X |   ORIG_DIP_IPV6  |
> - * | R5 |                   UNUSED                     | X | (>= IN_STATEFUL) |
> - * +----+----------------------------------------------+ R |                  |
> - * | R6 |                   UNUSED                     | E |                  |
> - * +----+----------------------------------------------+ G |                  |
> - * | R7 |                   UNUSED                     | 1 |                  |
> - * +----+----------------------------------------------+---+------------------+
> + * +----+----------------------------------------------+
> + * | R0 |     REGBIT_{CONNTRACK/DHCP/DNS}              |
> + * |    |     REGBIT_{HAIRPIN/HAIRPIN_REPLY}           |
> + * |    | REGBIT_ACL_HINT_{ALLOW_NEW/ALLOW/DROP/BLOCK} |
> + * |    |     REGBIT_ACL_LABEL                         |
> + * +----+----------------------------------------------+
> + * | R1 |                   UNUSED                     |
> + * +----+----------------------------------------------+
> + * | R2 |                   UNUSED                     |
> + * +----+----------------------------------------------+
> + * | R3 |                  ACL LABEL                   |
> + * +----+----------------------------------------------+
> + * | R4 |                   UNUSED                     |
> + * +----+----------------------------------------------+
> + * | R5 |                   UNUSED                     |
> + * +----+----------------------------------------------+
> + * | R6 |                   UNUSED                     |
> + * +----+----------------------------------------------+
> + * | R7 |                   UNUSED                     |
> + * +----+----------------------------------------------+
>   * | R8 |                   UNUSED                     |
>   * +----+----------------------------------------------+
>   * | R9 |                   UNUSED                     |
> @@ -4288,7 +4284,7 @@ sync_lbs(struct northd_input *input_data, struct ovsdb_idl_txn *ovnsb_txn,
>           */
>          struct smap options;
>          smap_clone(&options, &lb->nlb->options);
> -        smap_replace(&options, "hairpin_orig_tuple", "true");
> +        smap_replace(&options, "hairpin_orig_tuple", "false");
>  
>          struct sbrec_datapath_binding **lb_dps =
>              xmalloc(lb->n_nb_ls * sizeof *lb_dps);
> @@ -5917,42 +5913,14 @@ build_pre_stateful(struct ovn_datapath *od,
>      ovn_lflow_add(lflows, od, S_SWITCH_OUT_PRE_STATEFUL, 0, "1", "next;");
>  
>      const char *ct_lb_action = features->ct_no_masked_label
> -                               ? "ct_lb_mark"
> -                               : "ct_lb";
> -    const char *lb_protocols[] = {"tcp", "udp", "sctp"};
> -    struct ds actions = DS_EMPTY_INITIALIZER;
> -    struct ds match = DS_EMPTY_INITIALIZER;
> -
> -    for (size_t i = 0; i < ARRAY_SIZE(lb_protocols); i++) {
> -        ds_clear(&match);
> -        ds_clear(&actions);
> -        ds_put_format(&match, REGBIT_CONNTRACK_NAT" == 1 && ip4 && %s",
> -                      lb_protocols[i]);
> -        ds_put_format(&actions, REG_ORIG_DIP_IPV4 " = ip4.dst; "
> -                                REG_ORIG_TP_DPORT " = %s.dst; %s;",
> -                      lb_protocols[i], ct_lb_action);
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_PRE_STATEFUL, 120,
> -                      ds_cstr(&match), ds_cstr(&actions));
> -
> -        ds_clear(&match);
> -        ds_clear(&actions);
> -        ds_put_format(&match, REGBIT_CONNTRACK_NAT" == 1 && ip6 && %s",
> -                      lb_protocols[i]);
> -        ds_put_format(&actions, REG_ORIG_DIP_IPV6 " = ip6.dst; "
> -                                REG_ORIG_TP_DPORT " = %s.dst; %s;",
> -                      lb_protocols[i], ct_lb_action);
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_PRE_STATEFUL, 120,
> -                      ds_cstr(&match), ds_cstr(&actions));
> -    }
> -
> -    ds_clear(&actions);
> -    ds_put_format(&actions, "%s;", ct_lb_action);
> +                               ? "ct_lb_mark;"
> +                               : "ct_lb;";
>  
>      ovn_lflow_add(lflows, od, S_SWITCH_IN_PRE_STATEFUL, 110,
> -                  REGBIT_CONNTRACK_NAT" == 1", ds_cstr(&actions));
> +                  REGBIT_CONNTRACK_NAT" == 1", ct_lb_action);
>  
>      ovn_lflow_add(lflows, od, S_SWITCH_OUT_PRE_STATEFUL, 110,
> -                  REGBIT_CONNTRACK_NAT" == 1", ds_cstr(&actions));
> +                  REGBIT_CONNTRACK_NAT" == 1", ct_lb_action);
>  
>      /* If REGBIT_CONNTRACK_DEFRAG is set as 1, then the packets should be
>       * sent to conntrack for tracking and defragmentation. */
> @@ -5961,9 +5929,6 @@ build_pre_stateful(struct ovn_datapath *od,
>  
>      ovn_lflow_add(lflows, od, S_SWITCH_OUT_PRE_STATEFUL, 100,
>                    REGBIT_CONNTRACK_DEFRAG" == 1", "ct_next;");
> -
> -    ds_destroy(&actions);
> -    ds_destroy(&match);
>  }
>  
>  static void
> @@ -6879,12 +6844,8 @@ build_lb_rules(struct hmap *lflows, struct ovn_northd_lb *lb, bool ct_lb_mark,
>           */
>          if (IN6_IS_ADDR_V4MAPPED(&lb_vip->vip)) {
>              ip_match = "ip4";
> -            ds_put_format(action, REG_ORIG_DIP_IPV4 " = %s; ",
> -                          lb_vip->vip_str);
>          } else {
>              ip_match = "ip6";
> -            ds_put_format(action, REG_ORIG_DIP_IPV6 " = %s; ",
> -                          lb_vip->vip_str);
>          }
>  
>          const char *proto = NULL;
> @@ -6897,12 +6858,6 @@ build_lb_rules(struct hmap *lflows, struct ovn_northd_lb *lb, bool ct_lb_mark,
>                      proto = "sctp";
>                  }
>              }
> -
> -            /* Store the original destination port to be used when generating
> -             * hairpin flows.
> -             */
> -            ds_put_format(action, REG_ORIG_TP_DPORT " = %"PRIu16"; ",
> -                          lb_vip->vip_port);
>          }
>  
>          /* New connections in Ingress table. */
> diff --git a/northd/ovn-northd.8.xml b/northd/ovn-northd.8.xml
> index 59c584710..d9b99a67f 100644
> --- a/northd/ovn-northd.8.xml
> +++ b/northd/ovn-northd.8.xml
> @@ -535,26 +535,11 @@
>        traffic to the next table.
>      </p>
>      <ul>
> -      <li>
> -        Priority-120 flows that send the packets to connection tracker using
> -        <code>ct_lb_mark;</code> as the action so that the already established
> -        traffic destined to the load balancer VIP gets DNATted based on a hint
> -        provided by the previous tables (with a match
> -        for <code>reg0[2] == 1</code> and on supported load balancer protocols
> -        and address families).  For IPv4 traffic the flows also load the
> -        original destination IP and transport port in registers
> -        <code>reg1</code> and <code>reg2</code>.  For IPv6 traffic the flows
> -        also load the original destination IP and transport port in
> -        registers <code>xxreg1</code> and <code>reg2</code>.
> -      </li>
> -
>        <li>
>           A priority-110 flow sends the packets to connection tracker based
>           on a hint provided by the previous tables
>           (with a match for <code>reg0[2] == 1</code>) by using the
> -         <code>ct_lb_mark;</code> action.  This flow is added to handle
> -         the traffic for load balancer VIPs whose protocol is not defined
> -         (mainly for ICMP traffic).
> +         <code>ct_lb_mark;</code> action.
>        </li>
>  
>        <li>
> @@ -877,11 +862,7 @@
>          of <var>VIP</var>. If health check is enabled, then <var>args</var>
>          will only contain those endpoints whose service monitor status entry
>          in <code>OVN_Southbound</code> db is either <code>online</code> or
> -        empty.  For IPv4 traffic the flow also loads the original destination
> -        IP and transport port in registers <code>reg1</code> and
> -        <code>reg2</code>.  For IPv6 traffic the flow also loads the original
> -        destination IP and transport port in registers <code>xxreg1</code> and
> -        <code>reg2</code>.
> +        empty.
>          The above flow is created even if the load balancer is attached to a
>          logical router connected to the current logical switch and
>          the <code>install_ls_lb_from_router</code> variable in
> @@ -897,11 +878,6 @@
>          VIP</var></code>. The action on this flow is <code>
>          ct_lb_mark(<var>args</var>)</code>, where <var>args</var> contains comma
>          separated IP addresses of the same address family as <var>VIP</var>.
> -        For IPv4 traffic the flow also loads the original destination
> -        IP and transport port in registers <code>reg1</code> and
> -        <code>reg2</code>.  For IPv6 traffic the flow also loads the original
> -        destination IP and transport port in registers <code>xxreg1</code> and
> -        <code>reg2</code>.
>          The above flow is created even if the load balancer is attached to a
>          logical router connected to the current logical switch and
>          the <code>install_ls_lb_from_router</code> variable in
> @@ -1919,7 +1895,7 @@ output;
>  
>      <ul>
>        <li>
> -        A Priority-120 flow that send the packets to connection tracker using
> +        A Priority-110 flow that send the packets to connection tracker using
>          <code>ct_lb_mark;</code> as the action so that the already established
>          traffic gets unDNATted from the backend IP to the load balancer VIP
>          based on a hint provided by the previous tables with a match
> diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at
> index 033b58b8c..ed6ac3b17 100644
> --- a/tests/ovn-northd.at
> +++ b/tests/ovn-northd.at
> @@ -1220,7 +1220,7 @@ check ovn-nbctl --wait=sb ls-lb-add sw0 lb1
>  AT_CAPTURE_FILE([sbflows])
>  OVS_WAIT_FOR_OUTPUT(
>    [ovn-sbctl dump-flows sw0 | tee sbflows | grep 'priority=120.*backends' | sed 's/table=..//'], 0, [dnl
> -  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 10.0.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
> +  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
>  ])
>  
>  AS_BOX([Delete the Load_Balancer_Health_Check])
> @@ -1230,7 +1230,7 @@ wait_row_count Service_Monitor 0
>  AT_CAPTURE_FILE([sbflows2])
>  OVS_WAIT_FOR_OUTPUT(
>    [ovn-sbctl dump-flows sw0 | tee sbflows2 | grep 'priority=120.*backends' | sed 's/table=..//'], [0],
> -[  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 10.0.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
> +[  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
>  ])
>  
>  AS_BOX([Create the Load_Balancer_Health_Check again.])
> @@ -1242,7 +1242,7 @@ check ovn-nbctl --wait=sb sync
>  
>  ovn-sbctl dump-flows sw0 | grep backends | grep priority=120 > lflows.txt
>  AT_CHECK([cat lflows.txt | sed 's/table=..//'], [0], [dnl
> -  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 10.0.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
> +  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
>  ])
>  
>  AS_BOX([Get the uuid of both the service_monitor])
> @@ -1252,7 +1252,7 @@ sm_sw1_p1=$(fetch_column Service_Monitor _uuid logical_port=sw1-p1)
>  AT_CAPTURE_FILE([sbflows3])
>  OVS_WAIT_FOR_OUTPUT(
>    [ovn-sbctl dump-flows sw0 | tee sbflows 3 | grep 'priority=120.*backends' | sed 's/table=..//'], [0],
> -[  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 10.0.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
> +[  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
>  ])
>  
>  AS_BOX([Set the service monitor for sw1-p1 to offline])
> @@ -1263,7 +1263,7 @@ check ovn-nbctl --wait=sb sync
>  AT_CAPTURE_FILE([sbflows4])
>  OVS_WAIT_FOR_OUTPUT(
>    [ovn-sbctl dump-flows sw0 | tee sbflows4 | grep 'priority=120.*backends' | sed 's/table=..//'], [0],
> -[  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 10.0.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.3:80);)
> +[  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:80);)
>  ])
>  
>  AS_BOX([Set the service monitor for sw0-p1 to offline])
> @@ -1292,7 +1292,7 @@ check ovn-nbctl --wait=sb sync
>  AT_CAPTURE_FILE([sbflows7])
>  OVS_WAIT_FOR_OUTPUT(
>    [ovn-sbctl dump-flows sw0 | tee sbflows7 | grep backends | grep priority=120 | sed 's/table=..//'], 0,
> -[  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 10.0.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
> +[  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
>  ])
>  
>  AS_BOX([Set the service monitor for sw1-p1 to error])
> @@ -1303,7 +1303,7 @@ check ovn-nbctl --wait=sb sync
>  ovn-sbctl dump-flows sw0 | grep "ip4.dst == 10.0.0.10 && tcp.dst == 80" \
>  | grep priority=120 > lflows.txt
>  AT_CHECK([cat lflows.txt | sed 's/table=..//'], [0], [dnl
> -  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 10.0.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.3:80);)
> +  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:80);)
>  ])
>  
>  AS_BOX([Add one more vip to lb1])
> @@ -1329,8 +1329,8 @@ AT_CAPTURE_FILE([sbflows9])
>  OVS_WAIT_FOR_OUTPUT(
>    [ovn-sbctl dump-flows sw0 | tee sbflows9 | grep backends | grep priority=120 | sed 's/table=..//' | sort],
>    0,
> -[  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 10.0.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.3:80);)
> -  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.40 && tcp.dst == 1000), action=(reg0[[1]] = 0; reg1 = 10.0.0.40; reg2[[0..15]] = 1000; ct_lb_mark(backends=10.0.0.3:1000);)
> +[  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:80);)
> +  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.40 && tcp.dst == 1000), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:1000);)
>  ])
>  
>  AS_BOX([Set the service monitor for sw1-p1 to online])
> @@ -1343,8 +1343,8 @@ AT_CAPTURE_FILE([sbflows10])
>  OVS_WAIT_FOR_OUTPUT(
>    [ovn-sbctl dump-flows sw0 | tee sbflows10 | grep backends | grep priority=120 | sed 's/table=..//' | sort],
>    0,
> -[  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 10.0.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
> -  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.40 && tcp.dst == 1000), action=(reg0[[1]] = 0; reg1 = 10.0.0.40; reg2[[0..15]] = 1000; ct_lb_mark(backends=10.0.0.3:1000,20.0.0.3:80);)
> +[  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
> +  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.40 && tcp.dst == 1000), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:1000,20.0.0.3:80);)
>  ])
>  
>  AS_BOX([Associate lb1 to sw1])
> @@ -1353,8 +1353,8 @@ AT_CAPTURE_FILE([sbflows11])
>  OVS_WAIT_FOR_OUTPUT(
>    [ovn-sbctl dump-flows sw1 | tee sbflows11 | grep backends | grep priority=120 | sed 's/table=..//' | sort],
>    0, [dnl
> -  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 10.0.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
> -  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.40 && tcp.dst == 1000), action=(reg0[[1]] = 0; reg1 = 10.0.0.40; reg2[[0..15]] = 1000; ct_lb_mark(backends=10.0.0.3:1000,20.0.0.3:80);)
> +  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
> +  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.40 && tcp.dst == 1000), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:1000,20.0.0.3:80);)
>  ])
>  
>  AS_BOX([Now create lb2 same as lb1 but udp protocol.])
> @@ -2555,7 +2555,7 @@ check_column "" sb:datapath_binding load_balancers external_ids:name=sw1
>  echo
>  echo "__file__:__line__: Set hairpin_snat_ip on lb1 and check that SB DB is updated."
>  check ovn-nbctl --wait=sb set Load_Balancer lb1 options:hairpin_snat_ip="42.42.42.42 4242::4242"
> -check_column "$lb1_uuid" sb:load_balancer _uuid name=lb1 options='{hairpin_orig_tuple="true", hairpin_snat_ip="42.42.42.42 4242::4242"}'
> +check_column "$lb1_uuid" sb:load_balancer _uuid name=lb1 options='{hairpin_orig_tuple="false", hairpin_snat_ip="42.42.42.42 4242::4242"}'
>  
>  echo
>  echo "__file__:__line__: Delete load balancers lb1 and lbg1 and check that datapath sw1's load_balancers is still empty."
> @@ -3947,18 +3947,12 @@ check_stateful_flows() {
>    table=? (ls_in_pre_stateful ), priority=0    , match=(1), action=(next;)
>    table=? (ls_in_pre_stateful ), priority=100  , match=(reg0[[0]] == 1), action=(ct_next;)
>    table=? (ls_in_pre_stateful ), priority=110  , match=(reg0[[2]] == 1), action=(ct_lb_mark;)
> -  table=? (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && sctp), action=(reg1 = ip4.dst; reg2[[0..15]] = sctp.dst; ct_lb_mark;)
> -  table=? (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && tcp), action=(reg1 = ip4.dst; reg2[[0..15]] = tcp.dst; ct_lb_mark;)
> -  table=? (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && udp), action=(reg1 = ip4.dst; reg2[[0..15]] = udp.dst; ct_lb_mark;)
> -  table=? (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && sctp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = sctp.dst; ct_lb_mark;)
> -  table=? (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && tcp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = tcp.dst; ct_lb_mark;)
> -  table=? (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && udp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = udp.dst; ct_lb_mark;)
>  ])
>  
>      AT_CHECK([grep "ls_in_lb" sw0flows | sort | sed 's/table=../table=??/'], [0], [dnl
>    table=??(ls_in_lb           ), priority=0    , match=(1), action=(next;)
> -  table=??(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 10.0.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.4:8080);)
> -  table=??(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.20 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 10.0.0.20; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.40:8080);)
> +  table=??(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.4:8080);)
> +  table=??(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.20 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.40:8080);)
>  ])
>  
>      AT_CHECK([grep "ls_in_stateful" sw0flows | sort | sed 's/table=../table=??/'], [0], [dnl
> @@ -4019,12 +4013,6 @@ AT_CHECK([grep "ls_in_pre_stateful" sw0flows | sort | sed 's/table=./table=?/'],
>    table=? (ls_in_pre_stateful ), priority=0    , match=(1), action=(next;)
>    table=? (ls_in_pre_stateful ), priority=100  , match=(reg0[[0]] == 1), action=(ct_next;)
>    table=? (ls_in_pre_stateful ), priority=110  , match=(reg0[[2]] == 1), action=(ct_lb_mark;)
> -  table=? (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && sctp), action=(reg1 = ip4.dst; reg2[[0..15]] = sctp.dst; ct_lb_mark;)
> -  table=? (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && tcp), action=(reg1 = ip4.dst; reg2[[0..15]] = tcp.dst; ct_lb_mark;)
> -  table=? (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && udp), action=(reg1 = ip4.dst; reg2[[0..15]] = udp.dst; ct_lb_mark;)
> -  table=? (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && sctp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = sctp.dst; ct_lb_mark;)
> -  table=? (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && tcp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = tcp.dst; ct_lb_mark;)
> -  table=? (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && udp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = udp.dst; ct_lb_mark;)
>  ])
>  
>  AT_CHECK([grep "ls_in_lb" sw0flows | sort | sed 's/table=../table=??/'], [0], [dnl
> @@ -6392,7 +6380,7 @@ AT_CHECK([grep -e "ls_in_acl" lsflows | sed 's/table=../table=??/' | sort], [0],
>  
>  AT_CHECK([grep -e "ls_in_lb" lsflows | sed 's/table=../table=??/' | sort], [0], [dnl
>    table=??(ls_in_lb           ), priority=0    , match=(1), action=(next;)
> -  table=??(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 10.0.0.2), action=(reg0[[1]] = 0; reg1 = 10.0.0.2; ct_lb_mark(backends=10.0.0.10);)
> +  table=??(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 10.0.0.2), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.10);)
>  ])
>  
>  AT_CHECK([grep -e "ls_in_stateful" lsflows | sed 's/table=../table=??/' | sort], [0], [dnl
> @@ -6445,7 +6433,7 @@ AT_CHECK([grep -e "ls_in_acl" lsflows | sed 's/table=../table=??/' | sort], [0],
>  
>  AT_CHECK([grep -e "ls_in_lb" lsflows | sed 's/table=../table=??/' | sort], [0], [dnl
>    table=??(ls_in_lb           ), priority=0    , match=(1), action=(next;)
> -  table=??(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 10.0.0.2), action=(reg0[[1]] = 0; reg1 = 10.0.0.2; ct_lb_mark(backends=10.0.0.10);)
> +  table=??(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 10.0.0.2), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.10);)
>  ])
>  
>  AT_CHECK([grep -e "ls_in_stateful" lsflows | sed 's/table=../table=??/' | sort], [0], [dnl
> @@ -6498,7 +6486,7 @@ AT_CHECK([grep -e "ls_in_acl" lsflows | sed 's/table=../table=??/' | sort], [0],
>  
>  AT_CHECK([grep -e "ls_in_lb" lsflows | sed 's/table=../table=??/' | sort], [0], [dnl
>    table=??(ls_in_lb           ), priority=0    , match=(1), action=(next;)
> -  table=??(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 10.0.0.2), action=(reg0[[1]] = 0; reg1 = 10.0.0.2; ct_lb_mark(backends=10.0.0.10);)
> +  table=??(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 10.0.0.2), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.10);)
>  ])
>  
>  AT_CHECK([grep -e "ls_in_stateful" lsflows | sed 's/table=../table=??/' | sort], [0], [dnl
> @@ -7468,14 +7456,8 @@ check ovn-nbctl --wait=sb sync
>  AT_CHECK([ovn-sbctl lflow-list | grep -e natted -e ct_lb], [0], [dnl
>    table=6 (lr_in_dnat         ), priority=110  , match=(ct.est && ip4 && reg0 == 66.66.66.66 && ct_mark.natted == 1), action=(next;)
>    table=6 (lr_in_dnat         ), priority=110  , match=(ct.new && ip4 && reg0 == 66.66.66.66), action=(ct_lb_mark(backends=42.42.42.2);)
> -  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && sctp), action=(reg1 = ip4.dst; reg2[[0..15]] = sctp.dst; ct_lb_mark;)
> -  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && tcp), action=(reg1 = ip4.dst; reg2[[0..15]] = tcp.dst; ct_lb_mark;)
> -  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && udp), action=(reg1 = ip4.dst; reg2[[0..15]] = udp.dst; ct_lb_mark;)
> -  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && sctp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = sctp.dst; ct_lb_mark;)
> -  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && tcp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = tcp.dst; ct_lb_mark;)
> -  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && udp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = udp.dst; ct_lb_mark;)
>    table=6 (ls_in_pre_stateful ), priority=110  , match=(reg0[[2]] == 1), action=(ct_lb_mark;)
> -  table=11(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 66.66.66.66), action=(reg0[[1]] = 0; reg1 = 66.66.66.66; ct_lb_mark(backends=42.42.42.2);)
> +  table=11(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 66.66.66.66), action=(reg0[[1]] = 0; ct_lb_mark(backends=42.42.42.2);)
>    table=2 (ls_out_pre_stateful), priority=110  , match=(reg0[[2]] == 1), action=(ct_lb_mark;)
>  ])
>  
> @@ -7485,14 +7467,8 @@ check ovn-nbctl --wait=sb sync
>  AT_CHECK([ovn-sbctl lflow-list | grep -e natted -e ct_lb], [0], [dnl
>    table=6 (lr_in_dnat         ), priority=110  , match=(ct.est && ip4 && reg0 == 66.66.66.66 && ct_label.natted == 1), action=(next;)
>    table=6 (lr_in_dnat         ), priority=110  , match=(ct.new && ip4 && reg0 == 66.66.66.66), action=(ct_lb(backends=42.42.42.2);)
> -  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && sctp), action=(reg1 = ip4.dst; reg2[[0..15]] = sctp.dst; ct_lb;)
> -  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && tcp), action=(reg1 = ip4.dst; reg2[[0..15]] = tcp.dst; ct_lb;)
> -  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && udp), action=(reg1 = ip4.dst; reg2[[0..15]] = udp.dst; ct_lb;)
> -  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && sctp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = sctp.dst; ct_lb;)
> -  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && tcp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = tcp.dst; ct_lb;)
> -  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && udp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = udp.dst; ct_lb;)
>    table=6 (ls_in_pre_stateful ), priority=110  , match=(reg0[[2]] == 1), action=(ct_lb;)
> -  table=11(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 66.66.66.66), action=(reg0[[1]] = 0; reg1 = 66.66.66.66; ct_lb(backends=42.42.42.2);)
> +  table=11(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 66.66.66.66), action=(reg0[[1]] = 0; ct_lb(backends=42.42.42.2);)
>    table=2 (ls_out_pre_stateful), priority=110  , match=(reg0[[2]] == 1), action=(ct_lb;)
>  ])
>  
> @@ -7502,14 +7478,8 @@ check ovn-nbctl --wait=sb sync
>  AT_CHECK([ovn-sbctl lflow-list | grep -e natted -e ct_lb], [0], [dnl
>    table=6 (lr_in_dnat         ), priority=110  , match=(ct.est && ip4 && reg0 == 66.66.66.66 && ct_mark.natted == 1), action=(next;)
>    table=6 (lr_in_dnat         ), priority=110  , match=(ct.new && ip4 && reg0 == 66.66.66.66), action=(ct_lb_mark(backends=42.42.42.2);)
> -  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && sctp), action=(reg1 = ip4.dst; reg2[[0..15]] = sctp.dst; ct_lb_mark;)
> -  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && tcp), action=(reg1 = ip4.dst; reg2[[0..15]] = tcp.dst; ct_lb_mark;)
> -  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && udp), action=(reg1 = ip4.dst; reg2[[0..15]] = udp.dst; ct_lb_mark;)
> -  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && sctp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = sctp.dst; ct_lb_mark;)
> -  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && tcp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = tcp.dst; ct_lb_mark;)
> -  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && udp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = udp.dst; ct_lb_mark;)
>    table=6 (ls_in_pre_stateful ), priority=110  , match=(reg0[[2]] == 1), action=(ct_lb_mark;)
> -  table=11(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 66.66.66.66), action=(reg0[[1]] = 0; reg1 = 66.66.66.66; ct_lb_mark(backends=42.42.42.2);)
> +  table=11(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 66.66.66.66), action=(reg0[[1]] = 0; ct_lb_mark(backends=42.42.42.2);)
>    table=2 (ls_out_pre_stateful), priority=110  , match=(reg0[[2]] == 1), action=(ct_lb_mark;)
>  ])
>  
> @@ -7680,11 +7650,11 @@ AT_CAPTURE_FILE([S1flows])
>  
>  AT_CHECK([grep "ls_in_lb" S0flows | sort], [0], [dnl
>    table=11(ls_in_lb           ), priority=0    , match=(1), action=(next;)
> -  table=11(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 172.16.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 172.16.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.2:80);)
> +  table=11(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 172.16.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.2:80);)
>  ])
>  AT_CHECK([grep "ls_in_lb" S1flows | sort], [0], [dnl
>    table=11(ls_in_lb           ), priority=0    , match=(1), action=(next;)
> -  table=11(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 172.16.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 172.16.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.2:80);)
> +  table=11(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 172.16.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.2:80);)
>  ])
>  
>  s0_uuid=$(ovn-sbctl get datapath S0 _uuid)
> diff --git a/tests/ovn.at b/tests/ovn.at
> index a4a696d51..0dd9a1c2e 100644
> --- a/tests/ovn.at
> +++ b/tests/ovn.at
> @@ -23578,13 +23578,7 @@ OVS_WAIT_FOR_OUTPUT(
>    [ovn-sbctl dump-flows > sbflows
>     ovn-sbctl dump-flows sw0 | grep ct_lb_mark | grep priority=120 | sed 's/table=..//'], 0,
>    [dnl
> -  (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && sctp), action=(reg1 = ip4.dst; reg2[[0..15]] = sctp.dst; ct_lb_mark;)
> -  (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && tcp), action=(reg1 = ip4.dst; reg2[[0..15]] = tcp.dst; ct_lb_mark;)
> -  (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && udp), action=(reg1 = ip4.dst; reg2[[0..15]] = udp.dst; ct_lb_mark;)
> -  (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && sctp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = sctp.dst; ct_lb_mark;)
> -  (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && tcp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = tcp.dst; ct_lb_mark;)
> -  (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && udp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = udp.dst; ct_lb_mark;)
> -  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 10.0.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80; hash_fields="ip_dst,ip_src,tcp_dst,tcp_src");)
> +  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80; hash_fields="ip_dst,ip_src,tcp_dst,tcp_src");)
>  ])
>  
>  AT_CAPTURE_FILE([sbflows2])
> @@ -28503,7 +28497,7 @@ OVS_WAIT_UNTIL(
>  )
>  
>  AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x58585858,reg2=0x1f90/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
>  ])
>  
>  AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69], [0], [dnl
> @@ -28511,7 +28505,7 @@ NXST_FLOW reply (xid=0x8):
>  ])
>  
>  AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
> - table=70, priority=100,tcp,reg1=0x58585858,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
>  ])
>  
>  AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=68 | grep -v NXST], [1], [dnl
> @@ -28530,9 +28524,9 @@ OVS_WAIT_UNTIL(
>  )
>  
>  AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x58585858,reg2=0x1f90/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
>  ])
>  
>  AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69], [0], [dnl
> @@ -28540,8 +28534,8 @@ NXST_FLOW reply (xid=0x8):
>  ])
>  
>  AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
> - table=70, priority=100,tcp,reg1=0x58585858,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> - table=70, priority=100,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
> + table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
>  ])
>  
>  AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=68 | grep -v NXST], [1], [dnl
> @@ -28563,17 +28557,17 @@ OVS_WAIT_UNTIL(
>  )
>  
>  AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x58585858,reg2=0x1f90/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
>  ])
>  
>  AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69 | grep -v NXST], [1], [dnl
>  ])
>  
>  AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
> - table=70, priority=100,tcp,reg1=0x58585858,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> - table=70, priority=100,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
> + table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
>  ])
>  
>  check ovn-nbctl --wait=hv ls-lb-add sw0 lb-ipv4-udp
> @@ -28587,35 +28581,35 @@ OVS_WAIT_UNTIL(
>  )
>  
>  AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x58585858,reg2=0x1f90/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,udp,reg1=0x58585858,reg2=0xfc8/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
>  ])
>  
>  AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69 | grep -v NXST], [1], [dnl
>  ])
>  
>  AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
> - table=70, priority=100,tcp,reg1=0x58585858,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> - table=70, priority=100,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
> - table=70, priority=100,udp,reg1=0x58585858,reg2=0xfc8/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
>  ])
>  
>  AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x58585858,reg2=0x1f90/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,udp,reg1=0x58585858,reg2=0xfc8/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
>  ])
>  
>  AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69 | grep -v NXST], [1], [dnl
>  ])
>  
>  AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
> - table=70, priority=100,tcp,reg1=0x58585858,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> - table=70, priority=100,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
> - table=70, priority=100,udp,reg1=0x58585858,reg2=0xfc8/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
>  ])
>  
>  check ovn-nbctl --wait=hv ls-lb-add sw0 lb-ipv6-tcp
> @@ -28629,39 +28623,39 @@ OVS_WAIT_UNTIL(
>  )
>  
>  AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x58585858,reg2=0x1f90/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,udp,reg1=0x58585858,reg2=0xfc8/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
>  ])
>  
>  AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69 | grep -v NXST], [1], [dnl
>  ])
>  
>  AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
> - table=70, priority=100,tcp,reg1=0x58585858,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> - table=70, priority=100,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
> - table=70, priority=100,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
> - table=70, priority=100,udp,reg1=0x58585858,reg2=0xfc8/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
>  ])
>  
>  AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x58585858,reg2=0x1f90/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,udp,reg1=0x58585858,reg2=0xfc8/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
>  ])
>  
>  AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69 | grep -v NXST], [1], [dnl
>  ])
>  
>  AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
> - table=70, priority=100,tcp,reg1=0x58585858,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> - table=70, priority=100,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
> - table=70, priority=100,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
> - table=70, priority=100,udp,reg1=0x58585858,reg2=0xfc8/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
>  ])
>  
>  check ovn-nbctl --wait=hv ls-lb-add sw0 lb-ipv6-udp
> @@ -28675,43 +28669,43 @@ OVS_WAIT_UNTIL(
>  )
>  
>  AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x58585858,reg2=0x1f90/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,udp,reg1=0x58585858,reg2=0xfc8/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,udp6,reg2=0xfc8/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_ipv6_dst=8800::88,ct_nw_proto=17,ct_tp_dst=4040,udp6,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
>  ])
>  
>  AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69 | grep -v NXST], [1], [dnl
>  ])
>  
>  AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
> - table=70, priority=100,tcp,reg1=0x58585858,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> - table=70, priority=100,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
> - table=70, priority=100,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
> - table=70, priority=100,udp,reg1=0x58585858,reg2=0xfc8/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> - table=70, priority=100,udp6,reg2=0xfc8/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_ipv6_dst=8800::88,ct_nw_proto=17,ct_tp_dst=4040,udp6 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
>  ])
>  
>  AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x58585858,reg2=0x1f90/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,udp,reg1=0x58585858,reg2=0xfc8/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,udp6,reg2=0xfc8/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_ipv6_dst=8800::88,ct_nw_proto=17,ct_tp_dst=4040,udp6,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
>  ])
>  
>  AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69 | grep -v NXST], [1], [dnl
>  ])
>  
>  AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
> - table=70, priority=100,tcp,reg1=0x58585858,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> - table=70, priority=100,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
> - table=70, priority=100,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
> - table=70, priority=100,udp,reg1=0x58585858,reg2=0xfc8/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> - table=70, priority=100,udp6,reg2=0xfc8/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_ipv6_dst=8800::88,ct_nw_proto=17,ct_tp_dst=4040,udp6 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
>  ])
>  
>  check ovn-nbctl --wait=hv ls-lb-add sw1 lb-ipv6-udp
> @@ -28727,59 +28721,6 @@ OVS_WAIT_UNTIL(
>  )
>  
>  AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x58585858,reg2=0x1f90/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,udp,reg1=0x58585858,reg2=0xfc8/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,udp6,reg2=0xfc8/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> -])
> -
> -AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69 | grep -v NXST], [1], [dnl
> -])
> -
> -AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
> - table=70, priority=100,tcp,reg1=0x58585858,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> - table=70, priority=100,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
> - table=70, priority=100,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
> - table=70, priority=100,udp,reg1=0x58585858,reg2=0xfc8/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> - table=70, priority=100,udp6,reg2=0xfc8/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
> -])
> -
> -AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x58585858,reg2=0x1f90/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,udp,reg1=0x58585858,reg2=0xfc8/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,udp6,reg2=0xfc8/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> -])
> -
> -AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69 | grep -v NXST], [1], [dnl
> -])
> -
> -AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
> - table=70, priority=100,tcp,reg1=0x58585858,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> - table=70, priority=100,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
> - table=70, priority=100,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
> - table=70, priority=100,udp,reg1=0x58585858,reg2=0xfc8/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> - table=70, priority=100,udp6,reg2=0xfc8/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
> -])
> -
> -# Check backwards compatibility with ovn-northd versions that don't store the
> -# original destination tuple.
> -#
> -# ovn-controller should fall back to matching on ct_nw_dst()/ct_tp_dst().
> -as northd-backup ovn-appctl -t NORTHD_TYPE pause
> -as northd ovn-appctl -t NORTHD_TYPE pause
> -
> -check ovn-sbctl \
> -    -- remove load_balancer lb-ipv4-tcp options hairpin_orig_tuple \
> -    -- remove load_balancer lb-ipv6-tcp options hairpin_orig_tuple \
> -    -- remove load_balancer lb-ipv4-udp options hairpin_orig_tuple \
> -    -- remove load_balancer lb-ipv6-udp options hairpin_orig_tuple
> -
> -OVS_WAIT_FOR_OUTPUT([as hv1 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
>   table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_ipv6_dst=8800::88,ct_nw_proto=17,ct_tp_dst=4040,udp6,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
>   table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
>   table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> @@ -28788,10 +28729,10 @@ OVS_WAIT_FOR_OUTPUT([as hv1 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_a
>   table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
>  ])
>  
> -AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69 | grep -v NXST], [1], [dnl
> +AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=69 | grep -v NXST], [1], [dnl
>  ])
>  
> -OVS_WAIT_FOR_OUTPUT([as hv1 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
> +AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
>   table=70, priority=100,ct_state=+trk+dnat,ct_ipv6_dst=8800::88,ct_nw_proto=17,ct_tp_dst=4040,udp6 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
>   table=70, priority=100,ct_state=+trk+dnat,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
>   table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> @@ -28799,7 +28740,7 @@ OVS_WAIT_FOR_OUTPUT([as hv1 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_a
>   table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
>  ])
>  
> -OVS_WAIT_FOR_OUTPUT([as hv2 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
> +AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
>   table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_ipv6_dst=8800::88,ct_nw_proto=17,ct_tp_dst=4040,udp6,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
>   table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
>   table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> @@ -28811,7 +28752,7 @@ OVS_WAIT_FOR_OUTPUT([as hv2 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_a
>  AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69 | grep -v NXST], [1], [dnl
>  ])
>  
> -OVS_WAIT_FOR_OUTPUT([as hv2 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
> +AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
>   table=70, priority=100,ct_state=+trk+dnat,ct_ipv6_dst=8800::88,ct_nw_proto=17,ct_tp_dst=4040,udp6 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
>   table=70, priority=100,ct_state=+trk+dnat,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
>   table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> @@ -28819,11 +28760,6 @@ OVS_WAIT_FOR_OUTPUT([as hv2 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_a
>   table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
>  ])
>  
> -# Resume ovn-northd.
> -as northd ovn-appctl -t NORTHD_TYPE resume
> -as northd-backup ovn-appctl -t NORTHD_TYPE resume
> -check ovn-nbctl --wait=hv sync
> -
>  as hv2 ovs-vsctl del-port hv2-vif1
>  OVS_WAIT_UNTIL([test x$(ovn-nbctl lsp-get-up sw0-p2) = xdown])
>  
> @@ -28857,9 +28793,9 @@ OVS_WAIT_UNTIL(
>  )
>  
>  AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
> - table=68, priority=100,ct_mark=0x2/0x2,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,udp,reg1=0x58585858,reg2=0xfc8/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> - table=68, priority=100,ct_mark=0x2/0x2,udp6,reg2=0xfc8/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_ipv6_dst=8800::88,ct_nw_proto=17,ct_tp_dst=4040,udp6,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
> + table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
>  ])
>  
>  AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69], [0], [dnl
> @@ -28867,9 +28803,9 @@ NXST_FLOW reply (xid=0x8):
>  ])
>  
>  AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
> - table=70, priority=100,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
> - table=70, priority=100,udp,reg1=0x58585858,reg2=0xfc8/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
> - table=70, priority=100,udp6,reg2=0xfc8/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_ipv6_dst=8800::88,ct_nw_proto=17,ct_tp_dst=4040,udp6 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
> + table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
>  ])
>  
>  check ovn-nbctl --wait=hv ls-del sw0
Han Zhou July 6, 2022, 10:08 p.m. UTC | #2
On Wed, Jul 6, 2022 at 8:45 AM Dumitru Ceara <dceara@redhat.com> wrote:
>
> Hi Han,
>
> On 7/6/22 00:41, Han Zhou wrote:
> > The ls_in_pre_stateful priority 120 flow that saves dst IP and Port to
> > registers is causing a critical dataplane performance impact to
> > short-lived connections, because it unwildcards megaflows with exact
> > match on dst IP and L4 ports. Any new connections with a different
> > client side L4 port will encounter datapath flow miss and upcall to
> > ovs-vswitchd.
> >
> > These fields (dst IP and port) were saved to registers to solve a
> > problem of LB hairpin use case when different VIPs are sharing
> > overlapping backend+port [0]. The change [0] might not have as wide
> > performance impact as it is now because at that time one of the match
> > condition "REGBIT_CONNTRACK_NAT == 1" was set only for established and
> > natted traffic, while now the impact is more obvious because
> > REGBIT_CONNTRACK_NAT is now set for all IP traffic (if any VIP
> > configured on the LS) since commit [1], after several other indirectly
> > related optimizations and refactors.
> >
> > Since the changes that introduced the performance problem had their
> > own values (fixes problems or optimizes performance), so we don't want
> > to revert any of the changes (and it is also not straightforward to
> > revert any of them because there have been lots of changes and refactors
> > on top of them).
> >
> > Change [0] itself has added an alternative way to solve the overlapping
> > backends problem, which utilizes ct fields instead of saving dst IP and
> > port to registers. This patch forces to that approach and removes the
> > flows/actions that saves the dst IP and port to avoid the dataplane
> > performance problem for short-lived connections.
> >
> > (With this approach, the ct_state DNAT is not HW offload friendly, so it
> > may result in those flows not being offloaded, which is supposed to be
> > solved in a follow-up patch)
> >
> > [0] ce0ef8d59850 ("Properly handle hairpin traffic for VIPs with shared
backends.")
> > [1] 0038579d1928 ("northd: Optimize ct nat for load balancer traffic.")
> >
> > Signed-off-by: Han Zhou <hzhou@ovn.org>
> > ---
>
> I think the main concern I have is that this forces us to choose between:
> a. non hwol friendly flows (reduced performance)
> b. less functionality (with the knob in patch 3/3 set to false).
>
Thanks Dumitru for the comments! I agree the solution is not ideal, but if
we look at it from a different angle, even with a), for most pod->service
traffic the performance is still much better than how it is today (not
offloaded kernel datapath is still much better than userspace slowpath).
And *hopefully* b) is ok for most use cases to get HW-offload capability.

> Change [0] was added to address the case when a service in kubernetes is
> exposed via two different k8s services objects that share the same
> endpoints.  That translates in ovn-k8s to two different OVN load
> balancer VIPs that share the same backends.  For such cases, if the
> service is being accessed by one of its own backends we need to be able
> to differentiate based on the VIP address it used to connect to.
>
> CC: Tim Rozet, Dan Williams for some more input from the ovn-k8s side on
> how common it is that an OVN-networked pod accesses two (or more)
> services that might have the pod itself as a backend.
>

Yes, we definitely need input from ovn-k8s side. The information we got so
far: the change [0] was to fix a bug [2] reported by Tim. However, the bug
description didn't mention anything about two VIPs sharing the same
backend. Tim also mentioned in the ovn-k8s meeting last week that the
original user bug report for [2] was [3], and [3] was in fact a completely
different problem (although it is related to hairpin, too). So, I am under
the impression that "an OVN-networked pod accesses two (or more) services
that might have the pod itself as a backend" might be a very rare use case,
if it exists at all.

> If this turns out to be mandatory I guess we might want to also look
> into alternatives like:
> - getting help from the HW to offload matches like ct_tuple()

I believe this is going to happen in the future. HWOL is continuously
enhanced.

> - limiting the impact of "a." only to some load balancers (e.g., would
> it help to use different hairpin lookup tables for such load balancers?)

I am not sure if this would work, and not sure if this is a good approach,
either. In general, I believe it is possible to solve the problem with more
complex pipelines, but we need to keep in mind it is quite easy to
introduce other performance problems (either control plane or data plane) -
many of the changes that led to the current implementation were for performance
optimizations, some for control plane, some for HWOL, and some for reducing
recirculations. I'd avoid complexity unless it is really necessary. Let's
get more input for the problem, and based on that we can decide if we want
to move to a more complex solution.

[2] https://bugzilla.redhat.com/show_bug.cgi?id=1931599
[3] https://bugzilla.redhat.com/show_bug.cgi?id=1903651

Thanks,
Han
Dumitru Ceara July 7, 2022, 11:45 a.m. UTC | #3
On 7/7/22 00:08, Han Zhou wrote:
> On Wed, Jul 6, 2022 at 8:45 AM Dumitru Ceara <dceara@redhat.com> wrote:
>>
>> Hi Han,
>>
>> On 7/6/22 00:41, Han Zhou wrote:
>>> The ls_in_pre_stateful priority 120 flow that saves dst IP and Port to
>>> registers is causing a critical dataplane performance impact to
>>> short-lived connections, because it unwildcards megaflows with exact
>>> match on dst IP and L4 ports. Any new connections with a different
>>> client side L4 port will encounter datapath flow miss and upcall to
>>> ovs-vswitchd.
>>>
>>> These fields (dst IP and port) were saved to registers to solve a
>>> problem of LB hairpin use case when different VIPs are sharing
>>> overlapping backend+port [0]. The change [0] might not have as wide
>>> performance impact as it is now because at that time one of the match
>>> condition "REGBIT_CONNTRACK_NAT == 1" was set only for established and
>>> natted traffic, while now the impact is more obvious because
>>> REGBIT_CONNTRACK_NAT is now set for all IP traffic (if any VIP
>>> configured on the LS) since commit [1], after several other indirectly
>>> related optimizations and refactors.
>>>
>>> Since the changes that introduced the performance problem had their
>>> own values (fixes problems or optimizes performance), so we don't want
>>> to revert any of the changes (and it is also not straightforward to
>>> revert any of them because there have been lots of changes and refactors
>>> on top of them).
>>>
>>> Change [0] itself has added an alternative way to solve the overlapping
>>> backends problem, which utilizes ct fields instead of saving dst IP and
>>> port to registers. This patch forces to that approach and removes the
>>> flows/actions that saves the dst IP and port to avoid the dataplane
>>> performance problem for short-lived connections.
>>>
>>> (With this approach, the ct_state DNAT is not HW offload friendly, so it
>>> may result in those flows not being offloaded, which is supposed to be
>>> solved in a follow-up patch)
>>>
>>> [0] ce0ef8d59850 ("Properly handle hairpin traffic for VIPs with shared
> backends.")
>>> [1] 0038579d1928 ("northd: Optimize ct nat for load balancer traffic.")
>>>
>>> Signed-off-by: Han Zhou <hzhou@ovn.org>
>>> ---
>>
>> I think the main concern I have is that this forces us to choose between:
>> a. non hwol friendly flows (reduced performance)
>> b. less functionality (with the knob in patch 3/3 set to false).
>>
> Thanks Dumitru for the comments! I agree the solution is not ideal, but if
> we look at it from a different angle, even with a), for most pod->service
> traffic the performance is still much better than how it is today (not
> offloaded kernel datapath is still much better than userspace slowpath).
> And *hopefully* b) is ok for most use cases to get HW-offload capability.
> 
>> Change [0] was added to address the case when a service in kubernetes is
>> exposed via two different k8s services objects that share the same
>> endpoints.  That translates in ovn-k8s to two different OVN load
>> balancer VIPs that share the same backends.  For such cases, if the
>> service is being accessed by one of its own backends we need to be able
>> to differentiate based on the VIP address it used to connect to.
>>
>> CC: Tim Rozet, Dan Williams for some more input from the ovn-k8s side on
>> how common it is that an OVN-networked pod accesses two (or more)
>> services that might have the pod itself as a backend.
>>
> 
> Yes, we definitely need input from ovn-k8s side. The information we got so
> far: the change [0] was to fix a bug [2] reported by Tim. However, the bug
> description didn't mention anything about two VIPs sharing the same
> backend. Tim also mentioned in the ovn-k8s meeting last week that the
> original user bug report for [2] was [3], and [3] was in fact a completely
> different problem (although it is related to hairpin, too). So, I am under

I am not completely sure about the link between [3] and [2], maybe Tim
remembers more.

> the impression that "an OVN-networked pod accesses two (or more) services
> that might have the pod itself as a backend" might be a very rare use case,
> if it exists at all.
> 

I went ahead and set the new ovn-allow-vips-share-hairpin-backend knob
to "false" and pushed it to my fork to run the ovn-kubernetes CI.  This
runs a subset of the kubernetes conformance tests (AFAICT) and some
specific e2e ovn-kubernetes tests.

The results are here:

https://github.com/dceara/ovn/runs/7230840427?check_suite_focus=true

Focusing on the conformance failures:

2022-07-07T10:31:24.7228157Z [Fail] [sig-network] Networking Granular Checks: Services [It] should function for endpoint-Service: http 
2022-07-07T10:31:24.7228940Z vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
...
2022-07-07T10:31:24.7240313Z [Fail] [sig-network] Networking Granular Checks: Services [It] should function for multiple endpoint-Services with same selector 
2022-07-07T10:31:24.7240819Z vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
...

Checking how these tests are defined:
https://github.com/kubernetes/kubernetes/blob/2a017f94bcf8d04cbbbbdc6695bcf74273d630ed/test/e2e/network/networking.go#L283
https://github.com/kubernetes/kubernetes/blob/2a017f94bcf8d04cbbbbdc6695bcf74273d630ed/test/e2e/network/networking.go#L236

It seems to me that they're testing explicitly for a "pod that accesses
two services that might have the pod itself as a backend".

So, if I'm not wrong, we'd become non-compliant in this case.
 
>> If this turns out to be mandatory I guess we might want to also look
>> into alternatives like:
>> - getting help from the HW to offload matches like ct_tuple()
> 
> I believe this is going to happen in the future. HWOL is continuously
> enhanced.
> 

That would make things simpler.

>> - limiting the impact of "a." only to some load balancers (e.g., would
>> it help to use different hairpin lookup tables for such load balancers?)
> 
> I am not sure if this would work, and not sure if this is a good approach,
> either. In general, I believe it is possible to solve the problem with more
> complex pipelines, but we need to keep in mind it is quite easy to
> introduce other performance problems (either control plane or data plane) -
> many of the changes lead to the current implementation were for performance
> optimizations, some for control plane, some for HWOL, and some for reducing
> recirculations. I'd avoid complexity unless it is really necessary. Let's
> get more input for the problem, and based on that we can decide if we want
> to move to a more complex solution.
> 

Sure, I agree we need to find the best solution.

In my opinion OVN should be HW-agnostic.  We did try to adjust the way
OVN generates openflows in order to make it more "HWOL-friendly" but
that shouldn't come with the cost of breaking CMS features (if that's
the case here).

> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1931599
> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1903651
> 
> Thanks,
> Han
> 

Thanks,
Dumitru
Dumitru Ceara July 7, 2022, 3:55 p.m. UTC | #4
On 7/7/22 13:45, Dumitru Ceara wrote:
> On 7/7/22 00:08, Han Zhou wrote:
>> On Wed, Jul 6, 2022 at 8:45 AM Dumitru Ceara <dceara@redhat.com> wrote:
>>>
>>> Hi Han,
>>>
>>> On 7/6/22 00:41, Han Zhou wrote:
>>>> The ls_in_pre_stateful priority 120 flow that saves dst IP and Port to
>>>> registers is causing a critical dataplane performance impact to
>>>> short-lived connections, because it unwildcards megaflows with exact
>>>> match on dst IP and L4 ports. Any new connections with a different
>>>> client side L4 port will encounter datapath flow miss and upcall to
>>>> ovs-vswitchd.
>>>>
>>>> These fields (dst IP and port) were saved to registers to solve a
>>>> problem of LB hairpin use case when different VIPs are sharing
>>>> overlapping backend+port [0]. The change [0] might not have as wide
>>>> performance impact as it is now because at that time one of the match
>>>> condition "REGBIT_CONNTRACK_NAT == 1" was set only for established and
>>>> natted traffic, while now the impact is more obvious because
>>>> REGBIT_CONNTRACK_NAT is now set for all IP traffic (if any VIP
>>>> configured on the LS) since commit [1], after several other indirectly
>>>> related optimizations and refactors.
>>>>
>>>> Since the changes that introduced the performance problem had their
>>>> own values (fixes problems or optimizes performance), so we don't want
>>>> to revert any of the changes (and it is also not straightforward to
>>>> revert any of them because there have been lots of changes and refactors
>>>> on top of them).
>>>>
>>>> Change [0] itself has added an alternative way to solve the overlapping
>>>> backends problem, which utilizes ct fields instead of saving dst IP and
>>>> port to registers. This patch forces to that approach and removes the
>>>> flows/actions that saves the dst IP and port to avoid the dataplane
>>>> performance problem for short-lived connections.
>>>>
>>>> (With this approach, the ct_state DNAT is not HW offload friendly, so it
>>>> may result in those flows not being offloaded, which is supposed to be
>>>> solved in a follow-up patch)
>>>>
>>>> [0] ce0ef8d59850 ("Properly handle hairpin traffic for VIPs with shared
>> backends.")
>>>> [1] 0038579d1928 ("northd: Optimize ct nat for load balancer traffic.")
>>>>
>>>> Signed-off-by: Han Zhou <hzhou@ovn.org>
>>>> ---
>>>
>>> I think the main concern I have is that this forces us to choose between:
>>> a. non hwol friendly flows (reduced performance)
>>> b. less functionality (with the knob in patch 3/3 set to false).
>>>
>> Thanks Dumitru for the comments! I agree the solution is not ideal, but if
>> we look at it from a different angle, even with a), for most pod->service
>> traffic the performance is still much better than how it is today (not
>> offloaded kernel datapath is still much better than userspace slowpath).
>> And *hopefully* b) is ok for most use cases to get HW-offload capability.
>>

Just a note on this item.  I'm a bit confused about why all traffic
would be slowpath-ed?  It's just the first packet that goes to vswitchd
as an upcall, right?

Once the megaflow (even if it's more specific than ideal) is installed
all following traffic in that session should be forwarded in fast path
(kernel).

Also, I'm not sure I follow why the same behavior wouldn't happen with
your changes too for pod->service.  The datapath flow includes the
dp_hash() match, and that's likely different for different connections.

Or am I missing something?

>>> Change [0] was added to address the case when a service in kubernetes is
>>> exposed via two different k8s services objects that share the same
>>> endpoints.  That translates in ovn-k8s to two different OVN load
>>> balancer VIPs that share the same backends.  For such cases, if the
>>> service is being accessed by one of its own backends we need to be able
>>> to differentiate based on the VIP address it used to connect to.
>>>
>>> CC: Tim Rozet, Dan Williams for some more input from the ovn-k8s side on
>>> how common it is that an OVN-networked pod accesses two (or more)
>>> services that might have the pod itself as a backend.
>>>
>>
>> Yes, we definitely need input from ovn-k8s side. The information we got so
>> far: the change [0] was to fix a bug [2] reported by Tim. However, the bug
>> description didn't mention anything about two VIPs sharing the same
>> backend. Tim also mentioned in the ovn-k8s meeting last week that the
>> original user bug report for [2] was [3], and [3] was in fact a completely
>> different problem (although it is related to hairpin, too). So, I am under
> 
> I am not completely sure about the link between [3] and [2], maybe Tim
> remembers more.
> 
>> the impression that "an OVN-networked pod accesses two (or more) services
>> that might have the pod itself as a backend" might be a very rare use case,
>> if it exists at all.
>>
> 
> I went ahead and set the new ovn-allow-vips-share-hairpin-backend knob
> to "false" and pushed it to my fork to run the ovn-kubernetes CI.  This
> runs a subset of the kubernetes conformance tests (AFAICT) and some
> specific e2e ovn-kubernetes tests.
> 
> The results are here:
> 
> https://github.com/dceara/ovn/runs/7230840427?check_suite_focus=true
> 
> Focusing on the conformance failures:
> 
> 2022-07-07T10:31:24.7228157Z [Fail] [sig-network] Networking Granular Checks: Services [It] should function for endpoint-Service: http 
> 2022-07-07T10:31:24.7228940Z vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
> ...
> 2022-07-07T10:31:24.7240313Z [Fail] [sig-network] Networking Granular Checks: Services [It] should function for multiple endpoint-Services with same selector 
> 2022-07-07T10:31:24.7240819Z vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
> ...
> 
> Checking how these tests are defined:
> https://github.com/kubernetes/kubernetes/blob/2a017f94bcf8d04cbbbbdc6695bcf74273d630ed/test/e2e/network/networking.go#L283
> https://github.com/kubernetes/kubernetes/blob/2a017f94bcf8d04cbbbbdc6695bcf74273d630ed/test/e2e/network/networking.go#L236
> 
> It seems to me that they're testing explicitly for a  "pod that accesses
> two services that might have the pod itself as a backend".
> 
> So, if I'm not wrong, we'd become non-compliant in this case.
>  
>>> If this turns out to be mandatory I guess we might want to also look
>>> into alternatives like:
>>> - getting help from the HW to offload matches like ct_tuple()
>>
>> I believe this is going to happen in the future. HWOL is continuously
>> enhanced.
>>
> 
> That would make things simpler.
> 
>>> - limiting the impact of "a." only to some load balancers (e.g., would
>>> it help to use different hairpin lookup tables for such load balancers?)
>>
>> I am not sure if this would work, and not sure if this is a good approach,
>> either. In general, I believe it is possible to solve the problem with more
>> complex pipelines, but we need to keep in mind it is quite easy to
>> introduce other performance problems (either control plane or data plane) -
>> many of the changes lead to the current implementation were for performance
>> optimizations, some for control plane, some for HWOL, and some for reducing
>> recirculations. I'd avoid complexity unless it is really necessary. Let's
>> get more input for the problem, and based on that we can decide if we want
>> to move to a more complex solution.
>>
> 
> Sure, I agree we need to find the best solution.
> 
> In my option OVN should be HW-agnostic.  We did try to adjust the way
> OVN generates openflows in order to make it more "HWOL-friendly" but
> that shouldn't come with the cost of breaking CMS features (if that's
> the case here).
> 
>> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1931599
>> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1903651
>>
>> Thanks,
>> Han
>>
> 
> Thanks,
> Dumitru
Han Zhou July 7, 2022, 4:21 p.m. UTC | #5
On Thu, Jul 7, 2022 at 8:55 AM Dumitru Ceara <dceara@redhat.com> wrote:
>
> On 7/7/22 13:45, Dumitru Ceara wrote:
> > On 7/7/22 00:08, Han Zhou wrote:
> >> On Wed, Jul 6, 2022 at 8:45 AM Dumitru Ceara <dceara@redhat.com> wrote:
> >>>
> >>> Hi Han,
> >>>
> >>> On 7/6/22 00:41, Han Zhou wrote:
> >>>> The ls_in_pre_stateful priority 120 flow that saves dst IP and Port
to
> >>>> registers is causing a critical dataplane performance impact to
> >>>> short-lived connections, because it unwildcards megaflows with exact
> >>>> match on dst IP and L4 ports. Any new connections with a different
> >>>> client side L4 port will encounter datapath flow miss and upcall to
> >>>> ovs-vswitchd.
> >>>>
> >>>> These fields (dst IP and port) were saved to registers to solve a
> >>>> problem of LB hairpin use case when different VIPs are sharing
> >>>> overlapping backend+port [0]. The change [0] might not have as wide
> >>>> performance impact as it is now because at that time one of the match
> >>>> condition "REGBIT_CONNTRACK_NAT == 1" was set only for established
and
> >>>> natted traffic, while now the impact is more obvious because
> >>>> REGBIT_CONNTRACK_NAT is now set for all IP traffic (if any VIP
> >>>> configured on the LS) since commit [1], after several other
indirectly
> >>>> related optimizations and refactors.
> >>>>
> >>>> Since the changes that introduced the performance problem had their
> >>>> own values (fixes problems or optimizes performance), so we don't
want
> >>>> to revert any of the changes (and it is also not straightforward to
> >>>> revert any of them because there have been lots of changes and
refactors
> >>>> on top of them).
> >>>>
> >>>> Change [0] itself has added an alternative way to solve the
overlapping
> >>>> backends problem, which utilizes ct fields instead of saving dst IP
and
> >>>> port to registers. This patch forces to that approach and removes the
> >>>> flows/actions that saves the dst IP and port to avoid the dataplane
> >>>> performance problem for short-lived connections.
> >>>>
> >>>> (With this approach, the ct_state DNAT is not HW offload friendly,
so it
> >>>> may result in those flows not being offloaded, which is supposed to
be
> >>>> solved in a follow-up patch)
> >>>>
> >>>> [0] ce0ef8d59850 ("Properly handle hairpin traffic for VIPs with
shared
> >> backends.")
> >>>> [1] 0038579d1928 ("northd: Optimize ct nat for load balancer
traffic.")
> >>>>
> >>>> Signed-off-by: Han Zhou <hzhou@ovn.org>
> >>>> ---
> >>>
> >>> I think the main concern I have is that this forces us to choose
between:
> >>> a. non hwol friendly flows (reduced performance)
> >>> b. less functionality (with the knob in patch 3/3 set to false).
> >>>
> >> Thanks Dumitru for the comments! I agree the solution is not ideal,
but if
> >> we look at it from a different angle, even with a), for most
pod->service
> >> traffic the performance is still much better than how it is today (not
> >> offloaded kernel datapath is still much better than userspace
slowpath).
> >> And *hopefully* b) is ok for most use cases to get HW-offload
capability.
> >>
>
> Just a note on this item.  I'm a bit confused about why all traffic
> would be slowpath-ed?  It's just the first packet that goes to vswitchd
> as an upcall, right?
>

It is about all traffic for *short-lived* connections: any client ->
service traffic with the pattern:
1. TCP connection setup
2. Send an API request, receive the response
3. Close the TCP connection
It can be tested with netperf TCP_CRR. The client-side TCP port is
different every time, and since the server -> client DP flow includes the
client TCP port, each such transaction hits at least one DP flow miss that
goes to userspace. The application latency would be very high. In
addition, it causes the OVS handler CPU usage to spike, which further
impacts the dataplane performance of the system.
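
To illustrate the effect (a toy model of a megaflow cache, not OVS code;
the field names and connection counts are made up for the example): with an
exact match on the client-side L4 port, every new connection is a cache
miss and an upcall, while leaving the client port wildcarded lets a single
megaflow cover all of them.

```python
# Toy megaflow cache: a megaflow matches exactly on a subset of fields
# (the "mask"); a packet hits if its masked fields equal a stored key.
# Illustrative only -- this is not how OVS is implemented.

def masked(pkt, mask):
    return tuple(pkt[k] for k in mask)

class MegaflowCache:
    def __init__(self, mask):
        self.mask = mask       # fields the megaflow matches exactly on
        self.flows = set()
        self.upcalls = 0       # misses that would go to ovs-vswitchd

    def lookup(self, pkt):
        key = masked(pkt, self.mask)
        if key not in self.flows:
            self.upcalls += 1  # slow path: install a new megaflow
            self.flows.add(key)

# 1000 short-lived connections, each from a different client port.
pkts = [{"dst_ip": "88.88.88.88", "dst_port": 8080, "src_port": 30000 + i}
        for i in range(1000)]

# Exact match on src_port (the problematic unwildcarding):
# one upcall per connection.
exact = MegaflowCache(mask=("dst_ip", "dst_port", "src_port"))
for p in pkts:
    exact.lookup(p)

# Client port left wildcarded: one megaflow covers all connections.
wild = MegaflowCache(mask=("dst_ip", "dst_port"))
for p in pkts:
    wild.lookup(p)

print(exact.upcalls)  # 1000
print(wild.upcalls)   # 1
```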

> Once the megaflow (even if it's more specific than ideal) is installed
> all following traffic in that session should be forwarded in fast path
> (kernel).
>
> Also, I'm not sure I follow why the same behavior wouldn't happen with
> your changes too for pod->service.  The datapath flow includes the
> dp_hash() match, and that's likely different for different connections.
>

With the change it is not going to happen, because the match is on the
server-side port only.
As for dp_hash(), from what I remember, there are at most as many
megaflows as the number of buckets (masked hash values).
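The dp_hash() point can be illustrated with a similar sketch (not OVS internals; power-of-two bucket count is an assumption): because the datapath matches on the masked hash value, the number of distinct megaflow keys is bounded by the bucket count no matter how many connections there are.

```python
# Sketch of why dp_hash() bounds megaflow proliferation: the datapath
# matches on the *masked* hash, so distinct megaflow keys are capped at
# the bucket count, however many connections hash differently.
import random

def masked_hash_keys(num_connections, num_buckets):
    mask = num_buckets - 1            # assume power-of-two buckets
    keys = set()
    for _ in range(num_connections):
        h = random.getrandbits(32)    # stand-in for the flow hash
        keys.add(h & mask)            # datapath matches the masked value
    return len(keys)

# At most num_buckets distinct keys, even for many connections.
print(masked_hash_keys(100000, 16) <= 16)  # -> True
```

So unlike an exact match on the client port, the dp_hash() match converges to a small, fixed set of megaflows.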

> Or am I missing something?
>
> >>> Change [0] was added to address the case when a service in kubernetes
is
> >>> exposed via two different k8s services objects that share the same
> >>> endpoints.  That translates in ovn-k8s to two different OVN load
> >>> balancer VIPs that share the same backends.  For such cases, if the
> >>> service is being accessed by one of its own backends we need to be
able
> >>> to differentiate based on the VIP address it used to connect to.
> >>>
> >>> CC: Tim Rozet, Dan Williams for some more input from the ovn-k8s side
on
> >>> how common it is that an OVN-networked pod accesses two (or more)
> >>> services that might have the pod itself as a backend.
> >>>
> >>
> >> Yes, we definitely need input from ovn-k8s side. The information we
got so
> >> far: the change [0] was to fix a bug [2] reported by Tim. However, the
bug
> >> description didn't mention anything about two VIPs sharing the same
> >> backend. Tim also mentioned in the ovn-k8s meeting last week that the
> >> original user bug report for [2] was [3], and [3] was in fact a
completely
> >> different problem (although it is related to hairpin, too). So, I am
under
> >
> > I am not completely sure about the link between [3] and [2], maybe Tim
> > remembers more.
> >
> >> the impression that "an OVN-networked pod accesses two (or more)
services
> >> that might have the pod itself as a backend" might be a very rare use
case,
> >> if it exists at all.
> >>
> >
> > I went ahead and set the new ovn-allow-vips-share-hairpin-backend knob
> > to "false" and pushed it to my fork to run the ovn-kubernetes CI.  This
> > runs a subset of the kubernetes conformance tests (AFAICT) and some
> > specific e2e ovn-kubernetes tests.
> >
> > The results are here:
> >
> > https://github.com/dceara/ovn/runs/7230840427?check_suite_focus=true
> >
> > Focusing on the conformance failures:
> >
> > 2022-07-07T10:31:24.7228157Z [Fail] [sig-network] Networking Granular Checks: Services [It] should function for endpoint-Service: http
> > 2022-07-07T10:31:24.7228940Z vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
> > ...
> > 2022-07-07T10:31:24.7240313Z [Fail] [sig-network] Networking Granular Checks: Services [It] should function for multiple endpoint-Services with same selector
> > 2022-07-07T10:31:24.7240819Z vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
> > ...
> >
> > Checking how these tests are defined:
> >
https://github.com/kubernetes/kubernetes/blob/2a017f94bcf8d04cbbbbdc6695bcf74273d630ed/test/e2e/network/networking.go#L283
> >
https://github.com/kubernetes/kubernetes/blob/2a017f94bcf8d04cbbbbdc6695bcf74273d630ed/test/e2e/network/networking.go#L236
> >
Thanks for the test and information! Really need input from k8s folks to
understand more.

Thanks,
Han

> > It seems to me that they're testing explicitly for a  "pod that accesses
> > two services that might have the pod itself as a backend".
> >
> > So, if I'm not wrong, we'd become non-compliant in this case.
> >
> >>> If this turns out to be mandatory I guess we might want to also look
> >>> into alternatives like:
> >>> - getting help from the HW to offload matches like ct_tuple()
> >>
> >> I believe this is going to happen in the future. HWOL is continuously
> >> enhanced.
> >>
> >
> > That would make things simpler.
> >
> >>> - limiting the impact of "a." only to some load balancers (e.g., would
> >>> it help to use different hairpin lookup tables for such load
balancers?)
> >>
> >> I am not sure if this would work, and not sure if this is a good
approach,
> >> either. In general, I believe it is possible to solve the problem with
more
> >> complex pipelines, but we need to keep in mind it is quite easy to
> >> introduce other performance problems (either control plane or data
plane) -
> >> many of the changes that led to the current implementation were for
performance
> >> optimizations, some for control plane, some for HWOL, and some for
reducing
> >> recirculations. I'd avoid complexity unless it is really necessary.
Let's
> >> get more input for the problem, and based on that we can decide if we
want
> >> to move to a more complex solution.
> >>
> >
> > Sure, I agree we need to find the best solution.
> >
> > In my opinion OVN should be HW-agnostic.  We did try to adjust the way
> > OVN generates openflows in order to make it more "HWOL-friendly" but
> > that shouldn't come with the cost of breaking CMS features (if that's
> > the case here).
> >
> >> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1931599
> >> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1903651
> >>
> >> Thanks,
> >> Han
> >>
> >
> > Thanks,
> > Dumitru
>
Dumitru Ceara July 7, 2022, 5:02 p.m. UTC | #6
On 7/7/22 18:21, Han Zhou wrote:
> On Thu, Jul 7, 2022 at 8:55 AM Dumitru Ceara <dceara@redhat.com> wrote:
>>
>> On 7/7/22 13:45, Dumitru Ceara wrote:
>>> On 7/7/22 00:08, Han Zhou wrote:
>>>> On Wed, Jul 6, 2022 at 8:45 AM Dumitru Ceara <dceara@redhat.com> wrote:
>>>>>
>>>>> Hi Han,
>>>>>
>>>>> On 7/6/22 00:41, Han Zhou wrote:
>>>>>> The ls_in_pre_stateful priority 120 flow that saves dst IP and Port
> to
>>>>>> registers is causing a critical dataplane performance impact to
>>>>>> short-lived connections, because it unwildcards megaflows with exact
>>>>>> match on dst IP and L4 ports. Any new connections with a different
>>>>>> client side L4 port will encounter datapath flow miss and upcall to
>>>>>> ovs-vswitchd.
>>>>>>
>>>>>> These fields (dst IP and port) were saved to registers to solve a
>>>>>> problem of LB hairpin use case when different VIPs are sharing
>>>>>> overlapping backend+port [0]. The change [0] might not have as wide
>>>>>> performance impact as it is now because at that time one of the match
>>>>>> condition "REGBIT_CONNTRACK_NAT == 1" was set only for established
> and
>>>>>> natted traffic, while now the impact is more obvious because
>>>>>> REGBIT_CONNTRACK_NAT is now set for all IP traffic (if any VIP
>>>>>> configured on the LS) since commit [1], after several other
> indirectly
>>>>>> related optimizations and refactors.
>>>>>>
>>>>>> Since the changes that introduced the performance problem had their
>>>>>> own values (fixes problems or optimizes performance), so we don't
> want
>>>>>> to revert any of the changes (and it is also not straightforward to
>>>>>> revert any of them because there have been lots of changes and
> refactors
>>>>>> on top of them).
>>>>>>
>>>>>> Change [0] itself has added an alternative way to solve the
> overlapping
>>>>>> backends problem, which utilizes ct fields instead of saving dst IP
> and
>>>>>> port to registers. This patch forces to that approach and removes the
>>>>>> flows/actions that saves the dst IP and port to avoid the dataplane
>>>>>> performance problem for short-lived connections.
>>>>>>
>>>>>> (With this approach, the ct_state DNAT is not HW offload friendly,
> so it
>>>>>> may result in those flows not being offloaded, which is supposed to
> be
>>>>>> solved in a follow-up patch)
>>>>>>
>>>>>> [0] ce0ef8d59850 ("Properly handle hairpin traffic for VIPs with
> shared
>>>> backends.")
>>>>>> [1] 0038579d1928 ("northd: Optimize ct nat for load balancer
> traffic.")
>>>>>>
>>>>>> Signed-off-by: Han Zhou <hzhou@ovn.org>
>>>>>> ---
>>>>>
>>>>> I think the main concern I have is that this forces us to choose
> between:
>>>>> a. non hwol friendly flows (reduced performance)
>>>>> b. less functionality (with the knob in patch 3/3 set to false).
>>>>>
>>>> Thanks Dumitru for the comments! I agree the solution is not ideal,
> but if
>>>> we look at it from a different angle, even with a), for most
> pod->service
>>>> traffic the performance is still much better than how it is today (not
>>>> offloaded kernel datapath is still much better than userspace
> slowpath).
>>>> And *hopefully* b) is ok for most use cases to get HW-offload
> capability.
>>>>
>>
>> Just a note on this item.  I'm a bit confused about why all traffic
>> would be slowpath-ed?  It's just the first packet that goes to vswitchd
>> as an upcall, right?
>>
> 
> It is about all traffic for *short-lived* connections: any client ->
> service traffic with the pattern:
> 1. TCP connection setup
> 2. Send an API request, receive the response
> 3. Close the TCP connection
> This can be tested with netperf TCP_CRR. Every time the client-side TCP
> port is different, but since the server -> client DP flow includes the
> client TCP port, each such transaction triggers at least one DP flow
> miss and an upcall to userspace. The application latency would be very
> high. In addition, it makes OVS handler CPU usage spike, which further
> impacts the dataplane performance of the system.
> 
>> Once the megaflow (even if it's more specific than ideal) is installed
>> all following traffic in that session should be forwarded in fast path
>> (kernel).
>>
>> Also, I'm not sure I follow why the same behavior wouldn't happen with
>> your changes too for pod->service.  The datapath flow includes the
>> dp_hash() match, and that's likely different for different connections.
>>
> 
> With the change it is not going to happen, because the match is on the
> server-side port only.
> As for dp_hash(), from what I remember, there are at most as many
> megaflows as the number of buckets (masked hash values).
> 

OK, that explains it, thanks.

>> Or am I missing something?
>>
>>>>> Change [0] was added to address the case when a service in kubernetes
> is
>>>>> exposed via two different k8s services objects that share the same
>>>>> endpoints.  That translates in ovn-k8s to two different OVN load
>>>>> balancer VIPs that share the same backends.  For such cases, if the
>>>>> service is being accessed by one of its own backends we need to be
> able
>>>>> to differentiate based on the VIP address it used to connect to.
>>>>>
>>>>> CC: Tim Rozet, Dan Williams for some more input from the ovn-k8s side
> on
>>>>> how common it is that an OVN-networked pod accesses two (or more)
>>>>> services that might have the pod itself as a backend.
>>>>>
>>>>
>>>> Yes, we definitely need input from ovn-k8s side. The information we
> got so
>>>> far: the change [0] was to fix a bug [2] reported by Tim. However, the
> bug
>>>> description didn't mention anything about two VIPs sharing the same
>>>> backend. Tim also mentioned in the ovn-k8s meeting last week that the
>>>> original user bug report for [2] was [3], and [3] was in fact a
> completely
>>>> different problem (although it is related to hairpin, too). So, I am
> under
>>>
>>> I am not completely sure about the link between [3] and [2], maybe Tim
>>> remembers more.
>>>
>>>> the impression that "an OVN-networked pod accesses two (or more)
> services
>>>> that might have the pod itself as a backend" might be a very rare use
> case,
>>>> if it exists at all.
>>>>
>>>
>>> I went ahead and set the new ovn-allow-vips-share-hairpin-backend knob
>>> to "false" and pushed it to my fork to run the ovn-kubernetes CI.  This
>>> runs a subset of the kubernetes conformance tests (AFAICT) and some
>>> specific e2e ovn-kubernetes tests.
>>>
>>> The results are here:
>>>
>>> https://github.com/dceara/ovn/runs/7230840427?check_suite_focus=true
>>>
>>> Focusing on the conformance failures:
>>>
>>> 2022-07-07T10:31:24.7228157Z [Fail] [sig-network] Networking Granular Checks: Services [It] should function for endpoint-Service: http
>>> 2022-07-07T10:31:24.7228940Z vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
>>> ...
>>> 2022-07-07T10:31:24.7240313Z [Fail] [sig-network] Networking Granular Checks: Services [It] should function for multiple endpoint-Services with same selector
>>> 2022-07-07T10:31:24.7240819Z vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
>>> ...
>>>
>>> Checking how these tests are defined:
>>>
> https://github.com/kubernetes/kubernetes/blob/2a017f94bcf8d04cbbbbdc6695bcf74273d630ed/test/e2e/network/networking.go#L283
>>>
> https://github.com/kubernetes/kubernetes/blob/2a017f94bcf8d04cbbbbdc6695bcf74273d630ed/test/e2e/network/networking.go#L236
>>>
> Thanks for the test and information! Really need input from k8s folks to
> understand more.
> 
> Thanks,
> Han
>

Like I said below, these are kubernetes conformance tests so I'll let
k8s folks confirm if such failures can be ignored/worked around.

>>> It seems to me that they're testing explicitly for a  "pod that accesses
>>> two services that might have the pod itself as a backend".
>>>
>>> So, if I'm not wrong, we'd become non-compliant in this case.
>>>
>>>>> If this turns out to be mandatory I guess we might want to also look
>>>>> into alternatives like:
>>>>> - getting help from the HW to offload matches like ct_tuple()
>>>>
>>>> I believe this is going to happen in the future. HWOL is continuously
>>>> enhanced.
>>>>
>>>
>>> That would make things simpler.
>>>
>>>>> - limiting the impact of "a." only to some load balancers (e.g., would
>>>>> it help to use different hairpin lookup tables for such load
> balancers?)
>>>>
>>>> I am not sure if this would work, and not sure if this is a good
> approach,
>>>> either. In general, I believe it is possible to solve the problem with
> more
>>>> complex pipelines, but we need to keep in mind it is quite easy to
>>>> introduce other performance problems (either control plane or data
> plane) -
>>>>> many of the changes that led to the current implementation were for
> performance
>>>> optimizations, some for control plane, some for HWOL, and some for
> reducing
>>>> recirculations. I'd avoid complexity unless it is really necessary.
> Let's
>>>> get more input for the problem, and based on that we can decide if we
> want
>>>> to move to a more complex solution.
>>>>
>>>
>>> Sure, I agree we need to find the best solution.
>>>
>>>> In my opinion OVN should be HW-agnostic.  We did try to adjust the way
>>> OVN generates openflows in order to make it more "HWOL-friendly" but
>>> that shouldn't come with the cost of breaking CMS features (if that's
>>> the case here).
>>>
>>>> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1931599
>>>> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1903651
>>>>
>>>> Thanks,
>>>> Han
>>>>
>>>
>>> Thanks,
>>> Dumitru
>>
>
Dumitru Ceara July 20, 2022, 11:16 a.m. UTC | #7
Hi Han,

On 7/7/22 19:02, Dumitru Ceara wrote:
> On 7/7/22 18:21, Han Zhou wrote:
>> On Thu, Jul 7, 2022 at 8:55 AM Dumitru Ceara <dceara@redhat.com> wrote:
>>>
>>> On 7/7/22 13:45, Dumitru Ceara wrote:
>>>> On 7/7/22 00:08, Han Zhou wrote:
>>>>> On Wed, Jul 6, 2022 at 8:45 AM Dumitru Ceara <dceara@redhat.com> wrote:
>>>>>>
>>>>>> Hi Han,
>>>>>>
>>>>>> On 7/6/22 00:41, Han Zhou wrote:
>>>>>>> The ls_in_pre_stateful priority 120 flow that saves dst IP and Port
>> to
>>>>>>> registers is causing a critical dataplane performance impact to
>>>>>>> short-lived connections, because it unwildcards megaflows with exact
>>>>>>> match on dst IP and L4 ports. Any new connections with a different
>>>>>>> client side L4 port will encounter datapath flow miss and upcall to
>>>>>>> ovs-vswitchd.
>>>>>>>
>>>>>>> These fields (dst IP and port) were saved to registers to solve a
>>>>>>> problem of LB hairpin use case when different VIPs are sharing
>>>>>>> overlapping backend+port [0]. The change [0] might not have as wide
>>>>>>> performance impact as it is now because at that time one of the match
>>>>>>> condition "REGBIT_CONNTRACK_NAT == 1" was set only for established
>> and
>>>>>>> natted traffic, while now the impact is more obvious because
>>>>>>> REGBIT_CONNTRACK_NAT is now set for all IP traffic (if any VIP
>>>>>>> configured on the LS) since commit [1], after several other
>> indirectly
>>>>>>> related optimizations and refactors.
>>>>>>>
>>>>>>> Since the changes that introduced the performance problem had their
>>>>>>> own values (fixes problems or optimizes performance), so we don't
>> want
>>>>>>> to revert any of the changes (and it is also not straightforward to
>>>>>>> revert any of them because there have been lots of changes and
>> refactors
>>>>>>> on top of them).
>>>>>>>
>>>>>>> Change [0] itself has added an alternative way to solve the
>> overlapping
>>>>>>> backends problem, which utilizes ct fields instead of saving dst IP
>> and
>>>>>>> port to registers. This patch forces to that approach and removes the
>>>>>>> flows/actions that saves the dst IP and port to avoid the dataplane
>>>>>>> performance problem for short-lived connections.
>>>>>>>
>>>>>>> (With this approach, the ct_state DNAT is not HW offload friendly,
>> so it
>>>>>>> may result in those flows not being offloaded, which is supposed to
>> be
>>>>>>> solved in a follow-up patch)

While we're waiting for more input from ovn-k8s on this, I have a
slightly different question.

Aren't we hitting a similar problem in the router pipeline, due to
REG_ORIG_TP_DPORT_ROUTER?

Thanks,
Dumitru

>>>>>>>
>>>>>>> [0] ce0ef8d59850 ("Properly handle hairpin traffic for VIPs with
>> shared
>>>>> backends.")
>>>>>>> [1] 0038579d1928 ("northd: Optimize ct nat for load balancer
>> traffic.")
>>>>>>>
>>>>>>> Signed-off-by: Han Zhou <hzhou@ovn.org>
>>>>>>> ---
>>>>>>
>>>>>> I think the main concern I have is that this forces us to choose
>> between:
>>>>>> a. non hwol friendly flows (reduced performance)
>>>>>> b. less functionality (with the knob in patch 3/3 set to false).
>>>>>>
>>>>> Thanks Dumitru for the comments! I agree the solution is not ideal,
>> but if
>>>>> we look at it from a different angle, even with a), for most
>> pod->service
>>>>> traffic the performance is still much better than how it is today (not
>>>>> offloaded kernel datapath is still much better than userspace
>> slowpath).
>>>>> And *hopefully* b) is ok for most use cases to get HW-offload
>> capability.
>>>>>
>>>
>>> Just a note on this item.  I'm a bit confused about why all traffic
>>> would be slowpath-ed?  It's just the first packet that goes to vswitchd
>>> as an upcall, right?
>>>
>>
>> It is about all traffic for *short-lived* connections: any client ->
>> service traffic with the pattern:
>> 1. TCP connection setup
>> 2. Send an API request, receive the response
>> 3. Close the TCP connection
>> This can be tested with netperf TCP_CRR. Every time the client-side TCP
>> port is different, but since the server -> client DP flow includes the
>> client TCP port, each such transaction triggers at least one DP flow
>> miss and an upcall to userspace. The application latency would be very
>> high. In addition, it makes OVS handler CPU usage spike, which further
>> impacts the dataplane performance of the system.
>>
>>> Once the megaflow (even if it's more specific than ideal) is installed
>>> all following traffic in that session should be forwarded in fast path
>>> (kernel).
>>>
>>> Also, I'm not sure I follow why the same behavior wouldn't happen with
>>> your changes too for pod->service.  The datapath flow includes the
>>> dp_hash() match, and that's likely different for different connections.
>>>
>>
>> With the change it is not going to happen, because the match is on the
>> server-side port only.
>> As for dp_hash(), from what I remember, there are at most as many
>> megaflows as the number of buckets (masked hash values).
>>
> 
> OK, that explains it, thanks.
> 
>>> Or am I missing something?
>>>
>>>>>> Change [0] was added to address the case when a service in kubernetes
>> is
>>>>>> exposed via two different k8s services objects that share the same
>>>>>> endpoints.  That translates in ovn-k8s to two different OVN load
>>>>>> balancer VIPs that share the same backends.  For such cases, if the
>>>>>> service is being accessed by one of its own backends we need to be
>> able
>>>>>> to differentiate based on the VIP address it used to connect to.
>>>>>>
>>>>>> CC: Tim Rozet, Dan Williams for some more input from the ovn-k8s side
>> on
>>>>>> how common it is that an OVN-networked pod accesses two (or more)
>>>>>> services that might have the pod itself as a backend.
>>>>>>
>>>>>
>>>>> Yes, we definitely need input from ovn-k8s side. The information we
>> got so
>>>>> far: the change [0] was to fix a bug [2] reported by Tim. However, the
>> bug
>>>>> description didn't mention anything about two VIPs sharing the same
>>>>> backend. Tim also mentioned in the ovn-k8s meeting last week that the
>>>>> original user bug report for [2] was [3], and [3] was in fact a
>> completely
>>>>> different problem (although it is related to hairpin, too). So, I am
>> under
>>>>
>>>> I am not completely sure about the link between [3] and [2], maybe Tim
>>>> remembers more.
>>>>
>>>>> the impression that "an OVN-networked pod accesses two (or more)
>> services
>>>>> that might have the pod itself as a backend" might be a very rare use
>> case,
>>>>> if it exists at all.
>>>>>
>>>>
>>>> I went ahead and set the new ovn-allow-vips-share-hairpin-backend knob
>>>> to "false" and pushed it to my fork to run the ovn-kubernetes CI.  This
>>>> runs a subset of the kubernetes conformance tests (AFAICT) and some
>>>> specific e2e ovn-kubernetes tests.
>>>>
>>>> The results are here:
>>>>
>>>> https://github.com/dceara/ovn/runs/7230840427?check_suite_focus=true
>>>>
>>>> Focusing on the conformance failures:
>>>>
>>>> 2022-07-07T10:31:24.7228157Z [Fail] [sig-network] Networking Granular Checks: Services [It] should function for endpoint-Service: http
>>>> 2022-07-07T10:31:24.7228940Z vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
>>>> ...
>>>> 2022-07-07T10:31:24.7240313Z [Fail] [sig-network] Networking Granular Checks: Services [It] should function for multiple endpoint-Services with same selector
>>>> 2022-07-07T10:31:24.7240819Z vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
>>>> ...
>>>>
>>>> Checking how these tests are defined:
>>>>
>> https://github.com/kubernetes/kubernetes/blob/2a017f94bcf8d04cbbbbdc6695bcf74273d630ed/test/e2e/network/networking.go#L283
>>>>
>> https://github.com/kubernetes/kubernetes/blob/2a017f94bcf8d04cbbbbdc6695bcf74273d630ed/test/e2e/network/networking.go#L236
>>>>
>> Thanks for the test and information! Really need input from k8s folks to
>> understand more.
>>
>> Thanks,
>> Han
>>
> 
> Like I said below, these are kubernetes conformance tests so I'll let
> k8s folks confirm if such failures can be ignored/worked around.
> 
>>>> It seems to me that they're testing explicitly for a  "pod that accesses
>>>> two services that might have the pod itself as a backend".
>>>>
>>>> So, if I'm not wrong, we'd become non-compliant in this case.
>>>>
>>>>>> If this turns out to be mandatory I guess we might want to also look
>>>>>> into alternatives like:
>>>>>> - getting help from the HW to offload matches like ct_tuple()
>>>>>
>>>>> I believe this is going to happen in the future. HWOL is continuously
>>>>> enhanced.
>>>>>
>>>>
>>>> That would make things simpler.
>>>>
>>>>>> - limiting the impact of "a." only to some load balancers (e.g., would
>>>>>> it help to use different hairpin lookup tables for such load
>> balancers?)
>>>>>
>>>>> I am not sure if this would work, and not sure if this is a good
>> approach,
>>>>> either. In general, I believe it is possible to solve the problem with
>> more
>>>>> complex pipelines, but we need to keep in mind it is quite easy to
>>>>> introduce other performance problems (either control plane or data
>> plane) -
> >>>>> many of the changes that led to the current implementation were for
>> performance
>>>>> optimizations, some for control plane, some for HWOL, and some for
>> reducing
>>>>> recirculations. I'd avoid complexity unless it is really necessary.
>> Let's
>>>>> get more input for the problem, and based on that we can decide if we
>> want
>>>>> to move to a more complex solution.
>>>>>
>>>>
>>>> Sure, I agree we need to find the best solution.
>>>>
> >>>> In my opinion OVN should be HW-agnostic.  We did try to adjust the way
>>>> OVN generates openflows in order to make it more "HWOL-friendly" but
>>>> that shouldn't come with the cost of breaking CMS features (if that's
>>>> the case here).
>>>>
>>>>> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1931599
>>>>> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1903651
>>>>>
>>>>> Thanks,
>>>>> Han
>>>>>
>>>>
>>>> Thanks,
>>>> Dumitru
>>>
>>
Numan Siddique July 20, 2022, 7:13 p.m. UTC | #8
On Wed, Jul 20, 2022 at 6:18 AM Dumitru Ceara <dceara@redhat.com> wrote:
>
>
> Hi Han,
>
> On 7/7/22 19:02, Dumitru Ceara wrote:
> > On 7/7/22 18:21, Han Zhou wrote:
> >> On Thu, Jul 7, 2022 at 8:55 AM Dumitru Ceara <dceara@redhat.com> wrote:
> >>>
> >>> On 7/7/22 13:45, Dumitru Ceara wrote:
> >>>> On 7/7/22 00:08, Han Zhou wrote:
> >>>>> On Wed, Jul 6, 2022 at 8:45 AM Dumitru Ceara <dceara@redhat.com> wrote:
> >>>>>>
> >>>>>> Hi Han,
> >>>>>>
> >>>>>> On 7/6/22 00:41, Han Zhou wrote:
> >>>>>>> The ls_in_pre_stateful priority 120 flow that saves dst IP and Port
> >> to
> >>>>>>> registers is causing a critical dataplane performance impact to
> >>>>>>> short-lived connections, because it unwildcards megaflows with exact
> >>>>>>> match on dst IP and L4 ports. Any new connections with a different
> >>>>>>> client side L4 port will encounter datapath flow miss and upcall to
> >>>>>>> ovs-vswitchd.
> >>>>>>>
> >>>>>>> These fields (dst IP and port) were saved to registers to solve a
> >>>>>>> problem of LB hairpin use case when different VIPs are sharing
> >>>>>>> overlapping backend+port [0]. The change [0] might not have as wide
> >>>>>>> performance impact as it is now because at that time one of the match
> >>>>>>> condition "REGBIT_CONNTRACK_NAT == 1" was set only for established
> >> and
> >>>>>>> natted traffic, while now the impact is more obvious because
> >>>>>>> REGBIT_CONNTRACK_NAT is now set for all IP traffic (if any VIP
> >>>>>>> configured on the LS) since commit [1], after several other
> >> indirectly
> >>>>>>> related optimizations and refactors.
> >>>>>>>
> >>>>>>> Since the changes that introduced the performance problem had their
> >>>>>>> own values (fixes problems or optimizes performance), so we don't
> >> want
> >>>>>>> to revert any of the changes (and it is also not straightforward to
> >>>>>>> revert any of them because there have been lots of changes and
> >> refactors
> >>>>>>> on top of them).
> >>>>>>>
> >>>>>>> Change [0] itself has added an alternative way to solve the
> >> overlapping
> >>>>>>> backends problem, which utilizes ct fields instead of saving dst IP
> >> and
> >>>>>>> port to registers. This patch forces to that approach and removes the
> >>>>>>> flows/actions that saves the dst IP and port to avoid the dataplane
> >>>>>>> performance problem for short-lived connections.
> >>>>>>>
> >>>>>>> (With this approach, the ct_state DNAT is not HW offload friendly,
> >> so it
> >>>>>>> may result in those flows not being offloaded, which is supposed to
> >> be
> >>>>>>> solved in a follow-up patch)
>
> While we're waiting for more input from ovn-k8s on this, I have a
> slightly different question.
>
> Aren't we hitting a similar problem in the router pipeline, due to
> REG_ORIG_TP_DPORT_ROUTER?
>
> Thanks,
> Dumitru
>
> >>>>>>>
> >>>>>>> [0] ce0ef8d59850 ("Properly handle hairpin traffic for VIPs with
> >> shared
> >>>>> backends.")
> >>>>>>> [1] 0038579d1928 ("northd: Optimize ct nat for load balancer
> >> traffic.")
> >>>>>>>
> >>>>>>> Signed-off-by: Han Zhou <hzhou@ovn.org>
> >>>>>>> ---
> >>>>>>
> >>>>>> I think the main concern I have is that this forces us to choose
> >> between:
> >>>>>> a. non hwol friendly flows (reduced performance)
> >>>>>> b. less functionality (with the knob in patch 3/3 set to false).
> >>>>>>
> >>>>> Thanks Dumitru for the comments! I agree the solution is not ideal,
> >> but if
> >>>>> we look at it from a different angle, even with a), for most
> >> pod->service
> >>>>> traffic the performance is still much better than how it is today (not
> >>>>> offloaded kernel datapath is still much better than userspace
> >> slowpath).
> >>>>> And *hopefully* b) is ok for most use cases to get HW-offload
> >> capability.
> >>>>>
> >>>
> >>> Just a note on this item.  I'm a bit confused about why all traffic
> >>> would be slowpath-ed?  It's just the first packet that goes to vswitchd
> >>> as an upcall, right?
> >>>
> >>
> >> It is about all traffic for *short-lived* connections: any client ->
> >> service traffic with the pattern:
> >> 1. TCP connection setup
> >> 2. Send an API request, receive the response
> >> 3. Close the TCP connection
> >> This can be tested with netperf TCP_CRR. Every time the client-side TCP
> >> port is different, but since the server -> client DP flow includes the
> >> client TCP port, each such transaction triggers at least one DP flow
> >> miss and an upcall to userspace. The application latency would be very
> >> high. In addition, it makes OVS handler CPU usage spike, which further
> >> impacts the dataplane performance of the system.
> >>
> >>> Once the megaflow (even if it's more specific than ideal) is installed
> >>> all following traffic in that session should be forwarded in fast path
> >>> (kernel).
> >>>
> >>> Also, I'm not sure I follow why the same behavior wouldn't happen with
> >>> your changes too for pod->service.  The datapath flow includes the
> >>> dp_hash() match, and that's likely different for different connections.
> >>>
> >>
> >> With the change it is not going to happen, because the match is on the
> >> server-side port only.
> >> As for dp_hash(), from what I remember, there are at most as many
> >> megaflows as the number of buckets (masked hash values).
> >>
> >
> > OK, that explains it, thanks.
> >
> >>> Or am I missing something?
> >>>
> >>>>>> Change [0] was added to address the case when a service in kubernetes
> >> is
> >>>>>> exposed via two different k8s services objects that share the same
> >>>>>> endpoints.  That translates in ovn-k8s to two different OVN load
> >>>>>> balancer VIPs that share the same backends.  For such cases, if the
> >>>>>> service is being accessed by one of its own backends we need to be
> >> able
> >>>>>> to differentiate based on the VIP address it used to connect to.
> >>>>>>
> >>>>>> CC: Tim Rozet, Dan Williams for some more input from the ovn-k8s side
> >> on
> >>>>>> how common it is that an OVN-networked pod accesses two (or more)
> >>>>>> services that might have the pod itself as a backend.
> >>>>>>
> >>>>>
> >>>>> Yes, we definitely need input from ovn-k8s side. The information we
> >> got so
> >>>>> far: the change [0] was to fix a bug [2] reported by Tim. However, the
> >> bug
> >>>>> description didn't mention anything about two VIPs sharing the same
> >>>>> backend. Tim also mentioned in the ovn-k8s meeting last week that the
> >>>>> original user bug report for [2] was [3], and [3] was in fact a
> >> completely
> >>>>> different problem (although it is related to hairpin, too). So, I am
> >> under
> >>>>
> >>>> I am not completely sure about the link between [3] and [2], maybe Tim
> >>>> remembers more.
> >>>>
> >>>>> the impression that "an OVN-networked pod accesses two (or more)
> >> services
> >>>>> that might have the pod itself as a backend" might be a very rare use
> >> case,
> >>>>> if it exists at all.
> >>>>>
> >>>>
> >>>> I went ahead and set the new ovn-allow-vips-share-hairpin-backend knob
> >>>> to "false" and pushed it to my fork to run the ovn-kubernetes CI.  This
> >>>> runs a subset of the kubernetes conformance tests (AFAICT) and some
> >>>> specific e2e ovn-kubernetes tests.
> >>>>
> >>>> The results are here:
> >>>>
> >>>> https://github.com/dceara/ovn/runs/7230840427?check_suite_focus=true
> >>>>
> >>>> Focusing on the conformance failures:
> >>>>
> >>>> 2022-07-07T10:31:24.7228157Z  [Fail] [sig-network] Networking
> >>>> Granular Checks: Services [It] should function for endpoint-Service: http
> >>>> 2022-07-07T10:31:24.7228940Z  vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
> >>>> ...
> >>>> 2022-07-07T10:31:24.7240313Z  [Fail] [sig-network] Networking
> >>>> Granular Checks: Services [It] should function for multiple endpoint-Services with same selector
> >>>> 2022-07-07T10:31:24.7240819Z  vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
> >>>> ...
> >>>>
> >>>> Checking how these tests are defined:
> >>>>
> >> https://github.com/kubernetes/kubernetes/blob/2a017f94bcf8d04cbbbbdc6695bcf74273d630ed/test/e2e/network/networking.go#L283
> >>>>
> >> https://github.com/kubernetes/kubernetes/blob/2a017f94bcf8d04cbbbbdc6695bcf74273d630ed/test/e2e/network/networking.go#L236
> >>>>
> >> Thanks for the test and information! Really need input from k8s folks to
> >> understand more.
> >>
> >> Thanks,
> >> Han
> >>
> >
> > Like I said below, these are kubernetes conformance tests so I'll let
> > k8s folks confirm if such failures can be ignored/worked around.
> >
> >>>> It seems to me that they're testing explicitly for a  "pod that accesses
> >>>> two services that might have the pod itself as a backend".
> >>>>
> >>>> So, if I'm not wrong, we'd become non-compliant in this case.
> >>>>
> >>>>>> If this turns out to be mandatory I guess we might want to also look
> >>>>>> into alternatives like:
> >>>>>> - getting help from the HW to offload matches like ct_tuple()
> >>>>>
> >>>>> I believe this is going to happen in the future. HWOL is continuously
> >>>>> enhanced.
> >>>>>
> >>>>
> >>>> That would make things simpler.
> >>>>
> >>>>>> - limiting the impact of "a." only to some load balancers (e.g., would
> >>>>>> it help to use different hairpin lookup tables for such load
> >> balancers?)
> >>>>>
> >>>>> I am not sure if this would work, and not sure if this is a good
> >> approach,
> >>>>> either. In general, I believe it is possible to solve the problem with
> >> more
> >>>>> complex pipelines, but we need to keep in mind it is quite easy to
> >>>>> introduce other performance problems (either control plane or data
> >> plane) -
> >>>>> many of the changes that led to the current implementation were for
> >> performance
> >>>>> optimizations, some for control plane, some for HWOL, and some for
> >> reducing
> >>>>> recirculations. I'd avoid complexity unless it is really necessary.
> >> Let's
> >>>>> get more input for the problem, and based on that we can decide if we
> >> want
> >>>>> to move to a more complex solution.
> >>>>>
> >>>>
> >>>> Sure, I agree we need to find the best solution.
> >>>>
> >>>> In my opinion OVN should be HW-agnostic.  We did try to adjust the way
> >>>> OVN generates openflows in order to make it more "HWOL-friendly" but
> >>>> that shouldn't come with the cost of breaking CMS features (if that's
> >>>> the case here).
> >>>>
> >>>>> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1931599
> >>>>> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1903651
> >>>>>
> >>>>> Thanks,
> >>>>> Han
> >>>>>
> >>>>
> >>>> Thanks,
> >>>> Dumitru


Hi Han,  Dumitru,

What's the status of this patch series?  Does it need a v2? Sorry I
didn't follow all the discussions.

If the patch series doesn't need a v2, I can probably start reviewing.

Thanks
Numan

> >>>
> >>
>
> _______________________________________________
> dev mailing list
> dev@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
>
Han Zhou July 21, 2022, 5:55 a.m. UTC | #9
On Wed, Jul 20, 2022 at 12:13 PM Numan Siddique <numans@ovn.org> wrote:
>
> On Wed, Jul 20, 2022 at 6:18 AM Dumitru Ceara <dceara@redhat.com> wrote:
> >
> >
> > Hi Han,
> >
> > On 7/7/22 19:02, Dumitru Ceara wrote:
> > > On 7/7/22 18:21, Han Zhou wrote:
> > >> On Thu, Jul 7, 2022 at 8:55 AM Dumitru Ceara <dceara@redhat.com>
wrote:
> > >>>
> > >>> On 7/7/22 13:45, Dumitru Ceara wrote:
> > >>>> On 7/7/22 00:08, Han Zhou wrote:
> > >>>>> On Wed, Jul 6, 2022 at 8:45 AM Dumitru Ceara <dceara@redhat.com>
wrote:
> > >>>>>>
> > >>>>>> Hi Han,
> > >>>>>>
> > >>>>>> On 7/6/22 00:41, Han Zhou wrote:
> > >>>>>>> The ls_in_pre_stateful priority 120 flow that saves dst IP and
Port
> > >> to
> > >>>>>>> registers is causing a critical dataplane performance impact to
> > >>>>>>> short-lived connections, because it unwildcards megaflows with
exact
> > >>>>>>> match on dst IP and L4 ports. Any new connections with a
different
> > >>>>>>> client side L4 port will encounter datapath flow miss and
upcall to
> > >>>>>>> ovs-vswitchd.
> > >>>>>>>
> > >>>>>>> These fields (dst IP and port) were saved to registers to solve
a
> > >>>>>>> problem of LB hairpin use case when different VIPs are sharing
> > >>>>>>> overlapping backend+port [0]. The change [0] might not have as
wide
> > >>>>>>> performance impact as it is now because at that time one of the
match
> > >>>>>>> condition "REGBIT_CONNTRACK_NAT == 1" was set only for
established
> > >> and
> > >>>>>>> natted traffic, while now the impact is more obvious because
> > >>>>>>> REGBIT_CONNTRACK_NAT is now set for all IP traffic (if any VIP
> > >>>>>>> configured on the LS) since commit [1], after several other
> > >> indirectly
> > >>>>>>> related optimizations and refactors.
> > >>>>>>>
> > >>>>>>> Since the changes that introduced the performance problem had
their
> > >>>>>>> own values (fixes problems or optimizes performance), so we
don't
> > >> want
> > >>>>>>> to revert any of the changes (and it is also not
straightforward to
> > >>>>>>> revert any of them because there have been lots of changes and
> > >> refactors
> > >>>>>>> on top of them).
> > >>>>>>>
> > >>>>>>> Change [0] itself has added an alternative way to solve the
> > >> overlapping
> > >>>>>>> backends problem, which utilizes ct fields instead of saving
dst IP
> > >> and
> > >>>>>>> port to registers. This patch forces to that approach and
removes the
> > >>>>>>> flows/actions that saves the dst IP and port to avoid the
dataplane
> > >>>>>>> performance problem for short-lived connections.
> > >>>>>>>
> > >>>>>>> (With this approach, the ct_state DNAT is not HW offload
friendly,
> > >> so it
> > >>>>>>> may result in those flows not being offloaded, which is
supposed to
> > >> be
> > >>>>>>> solved in a follow-up patch)
> >
> > While we're waiting for more input from ovn-k8s on this, I have a
> > slightly different question.
> >
> > Aren't we hitting a similar problem in the router pipeline, due to
> > REG_ORIG_TP_DPORT_ROUTER?
> >
> > Thanks,
> > Dumitru
> >
> > >>>>>>>
> > >>>>>>> [0] ce0ef8d59850 ("Properly handle hairpin traffic for VIPs with
> > >> shared
> > >>>>> backends.")
> > >>>>>>> [1] 0038579d1928 ("northd: Optimize ct nat for load balancer
> > >> traffic.")
> > >>>>>>>
> > >>>>>>> Signed-off-by: Han Zhou <hzhou@ovn.org>
> > >>>>>>> ---
> > >>>>>>
> > >>>>>> I think the main concern I have is that this forces us to choose
> > >> between:
> > >>>>>> a. non hwol friendly flows (reduced performance)
> > >>>>>> b. less functionality (with the knob in patch 3/3 set to false).
> > >>>>>>
> > >>>>> Thanks Dumitru for the comments! I agree the solution is not
ideal,
> > >> but if
> > >>>>> we look at it from a different angle, even with a), for most
> > >> pod->service
> > >>>>> traffic the performance is still much better than how it is today
(not
> > >>>>> offloaded kernel datapath is still much better than userspace
> > >> slowpath).
> > >>>>> And *hopefully* b) is ok for most use cases to get HW-offload
> > >> capability.
> > >>>>>
> > >>>
> > >>> Just a note on this item.  I'm a bit confused about why all traffic
> > >>> would be slowpath-ed?  It's just the first packet that goes to
vswitchd
> > >>> as an upcall, right?
> > >>>
> > >>
> > >> It is about all traffic for *short-lived* connections. Any clients ->
> > >> service traffic with the pattern:
> > >> 1. TCP connection setup
> >> 2. Send API request, receive response
> > >> 3. Close TCP connection
> > >> It can be tested with netperf TCP_CRR. Every time the client side
TCP port
> > >> is different, but since the server -> client DP flow includes the
client
> > >> TCP port, for each such transaction there is going to be at least a
DP flow
> > >> miss and goes to userspace. Such application latency would be very
high. In
> > >> addition, it causes the OVS handler CPU spikes very high which would
> > >> further impact the dataplane performance of the system.
> > >>
> > >>> Once the megaflow (even if it's more specific than ideal) is
installed
> > >>> all following traffic in that session should be forwarded in fast
path
> > >>> (kernel).
> > >>>
> > >>> Also, I'm not sure I follow why the same behavior wouldn't happen
with
> > >>> your changes too for pod->service.  The datapath flow includes the
> > >>> dp_hash() match, and that's likely different for different
connections.
> > >>>
> > >>
> > >> With the change it is not going to happen, because the match is for
server
> > >> side port only.
> > >> For dp_hash(), from what I remember, there are at most as many
> > >> megaflows as there are buckets (distinct masked hash values).
> > >>
> > >
> > > OK, that explains it, thanks.
> > >
> > >>> Or am I missing something?
> > >>>
> > >>>>>> Change [0] was added to address the case when a service in
kubernetes
> > >> is
> > >>>>>> exposed via two different k8s services objects that share the
same
> > >>>>>> endpoints.  That translates in ovn-k8s to two different OVN load
> > >>>>>> balancer VIPs that share the same backends.  For such cases, if
the
> > >>>>>> service is being accessed by one of its own backends we need to
be
> > >> able
> > >>>>>> to differentiate based on the VIP address it used to connect to.
> > >>>>>>
> > >>>>>> CC: Tim Rozet, Dan Williams for some more input from the ovn-k8s
side
> > >> on
> > >>>>>> how common it is that an OVN-networked pod accesses two (or more)
> > >>>>>> services that might have the pod itself as a backend.
> > >>>>>>
> > >>>>>
> > >>>>> Yes, we definitely need input from ovn-k8s side. The information
we
> > >> got so
> > >>>>> far: the change [0] was to fix a bug [2] reported by Tim.
However, the
> > >> bug
> > >>>>> description didn't mention anything about two VIPs sharing the
same
> > >>>>> backend. Tim also mentioned in the ovn-k8s meeting last week that
the
> > >>>>> original user bug report for [2] was [3], and [3] was in fact a
> > >> completely
> > >>>>> different problem (although it is related to hairpin, too). So, I
am
> > >> under
> > >>>>
> > >>>> I am not completely sure about the link between [3] and [2], maybe
Tim
> > >>>> remembers more.
> > >>>>
> > >>>>> the impression that "an OVN-networked pod accesses two (or more)
> > >> services
> > >>>>> that might have the pod itself as a backend" might be a very rare
use
> > >> case,
> > >>>>> if it exists at all.
> > >>>>>
> > >>>>
> > >>>> I went ahead and set the new ovn-allow-vips-share-hairpin-backend
knob
> > >>>> to "false" and pushed it to my fork to run the ovn-kubernetes CI.
This
> > >>>> runs a subset of the kubernetes conformance tests (AFAICT) and some
> > >>>> specific e2e ovn-kubernetes tests.
> > >>>>
> > >>>> The results are here:
> > >>>>
> > >>>>
https://github.com/dceara/ovn/runs/7230840427?check_suite_focus=true
> > >>>>
> > >>>> Focusing on the conformance failures:
> > >>>>
> > >>>> 2022-07-07T10:31:24.7228157Z  [Fail] [sig-network] Networking
> > >>>> Granular Checks: Services [It] should function for endpoint-Service: http
> > >>>> 2022-07-07T10:31:24.7228940Z  vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
> > >>>> ...
> > >>>> 2022-07-07T10:31:24.7240313Z  [Fail] [sig-network] Networking
> > >>>> Granular Checks: Services [It] should function for multiple endpoint-Services with same selector
> > >>>> 2022-07-07T10:31:24.7240819Z  vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
> > >>>> ...
> > >>>>
> > >>>> Checking how these tests are defined:
> > >>>>
> > >>
https://github.com/kubernetes/kubernetes/blob/2a017f94bcf8d04cbbbbdc6695bcf74273d630ed/test/e2e/network/networking.go#L283
> > >>>>
> > >>
https://github.com/kubernetes/kubernetes/blob/2a017f94bcf8d04cbbbbdc6695bcf74273d630ed/test/e2e/network/networking.go#L236
> > >>>>
> > >> Thanks for the test and information! Really need input from k8s
folks to
> > >> understand more.
> > >>
> > >> Thanks,
> > >> Han
> > >>
> > >
> > > Like I said below, these are kubernetes conformance tests so I'll let
> > > k8s folks confirm if such failures can be ignored/worked around.
> > >
> > >>>> It seems to me that they're testing explicitly for a  "pod that
accesses
> > >>>> two services that might have the pod itself as a backend".
> > >>>>
> > >>>> So, if I'm not wrong, we'd become non-compliant in this case.
> > >>>>
> > >>>>>> If this turns out to be mandatory I guess we might want to also
look
> > >>>>>> into alternatives like:
> > >>>>>> - getting help from the HW to offload matches like ct_tuple()
> > >>>>>
> > >>>>> I believe this is going to happen in the future. HWOL is
continuously
> > >>>>> enhanced.
> > >>>>>
> > >>>>
> > >>>> That would make things simpler.
> > >>>>
> > >>>>>> - limiting the impact of "a." only to some load balancers (e.g.,
would
> > >>>>>> it help to use different hairpin lookup tables for such load
> > >> balancers?)
> > >>>>>
> > >>>>> I am not sure if this would work, and not sure if this is a good
> > >> approach,
> > >>>>> either. In general, I believe it is possible to solve the problem
with
> > >> more
> > >>>>> complex pipelines, but we need to keep in mind it is quite easy to
> > >>>>> introduce other performance problems (either control plane or data
> > >> plane) -
> > >>>>> many of the changes that led to the current implementation were for
> > >> performance
> > >>>>> optimizations, some for control plane, some for HWOL, and some for
> > >> reducing
> > >>>>> recirculations. I'd avoid complexity unless it is really
necessary.
> > >> Let's
> > >>>>> get more input for the problem, and based on that we can decide
if we
> > >> want
> > >>>>> to move to a more complex solution.
> > >>>>>
> > >>>>
> > >>>> Sure, I agree we need to find the best solution.
> > >>>>
> > >>>> In my opinion OVN should be HW-agnostic.  We did try to adjust the
way
> > >>>> OVN generates openflows in order to make it more "HWOL-friendly"
but
> > >>>> that shouldn't come with the cost of breaking CMS features (if
that's
> > >>>> the case here).
> > >>>>
> > >>>>> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1931599
> > >>>>> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1903651
> > >>>>>
> > >>>>> Thanks,
> > >>>>> Han
> > >>>>>
> > >>>>
> > >>>> Thanks,
> > >>>> Dumitru
>
>
> Hi Han,  Dumitru,
>
> What's the status of this patch series?  Does it need a v2? Sorry I
> didn't follow all the discussions.
>
> If the patch series doesn't need a v2, I can probably start reviewing.

Hi Numan,
I am still waiting for clarification of the k8s requirement, which Tim said
at last week's ovn-k8s meeting he would discuss with Dumitru.
At the same time I am trying an alternative to see if we can solve the
performance problem while still supporting the corner cases (without adding
too much complexity to the pipeline).
So you can hold the review for now.

Thanks,
Han

>
> Thanks
> Numan
>
> > >>>
> > >>
> >
> > _______________________________________________
> > dev mailing list
> > dev@openvswitch.org
> > https://mail.openvswitch.org/mailman/listinfo/ovs-dev
> >
Han Zhou Aug. 24, 2022, 6:24 a.m. UTC | #10
On Wed, Jul 20, 2022 at 10:55 PM Han Zhou <hzhou@ovn.org> wrote:
>
>
>
> On Wed, Jul 20, 2022 at 12:13 PM Numan Siddique <numans@ovn.org> wrote:
> >
> > On Wed, Jul 20, 2022 at 6:18 AM Dumitru Ceara <dceara@redhat.com> wrote:
> > >
> > >
> > > Hi Han,
> > >
> > > On 7/7/22 19:02, Dumitru Ceara wrote:
> > > > On 7/7/22 18:21, Han Zhou wrote:
> > > >> On Thu, Jul 7, 2022 at 8:55 AM Dumitru Ceara <dceara@redhat.com>
wrote:
> > > >>>
> > > >>> On 7/7/22 13:45, Dumitru Ceara wrote:
> > > >>>> On 7/7/22 00:08, Han Zhou wrote:
> > > >>>>> On Wed, Jul 6, 2022 at 8:45 AM Dumitru Ceara <dceara@redhat.com>
wrote:
> > > >>>>>>
> > > >>>>>> Hi Han,
> > > >>>>>>
> > > >>>>>> On 7/6/22 00:41, Han Zhou wrote:
> > > >>>>>>> The ls_in_pre_stateful priority 120 flow that saves dst IP
and Port
> > > >> to
> > > >>>>>>> registers is causing a critical dataplane performance impact
to
> > > >>>>>>> short-lived connections, because it unwildcards megaflows
with exact
> > > >>>>>>> match on dst IP and L4 ports. Any new connections with a
different
> > > >>>>>>> client side L4 port will encounter datapath flow miss and
upcall to
> > > >>>>>>> ovs-vswitchd.
> > > >>>>>>>
> > > >>>>>>> These fields (dst IP and port) were saved to registers to
solve a
> > > >>>>>>> problem of LB hairpin use case when different VIPs are sharing
> > > >>>>>>> overlapping backend+port [0]. The change [0] might not have
as wide
> > > >>>>>>> performance impact as it is now because at that time one of
the match
> > > >>>>>>> condition "REGBIT_CONNTRACK_NAT == 1" was set only for
established
> > > >> and
> > > >>>>>>> natted traffic, while now the impact is more obvious because
> > > >>>>>>> REGBIT_CONNTRACK_NAT is now set for all IP traffic (if any VIP
> > > >>>>>>> configured on the LS) since commit [1], after several other
> > > >> indirectly
> > > >>>>>>> related optimizations and refactors.
> > > >>>>>>>
> > > >>>>>>> Since the changes that introduced the performance problem had
their
> > > >>>>>>> own values (fixes problems or optimizes performance), so we
don't
> > > >> want
> > > >>>>>>> to revert any of the changes (and it is also not
straightforward to
> > > >>>>>>> revert any of them because there have been lots of changes and
> > > >> refactors
> > > >>>>>>> on top of them).
> > > >>>>>>>
> > > >>>>>>> Change [0] itself has added an alternative way to solve the
> > > >> overlapping
> > > >>>>>>> backends problem, which utilizes ct fields instead of saving
dst IP
> > > >> and
> > > >>>>>>> port to registers. This patch forces to that approach and
removes the
> > > >>>>>>> flows/actions that saves the dst IP and port to avoid the
dataplane
> > > >>>>>>> performance problem for short-lived connections.
> > > >>>>>>>
> > > >>>>>>> (With this approach, the ct_state DNAT is not HW offload
friendly,
> > > >> so it
> > > >>>>>>> may result in those flows not being offloaded, which is
supposed to
> > > >> be
> > > >>>>>>> solved in a follow-up patch)
> > >
> > > While we're waiting for more input from ovn-k8s on this, I have a
> > > slightly different question.
> > >
> > > Aren't we hitting a similar problem in the router pipeline, due to
> > > REG_ORIG_TP_DPORT_ROUTER?
> > >
> > > Thanks,
> > > Dumitru
> > >
> > > >>>>>>>
> > > >>>>>>> [0] ce0ef8d59850 ("Properly handle hairpin traffic for VIPs
with
> > > >> shared
> > > >>>>> backends.")
> > > >>>>>>> [1] 0038579d1928 ("northd: Optimize ct nat for load balancer
> > > >> traffic.")
> > > >>>>>>>
> > > >>>>>>> Signed-off-by: Han Zhou <hzhou@ovn.org>
> > > >>>>>>> ---
> > > >>>>>>
> > > >>>>>> I think the main concern I have is that this forces us to
choose
> > > >> between:
> > > >>>>>> a. non hwol friendly flows (reduced performance)
> > > >>>>>> b. less functionality (with the knob in patch 3/3 set to
false).
> > > >>>>>>
> > > >>>>> Thanks Dumitru for the comments! I agree the solution is not
ideal,
> > > >> but if
> > > >>>>> we look at it from a different angle, even with a), for most
> > > >> pod->service
> > > >>>>> traffic the performance is still much better than how it is
today (not
> > > >>>>> offloaded kernel datapath is still much better than userspace
> > > >> slowpath).
> > > >>>>> And *hopefully* b) is ok for most use cases to get HW-offload
> > > >> capability.
> > > >>>>>
> > > >>>
> > > >>> Just a note on this item.  I'm a bit confused about why all
traffic
> > > >>> would be slowpath-ed?  It's just the first packet that goes to
vswitchd
> > > >>> as an upcall, right?
> > > >>>
> > > >>
> > > >> It is about all traffic for *short-lived* connections. Any clients
->
> > > >> service traffic with the pattern:
> > > >> 1. TCP connection setup
> > > >> 2. Send API request, receive response
> > > >> 3. Close TCP connection
> > > >> It can be tested with netperf TCP_CRR. Every time the client side
TCP port
> > > >> is different, but since the server -> client DP flow includes the
client
> > > >> TCP port, for each such transaction there is going to be at least
a DP flow
> > > >> miss and goes to userspace. Such application latency would be very
high. In
> > > >> addition, it causes the OVS handler CPU spikes very high which
would
> > > >> further impact the dataplane performance of the system.
> > > >>
> > > >>> Once the megaflow (even if it's more specific than ideal) is
installed
> > > >>> all following traffic in that session should be forwarded in fast
path
> > > >>> (kernel).
> > > >>>
> > > >>> Also, I'm not sure I follow why the same behavior wouldn't happen
with
> > > >>> your changes too for pod->service.  The datapath flow includes the
> > > >>> dp_hash() match, and that's likely different for different
connections.
> > > >>>
> > > >>
> > > >> With the change it is not going to happen, because the match is
for server
> > > >> side port only.
> > > >> For dp_hash(), from what I remember, there are at most as many
> > > >> megaflows as there are buckets (distinct masked hash values).
> > > >>
> > > >
> > > > OK, that explains it, thanks.
> > > >
> > > >>> Or am I missing something?
> > > >>>
> > > >>>>>> Change [0] was added to address the case when a service in
kubernetes
> > > >> is
> > > >>>>>> exposed via two different k8s services objects that share the
same
> > > >>>>>> endpoints.  That translates in ovn-k8s to two different OVN
load
> > > >>>>>> balancer VIPs that share the same backends.  For such cases,
if the
> > > >>>>>> service is being accessed by one of its own backends we need
to be
> > > >> able
> > > >>>>>> to differentiate based on the VIP address it used to connect
to.
> > > >>>>>>
> > > >>>>>> CC: Tim Rozet, Dan Williams for some more input from the
ovn-k8s side
> > > >> on
> > > >>>>>> how common it is that an OVN-networked pod accesses two (or
more)
> > > >>>>>> services that might have the pod itself as a backend.
> > > >>>>>>
> > > >>>>>
> > > >>>>> Yes, we definitely need input from ovn-k8s side. The
information we
> > > >> got so
> > > >>>>> far: the change [0] was to fix a bug [2] reported by Tim.
However, the
> > > >> bug
> > > >>>>> description didn't mention anything about two VIPs sharing the
same
> > > >>>>> backend. Tim also mentioned in the ovn-k8s meeting last week
that the
> > > >>>>> original user bug report for [2] was [3], and [3] was in fact a
> > > >> completely
> > > >>>>> different problem (although it is related to hairpin, too). So,
I am
> > > >> under
> > > >>>>
> > > >>>> I am not completely sure about the link between [3] and [2],
maybe Tim
> > > >>>> remembers more.
> > > >>>>
> > > >>>>> the impression that "an OVN-networked pod accesses two (or more)
> > > >> services
> > > >>>>> that might have the pod itself as a backend" might be a very
rare use
> > > >> case,
> > > >>>>> if it exists at all.
> > > >>>>>
> > > >>>>
> > > >>>> I went ahead and set the new
ovn-allow-vips-share-hairpin-backend knob
> > > >>>> to "false" and pushed it to my fork to run the ovn-kubernetes
CI.  This
> > > >>>> runs a subset of the kubernetes conformance tests (AFAICT) and
some
> > > >>>> specific e2e ovn-kubernetes tests.
> > > >>>>
> > > >>>> The results are here:
> > > >>>>
> > > >>>>
https://github.com/dceara/ovn/runs/7230840427?check_suite_focus=true
> > > >>>>
> > > >>>> Focusing on the conformance failures:
> > > >>>>
> > > >>>> 2022-07-07T10:31:24.7228157Z  [Fail] [sig-network] Networking
> > > >>>> Granular Checks: Services [It] should function for endpoint-Service: http
> > > >>>> 2022-07-07T10:31:24.7228940Z  vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
> > > >>>> ...
> > > >>>> 2022-07-07T10:31:24.7240313Z  [Fail] [sig-network] Networking
> > > >>>> Granular Checks: Services [It] should function for multiple endpoint-Services with same selector
> > > >>>> 2022-07-07T10:31:24.7240819Z  vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
> > > >>>> ...
> > > >>>>
> > > >>>> Checking how these tests are defined:
> > > >>>>
> > > >>
https://github.com/kubernetes/kubernetes/blob/2a017f94bcf8d04cbbbbdc6695bcf74273d630ed/test/e2e/network/networking.go#L283
> > > >>>>
> > > >>
https://github.com/kubernetes/kubernetes/blob/2a017f94bcf8d04cbbbbdc6695bcf74273d630ed/test/e2e/network/networking.go#L236
> > > >>>>
> > > >> Thanks for the test and information! Really need input from k8s
folks to
> > > >> understand more.
> > > >>
> > > >> Thanks,
> > > >> Han
> > > >>
> > > >
> > > > Like I said below, these are kubernetes conformance tests so I'll
let
> > > > k8s folks confirm if such failures can be ignored/worked around.
> > > >
> > > >>>> It seems to me that they're testing explicitly for a  "pod that
accesses
> > > >>>> two services that might have the pod itself as a backend".
> > > >>>>
> > > >>>> So, if I'm not wrong, we'd become non-compliant in this case.
> > > >>>>
> > > >>>>>> If this turns out to be mandatory I guess we might want to
also look
> > > >>>>>> into alternatives like:
> > > >>>>>> - getting help from the HW to offload matches like ct_tuple()
> > > >>>>>
> > > >>>>> I believe this is going to happen in the future. HWOL is
continuously
> > > >>>>> enhanced.
> > > >>>>>
> > > >>>>
> > > >>>> That would make things simpler.
> > > >>>>
> > > >>>>>> - limiting the impact of "a." only to some load balancers
(e.g., would
> > > >>>>>> it help to use different hairpin lookup tables for such load
> > > >> balancers?)
> > > >>>>>
> > > >>>>> I am not sure if this would work, and not sure if this is a good
> > > >> approach,
> > > >>>>> either. In general, I believe it is possible to solve the
problem with
> > > >> more
> > > >>>>> complex pipelines, but we need to keep in mind it is quite easy
to
> > > >>>>> introduce other performance problems (either control plane or
data
> > > >> plane) -
> > > >>>>> many of the changes that led to the current implementation were
> > > >> performance
> > > >>>>> optimizations, some for control plane, some for HWOL, and some
for
> > > >> reducing
> > > >>>>> recirculations. I'd avoid complexity unless it is really
necessary.
> > > >> Let's
> > > >>>>> get more input for the problem, and based on that we can decide
if we
> > > >> want
> > > >>>>> to move to a more complex solution.
> > > >>>>>
> > > >>>>
> > > >>>> Sure, I agree we need to find the best solution.
> > > >>>>
> > > >>>> In my opinion OVN should be HW-agnostic.  We did try to adjust
the way
> > > >>>> OVN generates openflows in order to make it more "HWOL-friendly"
but
> > > >>>> that shouldn't come with the cost of breaking CMS features (if
that's
> > > >>>> the case here).
> > > >>>>
> > > >>>>> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1931599
> > > >>>>> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1903651
> > > >>>>>
> > > >>>>> Thanks,
> > > >>>>> Han
> > > >>>>>
> > > >>>>
> > > >>>> Thanks,
> > > >>>> Dumitru
> >
> >
> > Hi Han,  Dumitru,
> >
> > What's the status of this patch series?  Does it need a v2? Sorry I
> > didn't follow all the discussions.
> >
> > If the patch series doesn't need a v2, I can probably start reviewing.
>
> Hi Numan,
> I am still waiting for clarification of the k8s requirement, which Tim
said at last week's ovn-k8s meeting he would discuss with Dumitru.
> At the same time I am trying an alternative to see if we can solve the
performance problem while still supporting the corner cases (without adding
too much complexity to the pipeline).
> So you can hold the review for now.
>
> Thanks,
> Han
>
Hi Dumitru, Numan,

I abandoned this series and replaced it with a cleaner solution:
https://patchwork.ozlabs.org/project/ovn/patch/20220824061730.2523979-1-hzhou@ovn.org/

It should solve the performance problem without impacting the hairpin
feature or sacrificing HWOL. PTAL.
If it is ok, I'd expect it to be backported at least to 22.03 the LTS
branch, since this has significant performance implications.

Thanks,
Han

> >
> > Thanks
> > Numan
> >
> > > >>>
> > > >>
> > >
> > > _______________________________________________
> > > dev mailing list
> > > dev@openvswitch.org
> > > https://mail.openvswitch.org/mailman/listinfo/ovs-dev
> > >

Patch

diff --git a/controller/lflow.c b/controller/lflow.c
index 934b23698..a44f6d056 100644
--- a/controller/lflow.c
+++ b/controller/lflow.c
@@ -1932,10 +1932,6 @@  add_lb_vip_hairpin_reply_action(struct in6_addr *vip6, ovs_be32 vip,
 }
 
 /* Adds flows to detect hairpin sessions.
- *
- * For backwards compatibilty with older ovn-northd versions, uses
- * ct_nw_dst(), ct_ipv6_dst(), ct_tp_dst(), otherwise uses the
- * original destination tuple stored by ovn-northd.
  */
 static void
 add_lb_vip_hairpin_flows(struct ovn_controller_lb *lb,
@@ -1956,10 +1952,8 @@  add_lb_vip_hairpin_flows(struct ovn_controller_lb *lb,
     /* Matching on ct_nw_dst()/ct_ipv6_dst()/ct_tp_dst() requires matching
      * on ct_state first.
      */
-    if (!lb->hairpin_orig_tuple) {
-        uint32_t ct_state = OVS_CS_F_TRACKED | OVS_CS_F_DST_NAT;
-        match_set_ct_state_masked(&hairpin_match, ct_state, ct_state);
-    }
+    uint32_t ct_state = OVS_CS_F_TRACKED | OVS_CS_F_DST_NAT;
+    match_set_ct_state_masked(&hairpin_match, ct_state, ct_state);
 
     if (IN6_IS_ADDR_V4MAPPED(&lb_vip->vip)) {
         ovs_be32 bip4 = in6_addr_get_mapped_ipv4(&lb_backend->ip);
@@ -1971,14 +1965,7 @@  add_lb_vip_hairpin_flows(struct ovn_controller_lb *lb,
         match_set_dl_type(&hairpin_match, htons(ETH_TYPE_IP));
         match_set_nw_src(&hairpin_match, bip4);
         match_set_nw_dst(&hairpin_match, bip4);
-
-        if (!lb->hairpin_orig_tuple) {
-            match_set_ct_nw_dst(&hairpin_match, vip4);
-        } else {
-            match_set_reg(&hairpin_match,
-                          MFF_LOG_LB_ORIG_DIP_IPV4 - MFF_LOG_REG0,
-                          ntohl(vip4));
-        }
+        match_set_ct_nw_dst(&hairpin_match, vip4);
 
         add_lb_vip_hairpin_reply_action(NULL, snat_vip4, lb_proto,
                                         lb_backend->port,
@@ -1993,17 +1980,7 @@  add_lb_vip_hairpin_flows(struct ovn_controller_lb *lb,
         match_set_dl_type(&hairpin_match, htons(ETH_TYPE_IPV6));
         match_set_ipv6_src(&hairpin_match, bip6);
         match_set_ipv6_dst(&hairpin_match, bip6);
-
-        if (!lb->hairpin_orig_tuple) {
-            match_set_ct_ipv6_dst(&hairpin_match, &lb_vip->vip);
-        } else {
-            ovs_be128 vip6_value;
-
-            memcpy(&vip6_value, &lb_vip->vip, sizeof vip6_value);
-            match_set_xxreg(&hairpin_match,
-                            MFF_LOG_LB_ORIG_DIP_IPV6 - MFF_LOG_XXREG0,
-                            ntoh128(vip6_value));
-        }
+        match_set_ct_ipv6_dst(&hairpin_match, &lb_vip->vip);
 
         add_lb_vip_hairpin_reply_action(snat_vip6, 0, lb_proto,
                                         lb_backend->port,
@@ -2014,14 +1991,8 @@  add_lb_vip_hairpin_flows(struct ovn_controller_lb *lb,
     if (lb_backend->port) {
         match_set_nw_proto(&hairpin_match, lb_proto);
         match_set_tp_dst(&hairpin_match, htons(lb_backend->port));
-        if (!lb->hairpin_orig_tuple) {
-            match_set_ct_nw_proto(&hairpin_match, lb_proto);
-            match_set_ct_tp_dst(&hairpin_match, htons(lb_vip->vip_port));
-        } else {
-            match_set_reg_masked(&hairpin_match,
-                                 MFF_LOG_LB_ORIG_TP_DPORT - MFF_REG0,
-                                 lb_vip->vip_port, UINT16_MAX);
-        }
+        match_set_ct_nw_proto(&hairpin_match, lb_proto);
+        match_set_ct_tp_dst(&hairpin_match, htons(lb_vip->vip_port));
     }
 
     /* In the original direction, only match on traffic that was already
@@ -2218,44 +2189,23 @@  add_lb_ct_snat_hairpin_vip_flow(struct ovn_controller_lb *lb,
     /* Matching on ct_nw_dst()/ct_ipv6_dst()/ct_tp_dst() requires matching
      * on ct_state first.
      */
-    if (!lb->hairpin_orig_tuple) {
-        uint32_t ct_state = OVS_CS_F_TRACKED | OVS_CS_F_DST_NAT;
-        match_set_ct_state_masked(&match, ct_state, ct_state);
-    }
+    uint32_t ct_state = OVS_CS_F_TRACKED | OVS_CS_F_DST_NAT;
+    match_set_ct_state_masked(&match, ct_state, ct_state);
 
     if (address_family == AF_INET) {
         ovs_be32 vip4 = in6_addr_get_mapped_ipv4(&lb_vip->vip);
 
         match_set_dl_type(&match, htons(ETH_TYPE_IP));
-
-        if (!lb->hairpin_orig_tuple) {
-            match_set_ct_nw_dst(&match, vip4);
-        } else {
-            match_set_reg(&match, MFF_LOG_LB_ORIG_DIP_IPV4 - MFF_LOG_REG0,
-                          ntohl(vip4));
-        }
+        match_set_ct_nw_dst(&match, vip4);
     } else {
         match_set_dl_type(&match, htons(ETH_TYPE_IPV6));
-        if (!lb->hairpin_orig_tuple) {
-            match_set_ct_ipv6_dst(&match, &lb_vip->vip);
-        } else {
-            ovs_be128 vip6_value;
-
-            memcpy(&vip6_value, &lb_vip->vip, sizeof vip6_value);
-            match_set_xxreg(&match, MFF_LOG_LB_ORIG_DIP_IPV6 - MFF_LOG_XXREG0,
-                            ntoh128(vip6_value));
-        }
+        match_set_ct_ipv6_dst(&match, &lb_vip->vip);
     }
 
     match_set_nw_proto(&match, lb_proto);
     if (lb_vip->vip_port) {
-        if (!lb->hairpin_orig_tuple) {
-            match_set_ct_nw_proto(&match, lb_proto);
-            match_set_ct_tp_dst(&match, htons(lb_vip->vip_port));
-        } else {
-            match_set_reg_masked(&match, MFF_LOG_LB_ORIG_TP_DPORT - MFF_REG0,
-                                 lb_vip->vip_port, UINT16_MAX);
-        }
+        match_set_ct_nw_proto(&match, lb_proto);
+        match_set_ct_tp_dst(&match, htons(lb_vip->vip_port));
     }
 
     /* We need to "add_or_append" flows because this match may form part
diff --git a/include/ovn/logical-fields.h b/include/ovn/logical-fields.h
index bfb07ebef..1d5d4fbe3 100644
--- a/include/ovn/logical-fields.h
+++ b/include/ovn/logical-fields.h
@@ -45,11 +45,6 @@  enum ovn_controller_event {
  *
  * Make sure these don't overlap with the logical fields! */
 #define MFF_LOG_REG0             MFF_REG0
-#define MFF_LOG_LB_ORIG_DIP_IPV4 MFF_REG1
-#define MFF_LOG_LB_ORIG_TP_DPORT MFF_REG2
-
-#define MFF_LOG_XXREG0           MFF_XXREG0
-#define MFF_LOG_LB_ORIG_DIP_IPV6 MFF_XXREG1
 
 #define MFF_N_LOG_REGS 10
 
diff --git a/lib/lb.c b/lib/lb.c
index 7b0ed1abe..63eb5cf3d 100644
--- a/lib/lb.c
+++ b/lib/lb.c
@@ -301,9 +301,6 @@  ovn_controller_lb_create(const struct sbrec_load_balancer *sbrec_lb)
      */
     lb->n_vips = n_vips;
 
-    lb->hairpin_orig_tuple = smap_get_bool(&sbrec_lb->options,
-                                           "hairpin_orig_tuple",
-                                           false);
     ovn_lb_get_hairpin_snat_ip(&sbrec_lb->header_.uuid, &sbrec_lb->options,
                                &lb->hairpin_snat_ips);
     return lb;
diff --git a/lib/lb.h b/lib/lb.h
index 832ed31fb..424dd789e 100644
--- a/lib/lb.h
+++ b/lib/lb.h
@@ -98,9 +98,6 @@  struct ovn_controller_lb {
 
     struct ovn_lb_vip *vips;
     size_t n_vips;
-    bool hairpin_orig_tuple; /* True if ovn-northd stores the original
-                              * destination tuple in registers.
-                              */
 
     struct lport_addresses hairpin_snat_ips; /* IP (v4 and/or v6) to be used
                                               * as source for hairpinned
diff --git a/northd/northd.c b/northd/northd.c
index 6997c280c..79fcd0aaa 100644
--- a/northd/northd.c
+++ b/northd/northd.c
@@ -211,10 +211,6 @@  enum ovn_stage {
 #define REGBIT_FROM_RAMP          "reg0[14]"
 #define REGBIT_PORT_SEC_DROP      "reg0[15]"
 
-#define REG_ORIG_DIP_IPV4         "reg1"
-#define REG_ORIG_DIP_IPV6         "xxreg1"
-#define REG_ORIG_TP_DPORT         "reg2[0..15]"
-
 /* Register definitions for switches and routers. */
 
 /* Indicate that this packet has been recirculated using egress
@@ -266,26 +262,26 @@  enum ovn_stage {
  * OVS register usage:
  *
  * Logical Switch pipeline:
- * +----+----------------------------------------------+---+------------------+
- * | R0 |     REGBIT_{CONNTRACK/DHCP/DNS}              |   |                  |
- * |    |     REGBIT_{HAIRPIN/HAIRPIN_REPLY}           |   |                  |
- * |    | REGBIT_ACL_HINT_{ALLOW_NEW/ALLOW/DROP/BLOCK} |   |                  |
- * |    |     REGBIT_ACL_LABEL                         | X |                  |
- * +----+----------------------------------------------+ X |                  |
- * | R1 |         ORIG_DIP_IPV4 (>= IN_STATEFUL)       | R |                  |
- * +----+----------------------------------------------+ E |                  |
- * | R2 |         ORIG_TP_DPORT (>= IN_STATEFUL)       | G |                  |
- * +----+----------------------------------------------+ 0 |                  |
- * | R3 |                  ACL LABEL                   |   |                  |
- * +----+----------------------------------------------+---+------------------+
- * | R4 |                   UNUSED                     |   |                  |
- * +----+----------------------------------------------+ X |   ORIG_DIP_IPV6  |
- * | R5 |                   UNUSED                     | X | (>= IN_STATEFUL) |
- * +----+----------------------------------------------+ R |                  |
- * | R6 |                   UNUSED                     | E |                  |
- * +----+----------------------------------------------+ G |                  |
- * | R7 |                   UNUSED                     | 1 |                  |
- * +----+----------------------------------------------+---+------------------+
+ * +----+----------------------------------------------+
+ * | R0 |     REGBIT_{CONNTRACK/DHCP/DNS}              |
+ * |    |     REGBIT_{HAIRPIN/HAIRPIN_REPLY}           |
+ * |    | REGBIT_ACL_HINT_{ALLOW_NEW/ALLOW/DROP/BLOCK} |
+ * |    |     REGBIT_ACL_LABEL                         |
+ * +----+----------------------------------------------+
+ * | R1 |                   UNUSED                     |
+ * +----+----------------------------------------------+
+ * | R2 |                   UNUSED                     |
+ * +----+----------------------------------------------+
+ * | R3 |                  ACL LABEL                   |
+ * +----+----------------------------------------------+
+ * | R4 |                   UNUSED                     |
+ * +----+----------------------------------------------+
+ * | R5 |                   UNUSED                     |
+ * +----+----------------------------------------------+
+ * | R6 |                   UNUSED                     |
+ * +----+----------------------------------------------+
+ * | R7 |                   UNUSED                     |
+ * +----+----------------------------------------------+
  * | R8 |                   UNUSED                     |
  * +----+----------------------------------------------+
  * | R9 |                   UNUSED                     |
@@ -4288,7 +4284,7 @@  sync_lbs(struct northd_input *input_data, struct ovsdb_idl_txn *ovnsb_txn,
          */
         struct smap options;
         smap_clone(&options, &lb->nlb->options);
-        smap_replace(&options, "hairpin_orig_tuple", "true");
+        smap_replace(&options, "hairpin_orig_tuple", "false");
 
         struct sbrec_datapath_binding **lb_dps =
             xmalloc(lb->n_nb_ls * sizeof *lb_dps);
@@ -5917,42 +5913,14 @@  build_pre_stateful(struct ovn_datapath *od,
     ovn_lflow_add(lflows, od, S_SWITCH_OUT_PRE_STATEFUL, 0, "1", "next;");
 
     const char *ct_lb_action = features->ct_no_masked_label
-                               ? "ct_lb_mark"
-                               : "ct_lb";
-    const char *lb_protocols[] = {"tcp", "udp", "sctp"};
-    struct ds actions = DS_EMPTY_INITIALIZER;
-    struct ds match = DS_EMPTY_INITIALIZER;
-
-    for (size_t i = 0; i < ARRAY_SIZE(lb_protocols); i++) {
-        ds_clear(&match);
-        ds_clear(&actions);
-        ds_put_format(&match, REGBIT_CONNTRACK_NAT" == 1 && ip4 && %s",
-                      lb_protocols[i]);
-        ds_put_format(&actions, REG_ORIG_DIP_IPV4 " = ip4.dst; "
-                                REG_ORIG_TP_DPORT " = %s.dst; %s;",
-                      lb_protocols[i], ct_lb_action);
-        ovn_lflow_add(lflows, od, S_SWITCH_IN_PRE_STATEFUL, 120,
-                      ds_cstr(&match), ds_cstr(&actions));
-
-        ds_clear(&match);
-        ds_clear(&actions);
-        ds_put_format(&match, REGBIT_CONNTRACK_NAT" == 1 && ip6 && %s",
-                      lb_protocols[i]);
-        ds_put_format(&actions, REG_ORIG_DIP_IPV6 " = ip6.dst; "
-                                REG_ORIG_TP_DPORT " = %s.dst; %s;",
-                      lb_protocols[i], ct_lb_action);
-        ovn_lflow_add(lflows, od, S_SWITCH_IN_PRE_STATEFUL, 120,
-                      ds_cstr(&match), ds_cstr(&actions));
-    }
-
-    ds_clear(&actions);
-    ds_put_format(&actions, "%s;", ct_lb_action);
+                               ? "ct_lb_mark;"
+                               : "ct_lb;";
 
     ovn_lflow_add(lflows, od, S_SWITCH_IN_PRE_STATEFUL, 110,
-                  REGBIT_CONNTRACK_NAT" == 1", ds_cstr(&actions));
+                  REGBIT_CONNTRACK_NAT" == 1", ct_lb_action);
 
     ovn_lflow_add(lflows, od, S_SWITCH_OUT_PRE_STATEFUL, 110,
-                  REGBIT_CONNTRACK_NAT" == 1", ds_cstr(&actions));
+                  REGBIT_CONNTRACK_NAT" == 1", ct_lb_action);
 
     /* If REGBIT_CONNTRACK_DEFRAG is set as 1, then the packets should be
      * sent to conntrack for tracking and defragmentation. */
@@ -5961,9 +5929,6 @@  build_pre_stateful(struct ovn_datapath *od,
 
     ovn_lflow_add(lflows, od, S_SWITCH_OUT_PRE_STATEFUL, 100,
                   REGBIT_CONNTRACK_DEFRAG" == 1", "ct_next;");
-
-    ds_destroy(&actions);
-    ds_destroy(&match);
 }
 
 static void
@@ -6879,12 +6844,8 @@  build_lb_rules(struct hmap *lflows, struct ovn_northd_lb *lb, bool ct_lb_mark,
          */
         if (IN6_IS_ADDR_V4MAPPED(&lb_vip->vip)) {
             ip_match = "ip4";
-            ds_put_format(action, REG_ORIG_DIP_IPV4 " = %s; ",
-                          lb_vip->vip_str);
         } else {
             ip_match = "ip6";
-            ds_put_format(action, REG_ORIG_DIP_IPV6 " = %s; ",
-                          lb_vip->vip_str);
         }
 
         const char *proto = NULL;
@@ -6897,12 +6858,6 @@  build_lb_rules(struct hmap *lflows, struct ovn_northd_lb *lb, bool ct_lb_mark,
                     proto = "sctp";
                 }
             }
-
-            /* Store the original destination port to be used when generating
-             * hairpin flows.
-             */
-            ds_put_format(action, REG_ORIG_TP_DPORT " = %"PRIu16"; ",
-                          lb_vip->vip_port);
         }
 
         /* New connections in Ingress table. */
diff --git a/northd/ovn-northd.8.xml b/northd/ovn-northd.8.xml
index 59c584710..d9b99a67f 100644
--- a/northd/ovn-northd.8.xml
+++ b/northd/ovn-northd.8.xml
@@ -535,26 +535,11 @@ 
       traffic to the next table.
     </p>
     <ul>
-      <li>
-        Priority-120 flows that send the packets to connection tracker using
-        <code>ct_lb_mark;</code> as the action so that the already established
-        traffic destined to the load balancer VIP gets DNATted based on a hint
-        provided by the previous tables (with a match
-        for <code>reg0[2] == 1</code> and on supported load balancer protocols
-        and address families).  For IPv4 traffic the flows also load the
-        original destination IP and transport port in registers
-        <code>reg1</code> and <code>reg2</code>.  For IPv6 traffic the flows
-        also load the original destination IP and transport port in
-        registers <code>xxreg1</code> and <code>reg2</code>.
-      </li>
-
       <li>
          A priority-110 flow sends the packets to connection tracker based
          on a hint provided by the previous tables
          (with a match for <code>reg0[2] == 1</code>) by using the
-         <code>ct_lb_mark;</code> action.  This flow is added to handle
-         the traffic for load balancer VIPs whose protocol is not defined
-         (mainly for ICMP traffic).
+         <code>ct_lb_mark;</code> action.
       </li>
 
       <li>
@@ -877,11 +862,7 @@ 
         of <var>VIP</var>. If health check is enabled, then <var>args</var>
         will only contain those endpoints whose service monitor status entry
         in <code>OVN_Southbound</code> db is either <code>online</code> or
-        empty.  For IPv4 traffic the flow also loads the original destination
-        IP and transport port in registers <code>reg1</code> and
-        <code>reg2</code>.  For IPv6 traffic the flow also loads the original
-        destination IP and transport port in registers <code>xxreg1</code> and
-        <code>reg2</code>.
+        empty.
         The above flow is created even if the load balancer is attached to a
         logical router connected to the current logical switch and
         the <code>install_ls_lb_from_router</code> variable in
@@ -897,11 +878,6 @@ 
         VIP</var></code>. The action on this flow is <code>
         ct_lb_mark(<var>args</var>)</code>, where <var>args</var> contains comma
         separated IP addresses of the same address family as <var>VIP</var>.
-        For IPv4 traffic the flow also loads the original destination
-        IP and transport port in registers <code>reg1</code> and
-        <code>reg2</code>.  For IPv6 traffic the flow also loads the original
-        destination IP and transport port in registers <code>xxreg1</code> and
-        <code>reg2</code>.
         The above flow is created even if the load balancer is attached to a
         logical router connected to the current logical switch and
         the <code>install_ls_lb_from_router</code> variable in
@@ -1919,7 +1895,7 @@  output;
 
     <ul>
       <li>
-        A Priority-120 flow that send the packets to connection tracker using
+        A priority-110 flow that sends the packets to connection tracker using
         <code>ct_lb_mark;</code> as the action so that the already established
         traffic gets unDNATted from the backend IP to the load balancer VIP
         based on a hint provided by the previous tables with a match
diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at
index 033b58b8c..ed6ac3b17 100644
--- a/tests/ovn-northd.at
+++ b/tests/ovn-northd.at
@@ -1220,7 +1220,7 @@  check ovn-nbctl --wait=sb ls-lb-add sw0 lb1
 AT_CAPTURE_FILE([sbflows])
 OVS_WAIT_FOR_OUTPUT(
   [ovn-sbctl dump-flows sw0 | tee sbflows | grep 'priority=120.*backends' | sed 's/table=..//'], 0, [dnl
-  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 10.0.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
+  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
 ])
 
 AS_BOX([Delete the Load_Balancer_Health_Check])
@@ -1230,7 +1230,7 @@  wait_row_count Service_Monitor 0
 AT_CAPTURE_FILE([sbflows2])
 OVS_WAIT_FOR_OUTPUT(
   [ovn-sbctl dump-flows sw0 | tee sbflows2 | grep 'priority=120.*backends' | sed 's/table=..//'], [0],
-[  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 10.0.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
+[  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
 ])
 
 AS_BOX([Create the Load_Balancer_Health_Check again.])
@@ -1242,7 +1242,7 @@  check ovn-nbctl --wait=sb sync
 
 ovn-sbctl dump-flows sw0 | grep backends | grep priority=120 > lflows.txt
 AT_CHECK([cat lflows.txt | sed 's/table=..//'], [0], [dnl
-  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 10.0.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
+  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
 ])
 
 AS_BOX([Get the uuid of both the service_monitor])
@@ -1252,7 +1252,7 @@  sm_sw1_p1=$(fetch_column Service_Monitor _uuid logical_port=sw1-p1)
 AT_CAPTURE_FILE([sbflows3])
 OVS_WAIT_FOR_OUTPUT(
   [ovn-sbctl dump-flows sw0 | tee sbflows 3 | grep 'priority=120.*backends' | sed 's/table=..//'], [0],
-[  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 10.0.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
+[  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
 ])
 
 AS_BOX([Set the service monitor for sw1-p1 to offline])
@@ -1263,7 +1263,7 @@  check ovn-nbctl --wait=sb sync
 AT_CAPTURE_FILE([sbflows4])
 OVS_WAIT_FOR_OUTPUT(
   [ovn-sbctl dump-flows sw0 | tee sbflows4 | grep 'priority=120.*backends' | sed 's/table=..//'], [0],
-[  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 10.0.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.3:80);)
+[  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:80);)
 ])
 
 AS_BOX([Set the service monitor for sw0-p1 to offline])
@@ -1292,7 +1292,7 @@  check ovn-nbctl --wait=sb sync
 AT_CAPTURE_FILE([sbflows7])
 OVS_WAIT_FOR_OUTPUT(
   [ovn-sbctl dump-flows sw0 | tee sbflows7 | grep backends | grep priority=120 | sed 's/table=..//'], 0,
-[  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 10.0.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
+[  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
 ])
 
 AS_BOX([Set the service monitor for sw1-p1 to error])
@@ -1303,7 +1303,7 @@  check ovn-nbctl --wait=sb sync
 ovn-sbctl dump-flows sw0 | grep "ip4.dst == 10.0.0.10 && tcp.dst == 80" \
 | grep priority=120 > lflows.txt
 AT_CHECK([cat lflows.txt | sed 's/table=..//'], [0], [dnl
-  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 10.0.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.3:80);)
+  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:80);)
 ])
 
 AS_BOX([Add one more vip to lb1])
@@ -1329,8 +1329,8 @@  AT_CAPTURE_FILE([sbflows9])
 OVS_WAIT_FOR_OUTPUT(
   [ovn-sbctl dump-flows sw0 | tee sbflows9 | grep backends | grep priority=120 | sed 's/table=..//' | sort],
   0,
-[  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 10.0.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.3:80);)
-  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.40 && tcp.dst == 1000), action=(reg0[[1]] = 0; reg1 = 10.0.0.40; reg2[[0..15]] = 1000; ct_lb_mark(backends=10.0.0.3:1000);)
+[  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:80);)
+  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.40 && tcp.dst == 1000), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:1000);)
 ])
 
 AS_BOX([Set the service monitor for sw1-p1 to online])
@@ -1343,8 +1343,8 @@  AT_CAPTURE_FILE([sbflows10])
 OVS_WAIT_FOR_OUTPUT(
   [ovn-sbctl dump-flows sw0 | tee sbflows10 | grep backends | grep priority=120 | sed 's/table=..//' | sort],
   0,
-[  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 10.0.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
-  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.40 && tcp.dst == 1000), action=(reg0[[1]] = 0; reg1 = 10.0.0.40; reg2[[0..15]] = 1000; ct_lb_mark(backends=10.0.0.3:1000,20.0.0.3:80);)
+[  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
+  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.40 && tcp.dst == 1000), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:1000,20.0.0.3:80);)
 ])
 
 AS_BOX([Associate lb1 to sw1])
@@ -1353,8 +1353,8 @@  AT_CAPTURE_FILE([sbflows11])
 OVS_WAIT_FOR_OUTPUT(
   [ovn-sbctl dump-flows sw1 | tee sbflows11 | grep backends | grep priority=120 | sed 's/table=..//' | sort],
   0, [dnl
-  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 10.0.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
-  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.40 && tcp.dst == 1000), action=(reg0[[1]] = 0; reg1 = 10.0.0.40; reg2[[0..15]] = 1000; ct_lb_mark(backends=10.0.0.3:1000,20.0.0.3:80);)
+  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80);)
+  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.40 && tcp.dst == 1000), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:1000,20.0.0.3:80);)
 ])
 
 AS_BOX([Now create lb2 same as lb1 but udp protocol.])
@@ -2555,7 +2555,7 @@  check_column "" sb:datapath_binding load_balancers external_ids:name=sw1
 echo
 echo "__file__:__line__: Set hairpin_snat_ip on lb1 and check that SB DB is updated."
 check ovn-nbctl --wait=sb set Load_Balancer lb1 options:hairpin_snat_ip="42.42.42.42 4242::4242"
-check_column "$lb1_uuid" sb:load_balancer _uuid name=lb1 options='{hairpin_orig_tuple="true", hairpin_snat_ip="42.42.42.42 4242::4242"}'
+check_column "$lb1_uuid" sb:load_balancer _uuid name=lb1 options='{hairpin_orig_tuple="false", hairpin_snat_ip="42.42.42.42 4242::4242"}'
 
 echo
 echo "__file__:__line__: Delete load balancers lb1 and lbg1 and check that datapath sw1's load_balancers is still empty."
@@ -3947,18 +3947,12 @@  check_stateful_flows() {
   table=? (ls_in_pre_stateful ), priority=0    , match=(1), action=(next;)
   table=? (ls_in_pre_stateful ), priority=100  , match=(reg0[[0]] == 1), action=(ct_next;)
   table=? (ls_in_pre_stateful ), priority=110  , match=(reg0[[2]] == 1), action=(ct_lb_mark;)
-  table=? (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && sctp), action=(reg1 = ip4.dst; reg2[[0..15]] = sctp.dst; ct_lb_mark;)
-  table=? (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && tcp), action=(reg1 = ip4.dst; reg2[[0..15]] = tcp.dst; ct_lb_mark;)
-  table=? (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && udp), action=(reg1 = ip4.dst; reg2[[0..15]] = udp.dst; ct_lb_mark;)
-  table=? (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && sctp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = sctp.dst; ct_lb_mark;)
-  table=? (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && tcp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = tcp.dst; ct_lb_mark;)
-  table=? (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && udp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = udp.dst; ct_lb_mark;)
 ])
 
     AT_CHECK([grep "ls_in_lb" sw0flows | sort | sed 's/table=../table=??/'], [0], [dnl
   table=??(ls_in_lb           ), priority=0    , match=(1), action=(next;)
-  table=??(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 10.0.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.4:8080);)
-  table=??(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.20 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 10.0.0.20; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.40:8080);)
+  table=??(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.4:8080);)
+  table=??(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.20 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.40:8080);)
 ])
 
     AT_CHECK([grep "ls_in_stateful" sw0flows | sort | sed 's/table=../table=??/'], [0], [dnl
@@ -4019,12 +4013,6 @@  AT_CHECK([grep "ls_in_pre_stateful" sw0flows | sort | sed 's/table=./table=?/'],
   table=? (ls_in_pre_stateful ), priority=0    , match=(1), action=(next;)
   table=? (ls_in_pre_stateful ), priority=100  , match=(reg0[[0]] == 1), action=(ct_next;)
   table=? (ls_in_pre_stateful ), priority=110  , match=(reg0[[2]] == 1), action=(ct_lb_mark;)
-  table=? (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && sctp), action=(reg1 = ip4.dst; reg2[[0..15]] = sctp.dst; ct_lb_mark;)
-  table=? (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && tcp), action=(reg1 = ip4.dst; reg2[[0..15]] = tcp.dst; ct_lb_mark;)
-  table=? (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && udp), action=(reg1 = ip4.dst; reg2[[0..15]] = udp.dst; ct_lb_mark;)
-  table=? (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && sctp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = sctp.dst; ct_lb_mark;)
-  table=? (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && tcp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = tcp.dst; ct_lb_mark;)
-  table=? (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && udp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = udp.dst; ct_lb_mark;)
 ])
 
 AT_CHECK([grep "ls_in_lb" sw0flows | sort | sed 's/table=../table=??/'], [0], [dnl
@@ -6392,7 +6380,7 @@  AT_CHECK([grep -e "ls_in_acl" lsflows | sed 's/table=../table=??/' | sort], [0],
 
 AT_CHECK([grep -e "ls_in_lb" lsflows | sed 's/table=../table=??/' | sort], [0], [dnl
   table=??(ls_in_lb           ), priority=0    , match=(1), action=(next;)
-  table=??(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 10.0.0.2), action=(reg0[[1]] = 0; reg1 = 10.0.0.2; ct_lb_mark(backends=10.0.0.10);)
+  table=??(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 10.0.0.2), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.10);)
 ])
 
 AT_CHECK([grep -e "ls_in_stateful" lsflows | sed 's/table=../table=??/' | sort], [0], [dnl
@@ -6445,7 +6433,7 @@  AT_CHECK([grep -e "ls_in_acl" lsflows | sed 's/table=../table=??/' | sort], [0],
 
 AT_CHECK([grep -e "ls_in_lb" lsflows | sed 's/table=../table=??/' | sort], [0], [dnl
   table=??(ls_in_lb           ), priority=0    , match=(1), action=(next;)
-  table=??(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 10.0.0.2), action=(reg0[[1]] = 0; reg1 = 10.0.0.2; ct_lb_mark(backends=10.0.0.10);)
+  table=??(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 10.0.0.2), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.10);)
 ])
 
 AT_CHECK([grep -e "ls_in_stateful" lsflows | sed 's/table=../table=??/' | sort], [0], [dnl
@@ -6498,7 +6486,7 @@  AT_CHECK([grep -e "ls_in_acl" lsflows | sed 's/table=../table=??/' | sort], [0],
 
 AT_CHECK([grep -e "ls_in_lb" lsflows | sed 's/table=../table=??/' | sort], [0], [dnl
   table=??(ls_in_lb           ), priority=0    , match=(1), action=(next;)
-  table=??(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 10.0.0.2), action=(reg0[[1]] = 0; reg1 = 10.0.0.2; ct_lb_mark(backends=10.0.0.10);)
+  table=??(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 10.0.0.2), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.10);)
 ])
 
 AT_CHECK([grep -e "ls_in_stateful" lsflows | sed 's/table=../table=??/' | sort], [0], [dnl
@@ -7468,14 +7456,8 @@  check ovn-nbctl --wait=sb sync
 AT_CHECK([ovn-sbctl lflow-list | grep -e natted -e ct_lb], [0], [dnl
   table=6 (lr_in_dnat         ), priority=110  , match=(ct.est && ip4 && reg0 == 66.66.66.66 && ct_mark.natted == 1), action=(next;)
   table=6 (lr_in_dnat         ), priority=110  , match=(ct.new && ip4 && reg0 == 66.66.66.66), action=(ct_lb_mark(backends=42.42.42.2);)
-  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && sctp), action=(reg1 = ip4.dst; reg2[[0..15]] = sctp.dst; ct_lb_mark;)
-  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && tcp), action=(reg1 = ip4.dst; reg2[[0..15]] = tcp.dst; ct_lb_mark;)
-  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && udp), action=(reg1 = ip4.dst; reg2[[0..15]] = udp.dst; ct_lb_mark;)
-  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && sctp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = sctp.dst; ct_lb_mark;)
-  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && tcp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = tcp.dst; ct_lb_mark;)
-  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && udp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = udp.dst; ct_lb_mark;)
   table=6 (ls_in_pre_stateful ), priority=110  , match=(reg0[[2]] == 1), action=(ct_lb_mark;)
-  table=11(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 66.66.66.66), action=(reg0[[1]] = 0; reg1 = 66.66.66.66; ct_lb_mark(backends=42.42.42.2);)
+  table=11(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 66.66.66.66), action=(reg0[[1]] = 0; ct_lb_mark(backends=42.42.42.2);)
   table=2 (ls_out_pre_stateful), priority=110  , match=(reg0[[2]] == 1), action=(ct_lb_mark;)
 ])
 
@@ -7485,14 +7467,8 @@  check ovn-nbctl --wait=sb sync
 AT_CHECK([ovn-sbctl lflow-list | grep -e natted -e ct_lb], [0], [dnl
   table=6 (lr_in_dnat         ), priority=110  , match=(ct.est && ip4 && reg0 == 66.66.66.66 && ct_label.natted == 1), action=(next;)
   table=6 (lr_in_dnat         ), priority=110  , match=(ct.new && ip4 && reg0 == 66.66.66.66), action=(ct_lb(backends=42.42.42.2);)
-  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && sctp), action=(reg1 = ip4.dst; reg2[[0..15]] = sctp.dst; ct_lb;)
-  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && tcp), action=(reg1 = ip4.dst; reg2[[0..15]] = tcp.dst; ct_lb;)
-  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && udp), action=(reg1 = ip4.dst; reg2[[0..15]] = udp.dst; ct_lb;)
-  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && sctp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = sctp.dst; ct_lb;)
-  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && tcp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = tcp.dst; ct_lb;)
-  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && udp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = udp.dst; ct_lb;)
   table=6 (ls_in_pre_stateful ), priority=110  , match=(reg0[[2]] == 1), action=(ct_lb;)
-  table=11(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 66.66.66.66), action=(reg0[[1]] = 0; reg1 = 66.66.66.66; ct_lb(backends=42.42.42.2);)
+  table=11(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 66.66.66.66), action=(reg0[[1]] = 0; ct_lb(backends=42.42.42.2);)
   table=2 (ls_out_pre_stateful), priority=110  , match=(reg0[[2]] == 1), action=(ct_lb;)
 ])
 
@@ -7502,14 +7478,8 @@  check ovn-nbctl --wait=sb sync
 AT_CHECK([ovn-sbctl lflow-list | grep -e natted -e ct_lb], [0], [dnl
   table=6 (lr_in_dnat         ), priority=110  , match=(ct.est && ip4 && reg0 == 66.66.66.66 && ct_mark.natted == 1), action=(next;)
   table=6 (lr_in_dnat         ), priority=110  , match=(ct.new && ip4 && reg0 == 66.66.66.66), action=(ct_lb_mark(backends=42.42.42.2);)
-  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && sctp), action=(reg1 = ip4.dst; reg2[[0..15]] = sctp.dst; ct_lb_mark;)
-  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && tcp), action=(reg1 = ip4.dst; reg2[[0..15]] = tcp.dst; ct_lb_mark;)
-  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && udp), action=(reg1 = ip4.dst; reg2[[0..15]] = udp.dst; ct_lb_mark;)
-  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && sctp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = sctp.dst; ct_lb_mark;)
-  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && tcp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = tcp.dst; ct_lb_mark;)
-  table=6 (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && udp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = udp.dst; ct_lb_mark;)
   table=6 (ls_in_pre_stateful ), priority=110  , match=(reg0[[2]] == 1), action=(ct_lb_mark;)
-  table=11(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 66.66.66.66), action=(reg0[[1]] = 0; reg1 = 66.66.66.66; ct_lb_mark(backends=42.42.42.2);)
+  table=11(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 66.66.66.66), action=(reg0[[1]] = 0; ct_lb_mark(backends=42.42.42.2);)
   table=2 (ls_out_pre_stateful), priority=110  , match=(reg0[[2]] == 1), action=(ct_lb_mark;)
 ])
 
@@ -7680,11 +7650,11 @@  AT_CAPTURE_FILE([S1flows])
 
 AT_CHECK([grep "ls_in_lb" S0flows | sort], [0], [dnl
   table=11(ls_in_lb           ), priority=0    , match=(1), action=(next;)
-  table=11(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 172.16.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 172.16.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.2:80);)
+  table=11(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 172.16.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.2:80);)
 ])
 AT_CHECK([grep "ls_in_lb" S1flows | sort], [0], [dnl
   table=11(ls_in_lb           ), priority=0    , match=(1), action=(next;)
-  table=11(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 172.16.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 172.16.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.2:80);)
+  table=11(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 172.16.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.2:80);)
 ])
 
 s0_uuid=$(ovn-sbctl get datapath S0 _uuid)
diff --git a/tests/ovn.at b/tests/ovn.at
index a4a696d51..0dd9a1c2e 100644
--- a/tests/ovn.at
+++ b/tests/ovn.at
@@ -23578,13 +23578,7 @@  OVS_WAIT_FOR_OUTPUT(
   [ovn-sbctl dump-flows > sbflows
    ovn-sbctl dump-flows sw0 | grep ct_lb_mark | grep priority=120 | sed 's/table=..//'], 0,
   [dnl
-  (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && sctp), action=(reg1 = ip4.dst; reg2[[0..15]] = sctp.dst; ct_lb_mark;)
-  (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && tcp), action=(reg1 = ip4.dst; reg2[[0..15]] = tcp.dst; ct_lb_mark;)
-  (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip4 && udp), action=(reg1 = ip4.dst; reg2[[0..15]] = udp.dst; ct_lb_mark;)
-  (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && sctp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = sctp.dst; ct_lb_mark;)
-  (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && tcp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = tcp.dst; ct_lb_mark;)
-  (ls_in_pre_stateful ), priority=120  , match=(reg0[[2]] == 1 && ip6 && udp), action=(xxreg1 = ip6.dst; reg2[[0..15]] = udp.dst; ct_lb_mark;)
-  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 10.0.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80; hash_fields="ip_dst,ip_src,tcp_dst,tcp_src");)
+  (ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.3:80,20.0.0.3:80; hash_fields="ip_dst,ip_src,tcp_dst,tcp_src");)
 ])
 
 AT_CAPTURE_FILE([sbflows2])
@@ -28503,7 +28497,7 @@  OVS_WAIT_UNTIL(
 )
 
 AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x58585858,reg2=0x1f90/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
 ])
 
 AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69], [0], [dnl
@@ -28511,7 +28505,7 @@  NXST_FLOW reply (xid=0x8):
 ])
 
 AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
- table=70, priority=100,tcp,reg1=0x58585858,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
 ])
 
 AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=68 | grep -v NXST], [1], [dnl
@@ -28530,9 +28524,9 @@  OVS_WAIT_UNTIL(
 )
 
 AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x58585858,reg2=0x1f90/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
 ])
 
 AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69], [0], [dnl
@@ -28540,8 +28534,8 @@  NXST_FLOW reply (xid=0x8):
 ])
 
 AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
- table=70, priority=100,tcp,reg1=0x58585858,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
- table=70, priority=100,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
+ table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
 ])
 
 AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=68 | grep -v NXST], [1], [dnl
@@ -28563,17 +28557,17 @@  OVS_WAIT_UNTIL(
 )
 
 AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x58585858,reg2=0x1f90/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
 ])
 
 AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69 | grep -v NXST], [1], [dnl
 ])
 
 AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
- table=70, priority=100,tcp,reg1=0x58585858,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
- table=70, priority=100,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
+ table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
 ])
 
 check ovn-nbctl --wait=hv ls-lb-add sw0 lb-ipv4-udp
@@ -28587,35 +28581,35 @@  OVS_WAIT_UNTIL(
 )
 
 AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x58585858,reg2=0x1f90/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,udp,reg1=0x58585858,reg2=0xfc8/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
 ])
 
 AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69 | grep -v NXST], [1], [dnl
 ])
 
 AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
- table=70, priority=100,tcp,reg1=0x58585858,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
- table=70, priority=100,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
- table=70, priority=100,udp,reg1=0x58585858,reg2=0xfc8/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
 ])
 
 AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x58585858,reg2=0x1f90/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,udp,reg1=0x58585858,reg2=0xfc8/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
 ])
 
 AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69 | grep -v NXST], [1], [dnl
 ])
 
 AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
- table=70, priority=100,tcp,reg1=0x58585858,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
- table=70, priority=100,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
- table=70, priority=100,udp,reg1=0x58585858,reg2=0xfc8/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
 ])
 
 check ovn-nbctl --wait=hv ls-lb-add sw0 lb-ipv6-tcp
@@ -28629,39 +28623,39 @@  OVS_WAIT_UNTIL(
 )
 
 AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x58585858,reg2=0x1f90/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,udp,reg1=0x58585858,reg2=0xfc8/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
 ])
 
 AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69 | grep -v NXST], [1], [dnl
 ])
 
 AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
- table=70, priority=100,tcp,reg1=0x58585858,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
- table=70, priority=100,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
- table=70, priority=100,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
- table=70, priority=100,udp,reg1=0x58585858,reg2=0xfc8/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
 ])
 
 AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x58585858,reg2=0x1f90/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,udp,reg1=0x58585858,reg2=0xfc8/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
 ])
 
 AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69 | grep -v NXST], [1], [dnl
 ])
 
 AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
- table=70, priority=100,tcp,reg1=0x58585858,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
- table=70, priority=100,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
- table=70, priority=100,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
- table=70, priority=100,udp,reg1=0x58585858,reg2=0xfc8/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
 ])
 
 check ovn-nbctl --wait=hv ls-lb-add sw0 lb-ipv6-udp
@@ -28675,43 +28669,43 @@  OVS_WAIT_UNTIL(
 )
 
 AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x58585858,reg2=0x1f90/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,udp,reg1=0x58585858,reg2=0xfc8/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,udp6,reg2=0xfc8/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_ipv6_dst=8800::88,ct_nw_proto=17,ct_tp_dst=4040,udp6,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
 ])
 
 AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69 | grep -v NXST], [1], [dnl
 ])
 
 AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
- table=70, priority=100,tcp,reg1=0x58585858,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
- table=70, priority=100,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
- table=70, priority=100,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
- table=70, priority=100,udp,reg1=0x58585858,reg2=0xfc8/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
- table=70, priority=100,udp6,reg2=0xfc8/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_ipv6_dst=8800::88,ct_nw_proto=17,ct_tp_dst=4040,udp6 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
 ])
 
 AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x58585858,reg2=0x1f90/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,udp,reg1=0x58585858,reg2=0xfc8/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,udp6,reg2=0xfc8/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_ipv6_dst=8800::88,ct_nw_proto=17,ct_tp_dst=4040,udp6,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
 ])
 
 AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69 | grep -v NXST], [1], [dnl
 ])
 
 AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
- table=70, priority=100,tcp,reg1=0x58585858,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
- table=70, priority=100,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
- table=70, priority=100,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
- table=70, priority=100,udp,reg1=0x58585858,reg2=0xfc8/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
- table=70, priority=100,udp6,reg2=0xfc8/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_ipv6_dst=8800::88,ct_nw_proto=17,ct_tp_dst=4040,udp6 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
 ])
 
 check ovn-nbctl --wait=hv ls-lb-add sw1 lb-ipv6-udp
@@ -28727,59 +28721,6 @@  OVS_WAIT_UNTIL(
 )
 
 AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x58585858,reg2=0x1f90/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,udp,reg1=0x58585858,reg2=0xfc8/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,udp6,reg2=0xfc8/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
-])
-
-AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69 | grep -v NXST], [1], [dnl
-])
-
-AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
- table=70, priority=100,tcp,reg1=0x58585858,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
- table=70, priority=100,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
- table=70, priority=100,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
- table=70, priority=100,udp,reg1=0x58585858,reg2=0xfc8/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
- table=70, priority=100,udp6,reg2=0xfc8/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
-])
-
-AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x58585858,reg2=0x1f90/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=42.42.42.42,nw_dst=42.42.42.42,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,udp,reg1=0x58585858,reg2=0xfc8/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,udp6,reg2=0xfc8/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
-])
-
-AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69 | grep -v NXST], [1], [dnl
-])
-
-AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
- table=70, priority=100,tcp,reg1=0x58585858,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
- table=70, priority=100,tcp,reg1=0x5858585a,reg2=0x1f90/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
- table=70, priority=100,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
- table=70, priority=100,udp,reg1=0x58585858,reg2=0xfc8/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
- table=70, priority=100,udp6,reg2=0xfc8/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
-])
-
-# Check backwards compatibility with ovn-northd versions that don't store the
-# original destination tuple.
-#
-# ovn-controller should fall back to matching on ct_nw_dst()/ct_tp_dst().
-as northd-backup ovn-appctl -t NORTHD_TYPE pause
-as northd ovn-appctl -t NORTHD_TYPE pause
-
-check ovn-sbctl \
-    -- remove load_balancer lb-ipv4-tcp options hairpin_orig_tuple \
-    -- remove load_balancer lb-ipv6-tcp options hairpin_orig_tuple \
-    -- remove load_balancer lb-ipv4-udp options hairpin_orig_tuple \
-    -- remove load_balancer lb-ipv6-udp options hairpin_orig_tuple
-
-OVS_WAIT_FOR_OUTPUT([as hv1 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
  table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_ipv6_dst=8800::88,ct_nw_proto=17,ct_tp_dst=4040,udp6,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
  table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
  table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
@@ -28788,10 +28729,10 @@  OVS_WAIT_FOR_OUTPUT([as hv1 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_a
  table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp,nw_src=52.52.52.52,nw_dst=52.52.52.52,tp_dst=4042 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.90,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
 ])
 
-AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69 | grep -v NXST], [1], [dnl
+AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=69 | grep -v NXST], [1], [dnl
 ])
 
-OVS_WAIT_FOR_OUTPUT([as hv1 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
+AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
  table=70, priority=100,ct_state=+trk+dnat,ct_ipv6_dst=8800::88,ct_nw_proto=17,ct_tp_dst=4040,udp6 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
  table=70, priority=100,ct_state=+trk+dnat,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
  table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
@@ -28799,7 +28740,7 @@  OVS_WAIT_FOR_OUTPUT([as hv1 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_a
  table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
 ])
 
-OVS_WAIT_FOR_OUTPUT([as hv2 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
+AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
  table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_ipv6_dst=8800::88,ct_nw_proto=17,ct_tp_dst=4040,udp6,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
  table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
  table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
@@ -28811,7 +28752,7 @@  OVS_WAIT_FOR_OUTPUT([as hv2 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_a
 AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69 | grep -v NXST], [1], [dnl
 ])
 
-OVS_WAIT_FOR_OUTPUT([as hv2 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
+AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
  table=70, priority=100,ct_state=+trk+dnat,ct_ipv6_dst=8800::88,ct_nw_proto=17,ct_tp_dst=4040,udp6 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
  table=70, priority=100,ct_state=+trk+dnat,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
  table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
@@ -28819,11 +28760,6 @@  OVS_WAIT_FOR_OUTPUT([as hv2 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_a
  table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.90,ct_nw_proto=6,ct_tp_dst=8080,tcp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.90))
 ])
 
-# Resume ovn-northd.
-as northd ovn-appctl -t NORTHD_TYPE resume
-as northd-backup ovn-appctl -t NORTHD_TYPE resume
-check ovn-nbctl --wait=hv sync
-
 as hv2 ovs-vsctl del-port hv2-vif1
 OVS_WAIT_UNTIL([test x$(ovn-nbctl lsp-get-up sw0-p2) = xdown])
 
@@ -28857,9 +28793,9 @@  OVS_WAIT_UNTIL(
 )
 
 AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=68 | ofctl_strip_all | grep -v NXST], [0], [dnl
- table=68, priority=100,ct_mark=0x2/0x2,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,udp,reg1=0x58585858,reg2=0xfc8/0xffff,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
- table=68, priority=100,ct_mark=0x2/0x2,udp6,reg2=0xfc8/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_ipv6_dst=8800::88,ct_nw_proto=17,ct_tp_dst=4040,udp6,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6,ipv6_src=4200::1,ipv6_dst=4200::1,tp_dst=4041 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x86dd,NXM_NX_IPV6_SRC[[]],ipv6_dst=8800::88,nw_proto=6,NXM_OF_TCP_SRC[[]]=NXM_OF_TCP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
+ table=68, priority=100,ct_state=+trk+dnat,ct_mark=0x2/0x2,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp,nw_src=42.42.42.1,nw_dst=42.42.42.1,tp_dst=2021 actions=load:0x1->NXM_NX_REG10[[7]],learn(table=69,delete_learned,OXM_OF_METADATA[[]],eth_type=0x800,NXM_OF_IP_SRC[[]],ip_dst=88.88.88.88,nw_proto=17,NXM_OF_UDP_SRC[[]]=NXM_OF_UDP_DST[[]],load:0x1->NXM_NX_REG10[[7]])
 ])
 
 AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=69], [0], [dnl
@@ -28867,9 +28803,9 @@  NXST_FLOW reply (xid=0x8):
 ])
 
 AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_all | grep -v NXST], [0], [dnl
- table=70, priority=100,tcp6,reg2=0x1f90/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
- table=70, priority=100,udp,reg1=0x58585858,reg2=0xfc8/0xffff actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
- table=70, priority=100,udp6,reg2=0xfc8/0xffff,reg4=0x88000000,reg5=0,reg6=0,reg7=0x88 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_ipv6_dst=8800::88,ct_nw_proto=17,ct_tp_dst=4040,udp6 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_ipv6_dst=8800::88,ct_nw_proto=6,ct_tp_dst=8080,tcp6 actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=8800::88))
+ table=70, priority=100,ct_state=+trk+dnat,ct_nw_dst=88.88.88.88,ct_nw_proto=17,ct_tp_dst=4040,udp actions=ct(commit,zone=NXM_NX_REG12[[0..15]],nat(src=88.88.88.88))
 ])
 
 check ovn-nbctl --wait=hv ls-del sw0