
[ovs-dev,v3] northd: Make the use of common zone in NAT configurable

Message ID 20230309061604.723764-1-amusil@redhat.com
State Superseded
Series [ovs-dev,v3] northd: Make the use of common zone in NAT configurable

Checks

Context Check Description
ovsrobot/apply-robot success apply and check: success
ovsrobot/github-robot-_Build_and_Test success github build: passed
ovsrobot/github-robot-_ovn-kubernetes success github build: passed

Commit Message

Ales Musil March 9, 2023, 6:16 a.m. UTC
There are essentially three problems with the current
combination of DGP + SNAT + LB:

1) The first packet is SNATed in the common zone due to
a problem with pinctrl not preserving ct_mark/ct_label.
The CT commit would create a SNAT entry in the same zone
as the DNAT entry.

2) The unSNAT for the reply always happened in the common zone
because of the loopback check, which would be triggered only
when we loop the packet through the LR. There are two
possibilities for how the reply packet would be handled:

a) If the entry for SNAT in the common zone had not yet timed out,
the packet would pass through unSNAT in the common zone, which
would be fine, and continue on. However, the unDNAT wouldn't work,
because CT cannot do unSNAT/unDNAT on the same packet twice in the
same zone. So the reply would arrive at the original interface,
but with the wrong source address.

b) If the entry for SNAT had timed out, the packet would loop and
do unSNAT correctly in a separate zone, and then also unDNAT. This
is no longer possible after the recent change 8c341b9d (northd: Drop
packets destined to router owned NAT IP for DGP). After that change
the reply would be dropped before looping, so the traffic would
never arrive at the original interface.

3) The unDNAT was happening only if the DGP was the outport,
i.e. when the reply traffic was routed out; in the opposite case
the unDNAT happened only because of the looping, which made
outport == inport. That's why it worked before the introduction
of the explicit drop.

In order to fix all those issues, make two changes:

1) Include inport in the unDNAT match, so that both routing
directions are covered, e.g.
(inport == "dgp_port" || outport == "dgp_port").

2) Always use separate zones for SNAT and DNAT. As the common
zone is still needed for HWOL, make it optional via a
configuration option called "use_common_zone". This option
defaults to "false" and can be specified per LR. Use of separate
zones also eliminates the need for the flag propagation in the
"lr_out_chk_dnat_local" stage, removing the match on
ct_mark/ct_label.

Reported-at: https://bugzilla.redhat.com/2161281
Signed-off-by: Ales Musil <amusil@redhat.com>
---
v2: Fix flaky system test.
v3: Rebase on top of current main.
---
 northd/northd.c         | 509 +++++++++++++++++++++-------------------
 northd/ovn-northd.8.xml |  90 +------
 ovn-nb.xml              |  10 +
 tests/ovn-northd.at     | 217 ++++++++++++-----
 tests/ovn.at            |   3 +
 tests/system-ovn.at     |  75 +++++-
 6 files changed, 508 insertions(+), 396 deletions(-)

Comments

Simon Horman March 10, 2023, 1:25 p.m. UTC | #1
On Thu, Mar 09, 2023 at 07:16:04AM +0100, Ales Musil wrote:
> There are essentially three problems with the current
> combination of DGP + SNAT + LB:
> ...

I am seeing consistent failure of system-tests with this version :(

237: SNAT in separate zone from DNAT -- ovn-northd -- parallelization=yes -- ovn_monitor_all=yes FAILED (system-ovn.at:8702)
238: SNAT in separate zone from DNAT -- ovn-northd -- parallelization=yes -- ovn_monitor_all=no FAILED (system-ovn.at:8702)
239: SNAT in separate zone from DNAT -- ovn-northd -- parallelization=no -- ovn_monitor_all=yes FAILED (system-ovn.at:8702)
240: SNAT in separate zone from DNAT -- ovn-northd -- parallelization=no -- ovn_monitor_all=no FAILED (system-ovn.at:8702)

Link: https://github.com/horms/ovn/actions/runs/4383139214/jobs/7676416462#step:13:3734
Ales Musil March 13, 2023, 7:20 a.m. UTC | #2
On Fri, Mar 10, 2023 at 2:25 PM Simon Horman <simon.horman@corigine.com>
wrote:

> On Thu, Mar 09, 2023 at 07:16:04AM +0100, Ales Musil wrote:
> > ...
>
> I am seeing consistent failure of system-tests with this version :(
>
> 237: SNAT in separate zone from DNAT -- ovn-northd -- parallelization=yes
> -- ovn_monitor_all=yes FAILED (system-ovn.at:8702)
> 238: SNAT in separate zone from DNAT -- ovn-northd -- parallelization=yes
> -- ovn_monitor_all=no FAILED (system-ovn.at:8702)
> 239: SNAT in separate zone from DNAT -- ovn-northd -- parallelization=no
> -- ovn_monitor_all=yes FAILED (system-ovn.at:8702)
> 240: SNAT in separate zone from DNAT -- ovn-northd -- parallelization=no
> -- ovn_monitor_all=no FAILED (system-ovn.at:8702)
>
> Link:
> https://github.com/horms/ovn/actions/runs/4383139214/jobs/7676416462#step:13:3734
>
>
I was wondering why it was always green for me; these are the
system tests over the userspace datapath. It's nice that we are
already catching bugs with that. I will take a look at why it is
failing, but IMO it shouldn't fail, as this patch doesn't have
anything datapath-type specific.

Thanks,
Ales
Simon Horman March 13, 2023, 12:08 p.m. UTC | #3
On Mon, Mar 13, 2023 at 08:20:32AM +0100, Ales Musil wrote:
> On Fri, Mar 10, 2023 at 2:25 PM Simon Horman <simon.horman@corigine.com>
> wrote:
> 
> > On Thu, Mar 09, 2023 at 07:16:04AM +0100, Ales Musil wrote:
> > > ...
>
> I was wondering why it was always green for me and this is the system tests
> over userspace datapath.
> It's nice that we are catching bugs with that already. I will take a look
> at why it is failing. But IMO it shouldn't
> be as this patch doesn't have anything specific per datapath type.

It may be that there is some unreliability in the test, unrelated to your
patch. If so, I wouldn't see it as blocking your patch. But it would be
nice to get on top of it at some point.
Ales Musil March 13, 2023, 2:29 p.m. UTC | #4
On Mon, Mar 13, 2023 at 1:08 PM Simon Horman <simon.horman@corigine.com>
wrote:

> On Mon, Mar 13, 2023 at 08:20:32AM +0100, Ales Musil wrote:
> > On Fri, Mar 10, 2023 at 2:25 PM Simon Horman <simon.horman@corigine.com>
> > wrote:
> >
> > > On Thu, Mar 09, 2023 at 07:16:04AM +0100, Ales Musil wrote:
> > > > ...
>
> It may be that there is some unreliability in the test, unrelated to your
> patch. If so, I wouldn't see it as blocking your patch. But it would be
> nice to get on top of it at some point.
>
>
We are actually hitting the recirculation limit in this scenario (6 for
userspace datapath).
I'm not yet sure how to properly solve that.

Thanks,
Ales
Simon Horman March 14, 2023, 10:29 a.m. UTC | #5
On Mon, Mar 13, 2023 at 03:29:19PM +0100, Ales Musil wrote:
> On Mon, Mar 13, 2023 at 1:08 PM Simon Horman <simon.horman@corigine.com>
> wrote:
> > On Mon, Mar 13, 2023 at 08:20:32AM +0100, Ales Musil wrote:
> > > On Fri, Mar 10, 2023 at 2:25 PM Simon Horman <simon.horman@corigine.com>
> > > wrote:
> > > > On Thu, Mar 09, 2023 at 07:16:04AM +0100, Ales Musil wrote:

...

> > > > ...
> >
> We are actually hitting the recirculation limit in this scenario (6 for
> userspace datapath).
> I'm not yet sure how to properly solve that.

Is it an issue with the way the test is structured,
the feature the test is exercising,
or something else?
Ales Musil March 14, 2023, 10:42 a.m. UTC | #6
On Tue, Mar 14, 2023 at 11:29 AM Simon Horman <simon.horman@corigine.com>
wrote:

> On Mon, Mar 13, 2023 at 03:29:19PM +0100, Ales Musil wrote:
> > ...
> > We are actually hitting the recirculation limit in this scenario (6 for
> > userspace datapath).
> > I'm not yet sure how to properly solve that.
>
> Is it an issue with the way the test is structured,
> the feature the test is exercising,
> or something else?
>
>
It looks like a combination of multiple factors:

1) We have recently started testing userspace OvS.
2) This patch redesigns the test and the way we operate with CT zones.
3) The overall way the OVN pipeline is designed.

Patch

diff --git a/northd/northd.c b/northd/northd.c
index ab10fea6a..974394a07 100644
--- a/northd/northd.c
+++ b/northd/northd.c
@@ -10475,7 +10475,13 @@  build_distr_lrouter_nat_flows_for_lb(struct lrouter_nat_lb_flows_ctx *ctx,
                                      enum lrouter_nat_lb_flow_type type,
                                      struct ovn_datapath *od)
 {
-    char *gw_action = od->is_gw_router ? "ct_dnat;" : "ct_dnat_in_czone;";
+    bool use_common_zone =
+        smap_get_bool(&od->nbr->options, "use_common_zone", false);
+    char *gw_action = !od->is_gw_router && use_common_zone
+                      ? "ct_dnat_in_czone;"
+                      : "ct_dnat;";
+    struct ovn_port *dgp = od->l3dgw_ports[0];
+
     /* Store the match lengths, so we can reuse the ds buffer. */
     size_t new_match_len = ctx->new_match->length;
     size_t est_match_len = ctx->est_match->length;
@@ -10512,10 +10518,9 @@  build_distr_lrouter_nat_flows_for_lb(struct lrouter_nat_lb_flows_ctx *ctx,
     const char *action = (type == LROUTER_NAT_LB_FLOW_NORMAL)
                          ? gw_action : ctx->est_action[type];
 
-    ds_put_format(ctx->undnat_match,
-                  ") && outport == %s && is_chassis_resident(%s)",
-                  od->l3dgw_ports[0]->json_key,
-                  od->l3dgw_ports[0]->cr_port->json_key);
+    ds_put_format(ctx->undnat_match, ") && (inport == %s || outport == %s)"
+                  " && is_chassis_resident(%s)", dgp->json_key, dgp->json_key,
+                  dgp->cr_port->json_key);
     ovn_lflow_add_with_hint(ctx->lflows, od, S_ROUTER_OUT_UNDNAT, 120,
                             ds_cstr(ctx->undnat_match), action,
                             &ctx->lb->nlb->header_);
@@ -10997,13 +11002,8 @@  copy_ra_to_sb(struct ovn_port *op, const char *address_mode)
 static inline bool
 lrouter_nat_is_stateless(const struct nbrec_nat *nat)
 {
-    const char *stateless = smap_get(&nat->options, "stateless");
-
-    if (stateless && !strcmp(stateless, "true")) {
-        return true;
-    }
-
-    return false;
+    return smap_get_bool(&nat->options, "stateless", false) &&
+           !strcmp(nat->type, "dnat_and_snat");
 }
 
 /* Handles the match criteria and actions in logical flow
@@ -12716,7 +12716,6 @@  build_gateway_redirect_flows_for_lrouter(
             const struct ovn_nat *nat = &od->nat_entries[j];
 
             if (!lrouter_nat_is_stateless(nat->nb) ||
-                strcmp(nat->nb->type, "dnat_and_snat") ||
                 (!nat->nb->allowed_ext_ips && !nat->nb->exempted_ext_ips)) {
                 continue;
             }
@@ -13479,11 +13478,51 @@  build_lrouter_ipv4_ip_input(struct ovn_port *op,
     }
 }
 
+static void
+build_lrouter_in_unsnat_in_czone_flow(struct hmap *lflows,
+                                      struct ovn_datapath *od,
+                                      const struct nbrec_nat *nat,
+                                      struct ds *match, bool distributed,
+                                      bool is_v6, struct ovn_port *l3dgw_port)
+{
+    struct ds zone_match = DS_EMPTY_INITIALIZER;
+
+    ds_put_format(match, "ip && ip%c.dst == %s && inport == %s",
+                  is_v6 ? '6' : '4', nat->external_ip, l3dgw_port->json_key);
+    ds_clone(&zone_match, match);
+
+    ds_put_cstr(match, " && flags.loopback == 0");
+
+    /* Update common zone match for the hairpin traffic. */
+    ds_put_cstr(&zone_match, " && flags.loopback == 1"
+                             " && flags.use_snat_zone == 1");
+
+
+    if (!distributed && od->n_l3dgw_ports) {
+        /* Flows for NAT rules that are centralized are only
+         * programmed on the gateway chassis. */
+        ds_put_format(match, " && is_chassis_resident(%s)",
+                      l3dgw_port->cr_port->json_key);
+        ds_put_format(&zone_match, " && is_chassis_resident(%s)",
+                      l3dgw_port->cr_port->json_key);
+    }
+
+    ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_UNSNAT,
+                            100, ds_cstr(match), "ct_snat_in_czone;",
+                            &nat->header_);
+
+    ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_UNSNAT,
+                            100, ds_cstr(&zone_match), "ct_snat;",
+                            &nat->header_);
+
+    ds_destroy(&zone_match);
+}
+
 static void
 build_lrouter_in_unsnat_flow(struct hmap *lflows, struct ovn_datapath *od,
                              const struct nbrec_nat *nat, struct ds *match,
-                             struct ds *actions, bool distributed, bool is_v6,
-                             struct ovn_port *l3dgw_port)
+                             bool distributed, bool is_v6,
+                             struct ovn_port *l3dgw_port, bool use_common_zone)
 {
     /* Ingress UNSNAT table: It is for already established connections'
     * reverse traffic. i.e., SNAT has already been done in egress
@@ -13498,66 +13537,39 @@  build_lrouter_in_unsnat_flow(struct hmap *lflows, struct ovn_datapath *od,
         return;
     }
 
+    ds_clear(match);
+
     bool stateless = lrouter_nat_is_stateless(nat);
-    if (od->is_gw_router) {
-        ds_clear(match);
-        ds_clear(actions);
-        ds_put_format(match, "ip && ip%s.dst == %s",
-                      is_v6 ? "6" : "4", nat->external_ip);
-        if (!strcmp(nat->type, "dnat_and_snat") && stateless) {
-            ds_put_format(actions, "next;");
-        } else {
-            ds_put_cstr(actions, "ct_snat;");
-        }
 
-        ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_UNSNAT,
-                                90, ds_cstr(match), ds_cstr(actions),
-                                &nat->header_);
-    } else {
+    if (!od->is_gw_router && use_common_zone && !stateless) {
+        build_lrouter_in_unsnat_in_czone_flow(lflows, od, nat, match,
+                                              distributed, is_v6, l3dgw_port);
+        return;
+    }
+
+    uint16_t priority = od->is_gw_router ? 90 : 100;
+    const char *action = stateless ? "next;" : "ct_snat;";
+
+    ds_put_format(match, "ip && ip%c.dst == %s",
+                  is_v6 ? '6' : '4', nat->external_ip);
+
+    if (!od->is_gw_router) {
         /* Distributed router. */
 
         /* Traffic received on l3dgw_port is subject to NAT. */
-        ds_clear(match);
-        ds_clear(actions);
-        ds_put_format(match, "ip && ip%s.dst == %s && inport == %s && "
-                      "flags.loopback == 0", is_v6 ? "6" : "4",
-                      nat->external_ip, l3dgw_port->json_key);
+        ds_put_format(match, " && inport == %s", l3dgw_port->json_key);
+
         if (!distributed && od->n_l3dgw_ports) {
             /* Flows for NAT rules that are centralized are only
-            * programmed on the gateway chassis. */
+             * programmed on the gateway chassis. */
             ds_put_format(match, " && is_chassis_resident(%s)",
                           l3dgw_port->cr_port->json_key);
         }
-
-        if (!strcmp(nat->type, "dnat_and_snat") && stateless) {
-            ds_put_format(actions, "next;");
-        } else {
-            ds_put_cstr(actions, "ct_snat_in_czone;");
-        }
-
-        ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_UNSNAT,
-                                100, ds_cstr(match), ds_cstr(actions),
-                                &nat->header_);
-
-        if (!stateless) {
-            ds_clear(match);
-            ds_clear(actions);
-            ds_put_format(match, "ip && ip%s.dst == %s && inport == %s && "
-                          "flags.loopback == 1 && flags.use_snat_zone == 1",
-                          is_v6 ? "6" : "4", nat->external_ip,
-                          l3dgw_port->json_key);
-            if (!distributed && od->n_l3dgw_ports) {
-                /* Flows for NAT rules that are centralized are only
-                * programmed on the gateway chassis. */
-                ds_put_format(match, " && is_chassis_resident(%s)",
-                            l3dgw_port->cr_port->json_key);
-            }
-            ds_put_cstr(actions, "ct_snat;");
-            ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_UNSNAT,
-                                    100, ds_cstr(match), ds_cstr(actions),
-                                    &nat->header_);
-        }
     }
+
+    ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_UNSNAT,
+                            priority, ds_cstr(match), action,
+                            &nat->header_);
 }
 
 static void
@@ -13565,87 +13577,69 @@  build_lrouter_in_dnat_flow(struct hmap *lflows, struct ovn_datapath *od,
                            const struct nbrec_nat *nat, struct ds *match,
                            struct ds *actions, bool distributed,
                            int cidr_bits, bool is_v6,
-                           struct ovn_port *l3dgw_port)
+                           struct ovn_port *l3dgw_port, bool use_common_zone)
 {
     /* Ingress DNAT table: Packets enter the pipeline with destination
     * IP address that needs to be DNATted from a external IP address
     * to a logical IP address. */
-    if (!strcmp(nat->type, "dnat") || !strcmp(nat->type, "dnat_and_snat")) {
-        bool stateless = lrouter_nat_is_stateless(nat);
+    if (strcmp(nat->type, "dnat") && strcmp(nat->type, "dnat_and_snat")) {
+        return;
+    }
 
-        if (od->is_gw_router) {
-            /* Packet when it goes from the initiator to destination.
-             * We need to set flags.loopback because the router can
-             * send the packet back through the same interface. */
-            ds_clear(match);
-            ds_put_format(match, "ip && ip%s.dst == %s",
-                          is_v6 ? "6" : "4", nat->external_ip);
-            ds_clear(actions);
-            if (nat->allowed_ext_ips || nat->exempted_ext_ips) {
-                lrouter_nat_add_ext_ip_match(od, lflows, match, nat,
-                                             is_v6, true, cidr_bits);
-            }
+    ds_clear(match);
+    ds_clear(actions);
 
-            if (!lport_addresses_is_empty(&od->dnat_force_snat_addrs)) {
-                /* Indicate to the future tables that a DNAT has taken
-                 * place and a force SNAT needs to be done in the
-                 * Egress SNAT table. */
-                ds_put_format(actions, "flags.force_snat_for_dnat = 1; ");
-            }
+    const char *nat_action = !od->is_gw_router && use_common_zone
+                             ? "ct_dnat_in_czone"
+                             : "ct_dnat";
 
-            if (!strcmp(nat->type, "dnat_and_snat") && stateless) {
-                ds_put_format(actions, "flags.loopback = 1; "
-                              "ip%s.dst=%s; next;",
-                              is_v6 ? "6" : "4", nat->logical_ip);
-            } else {
-                ds_put_format(actions, "flags.loopback = 1; ct_dnat(%s",
-                              nat->logical_ip);
+    ds_put_format(match, "ip && ip%c.dst == %s", is_v6 ? '6' : '4',
+                  nat->external_ip);
 
-                if (nat->external_port_range[0]) {
-                    ds_put_format(actions, ",%s", nat->external_port_range);
-                }
-                ds_put_format(actions, ");");
-            }
+    if (od->is_gw_router) {
+        if (!lport_addresses_is_empty(&od->dnat_force_snat_addrs)) {
+            /* Indicate to the future tables that a DNAT has taken
+             * place and a force SNAT needs to be done in the
+             * Egress SNAT table. */
+            ds_put_cstr(actions, "flags.force_snat_for_dnat = 1; ");
+        }
 
-            ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_DNAT, 100,
-                                    ds_cstr(match), ds_cstr(actions),
-                                    &nat->header_);
-        } else {
-            /* Distributed router. */
+        /* This flow handles the packet as it goes from the initiator to
+         * the destination.  We need to set flags.loopback because the
+         * router can send the packet back through the same interface. */
+        ds_put_cstr(actions, "flags.loopback = 1; ");
+    } else {
+        /* Distributed router. */
 
-            /* Traffic received on l3dgw_port is subject to NAT. */
-            ds_clear(match);
-            ds_put_format(match, "ip && ip%s.dst == %s && inport == %s",
-                          is_v6 ? "6" : "4", nat->external_ip,
-                          l3dgw_port->json_key);
-            if (!distributed && od->n_l3dgw_ports) {
-                /* Flows for NAT rules that are centralized are only
-                * programmed on the gateway chassis. */
-                ds_put_format(match, " && is_chassis_resident(%s)",
-                              l3dgw_port->cr_port->json_key);
-            }
-            ds_clear(actions);
-            if (nat->allowed_ext_ips || nat->exempted_ext_ips) {
-                lrouter_nat_add_ext_ip_match(od, lflows, match, nat,
-                                             is_v6, true, cidr_bits);
-            }
+        /* Traffic received on l3dgw_port is subject to NAT. */
+        ds_put_format(match, " && inport == %s", l3dgw_port->json_key);
+        if (!distributed && od->n_l3dgw_ports) {
+            /* Flows for NAT rules that are centralized are only
+             * programmed on the gateway chassis. */
+            ds_put_format(match, " && is_chassis_resident(%s)",
+                          l3dgw_port->cr_port->json_key);
+        }
+    }
 
-            if (!strcmp(nat->type, "dnat_and_snat") && stateless) {
-                ds_put_format(actions, "ip%s.dst=%s; next;",
-                              is_v6 ? "6" : "4", nat->logical_ip);
-            } else {
-                ds_put_format(actions, "ct_dnat_in_czone(%s", nat->logical_ip);
-                if (nat->external_port_range[0]) {
-                    ds_put_format(actions, ",%s", nat->external_port_range);
-                }
-                ds_put_format(actions, ");");
-            }
+    if (nat->allowed_ext_ips || nat->exempted_ext_ips) {
+        lrouter_nat_add_ext_ip_match(od, lflows, match, nat,
+                                     is_v6, true, cidr_bits);
+    }
 
-            ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_DNAT, 100,
-                                    ds_cstr(match), ds_cstr(actions),
-                                    &nat->header_);
+    if (lrouter_nat_is_stateless(nat)) {
+        ds_put_format(actions, "ip%c.dst=%s; next;",
+                      is_v6 ? '6' : '4', nat->logical_ip);
+    } else {
+        ds_put_format(actions, "%s(%s", nat_action, nat->logical_ip);
+        if (nat->external_port_range[0]) {
+            ds_put_format(actions, ",%s", nat->external_port_range);
         }
+        ds_put_format(actions, ");");
     }
+
+    ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_DNAT, 100,
+                            ds_cstr(match), ds_cstr(actions),
+                            &nat->header_);
 }
 
 static void
@@ -13653,7 +13647,8 @@  build_lrouter_out_undnat_flow(struct hmap *lflows, struct ovn_datapath *od,
                               const struct nbrec_nat *nat, struct ds *match,
                               struct ds *actions, bool distributed,
                               struct eth_addr mac, bool is_v6,
-                              struct ovn_port *l3dgw_port)
+                              struct ovn_port *l3dgw_port,
+                              bool use_common_zone)
 {
     /* Egress UNDNAT table: It is for already established connections'
     * reverse traffic. i.e., DNAT has already been done in ingress
@@ -13668,8 +13663,10 @@  build_lrouter_out_undnat_flow(struct hmap *lflows, struct ovn_datapath *od,
     }
 
     ds_clear(match);
-    ds_put_format(match, "ip && ip%s.src == %s && outport == %s",
-                  is_v6 ? "6" : "4", nat->logical_ip,
+    ds_clear(actions);
+
+    ds_put_format(match, "ip && ip%c.src == %s && outport == %s",
+                  is_v6 ? '6' : '4', nat->logical_ip,
                   l3dgw_port->json_key);
     if (!distributed && od->n_l3dgw_ports) {
         /* Flows for NAT rules that are centralized are only
@@ -13677,18 +13674,17 @@  build_lrouter_out_undnat_flow(struct hmap *lflows, struct ovn_datapath *od,
         ds_put_format(match, " && is_chassis_resident(%s)",
                       l3dgw_port->cr_port->json_key);
     }
-    ds_clear(actions);
+
     if (distributed) {
         ds_put_format(actions, "eth.src = "ETH_ADDR_FMT"; ",
                       ETH_ADDR_ARGS(mac));
     }
 
-    if (!strcmp(nat->type, "dnat_and_snat") &&
-        lrouter_nat_is_stateless(nat)) {
-        ds_put_format(actions, "next;");
+    if (lrouter_nat_is_stateless(nat)) {
+        ds_put_cstr(actions, "next;");
     } else {
-        ds_put_format(actions,
-                      od->is_gw_router ? "ct_dnat;" : "ct_dnat_in_czone;");
+        ds_put_cstr(actions,
+                    use_common_zone ? "ct_dnat_in_czone;" : "ct_dnat;");
     }
 
     ovn_lflow_add_with_hint(lflows, od, S_ROUTER_OUT_UNDNAT, 100,
@@ -13726,12 +13722,76 @@  build_lrouter_out_is_dnat_local(struct hmap *lflows, struct ovn_datapath *od,
                             &nat->header_);
 }
 
+static void
+build_lrouter_out_snat_in_czone_flow(struct hmap *lflows,
+                                     struct ovn_datapath *od,
+                                     const struct nbrec_nat *nat,
+                                     struct ds *match,
+                                     struct ds *actions, bool distributed,
+                                     struct eth_addr mac, int cidr_bits,
+                                     bool is_v6, struct ovn_port *l3dgw_port)
+{
+    /* The priority here is calculated such that the
+     * nat->logical_ip with the longest mask gets a higher
+     * priority. */
+    uint16_t priority = cidr_bits + 1;
+    struct ds zone_actions = DS_EMPTY_INITIALIZER;
+
+    ds_put_format(match, "ip && ip%c.src == %s && outport == %s",
+                  is_v6 ? '6' : '4', nat->logical_ip, l3dgw_port->json_key);
+
+    if (od->n_l3dgw_ports) {
+        priority += 128;
+        ds_put_format(match, " && is_chassis_resident(\"%s\")",
+                      distributed
+                      ? nat->logical_port
+                      : l3dgw_port->cr_port->key);
+    }
+
+    if (distributed) {
+        ds_put_format(actions, "eth.src = "ETH_ADDR_FMT"; ",
+                      ETH_ADDR_ARGS(mac));
+        ds_put_format(&zone_actions, "eth.src = "ETH_ADDR_FMT"; ",
+                      ETH_ADDR_ARGS(mac));
+    }
+
+    if (nat->allowed_ext_ips || nat->exempted_ext_ips) {
+        lrouter_nat_add_ext_ip_match(od, lflows, match, nat,
+                                     is_v6, false, cidr_bits);
+    }
+
+    ds_put_cstr(&zone_actions, REGBIT_DST_NAT_IP_LOCAL" = 0; ");
+
+    ds_put_format(actions, "ct_snat_in_czone(%s", nat->external_ip);
+    ds_put_format(&zone_actions, "ct_snat(%s", nat->external_ip);
+
+    if (nat->external_port_range[0]) {
+        ds_put_format(actions, ",%s", nat->external_port_range);
+        ds_put_format(&zone_actions, ",%s", nat->external_port_range);
+    }
+
+    ds_put_cstr(actions, ");");
+    ds_put_cstr(&zone_actions, ");");
+
+    ovn_lflow_add_with_hint(lflows, od, S_ROUTER_OUT_SNAT,
+                            priority, ds_cstr(match),
+                            ds_cstr(actions), &nat->header_);
+
+    ds_put_cstr(match, " && "REGBIT_DST_NAT_IP_LOCAL" == 1");
+
+    ovn_lflow_add_with_hint(lflows, od, S_ROUTER_OUT_SNAT,
+                            priority + 1, ds_cstr(match),
+                            ds_cstr(&zone_actions), &nat->header_);
+
+    ds_destroy(&zone_actions);
+}
+
 static void
 build_lrouter_out_snat_flow(struct hmap *lflows, struct ovn_datapath *od,
                             const struct nbrec_nat *nat, struct ds *match,
                             struct ds *actions, bool distributed,
                             struct eth_addr mac, int cidr_bits, bool is_v6,
-                            struct ovn_port *l3dgw_port)
+                            struct ovn_port *l3dgw_port, bool use_common_zone)
 {
     /* Egress SNAT table: Packets enter the egress pipeline with
     * source ip address that needs to be SNATted to a external ip
@@ -13740,109 +13800,67 @@  build_lrouter_out_snat_flow(struct hmap *lflows, struct ovn_datapath *od,
         return;
     }
 
-    bool stateless = lrouter_nat_is_stateless(nat);
-    if (od->is_gw_router) {
-        ds_clear(match);
-        ds_put_format(match, "ip && ip%s.src == %s",
-                      is_v6 ? "6" : "4", nat->logical_ip);
-        ds_clear(actions);
+    ds_clear(match);
+    ds_clear(actions);
 
-        if (nat->allowed_ext_ips || nat->exempted_ext_ips) {
-            lrouter_nat_add_ext_ip_match(od, lflows, match, nat,
-                                         is_v6, false, cidr_bits);
-        }
+    bool stateless = lrouter_nat_is_stateless(nat);
 
-        if (!strcmp(nat->type, "dnat_and_snat") && stateless) {
-            ds_put_format(actions, "ip%s.src=%s; next;",
-                          is_v6 ? "6" : "4", nat->external_ip);
-        } else {
-            ds_put_format(match, " && (!ct.trk || !ct.rpl)");
-            ds_put_format(actions, "ct_snat(%s", nat->external_ip);
+    if (!od->is_gw_router && use_common_zone && !stateless) {
+        build_lrouter_out_snat_in_czone_flow(lflows, od, nat, match, actions,
+                                             distributed, mac, cidr_bits,
+                                             is_v6, l3dgw_port);
+        return;
+    }
 
-            if (nat->external_port_range[0]) {
-                ds_put_format(actions, ",%s",
-                              nat->external_port_range);
-            }
-            ds_put_format(actions, ");");
-        }
+    /* The priority here is calculated such that the
+     * nat->logical_ip with the longest mask gets a higher
+     * priority. */
+    uint16_t priority = cidr_bits + 1;
 
-        /* The priority here is calculated such that the
-        * nat->logical_ip with the longest mask gets a higher
-        * priority. */
-        ovn_lflow_add_with_hint(lflows, od, S_ROUTER_OUT_SNAT,
-                                cidr_bits + 1, ds_cstr(match),
-                                ds_cstr(actions), &nat->header_);
-    } else {
-        uint16_t priority = cidr_bits + 1;
+    ds_put_format(match, "ip && ip%c.src == %s",
+                  is_v6 ? '6' : '4', nat->logical_ip);
 
+    if (!od->is_gw_router) {
         /* Distributed router. */
-        ds_clear(match);
-        ds_put_format(match, "ip && ip%s.src == %s && outport == %s",
-                      is_v6 ? "6" : "4", nat->logical_ip,
-                      l3dgw_port->json_key);
+        ds_put_format(match, " && outport == %s", l3dgw_port->json_key);
         if (od->n_l3dgw_ports) {
-            if (distributed) {
-                ovs_assert(nat->logical_port);
-                priority += 128;
-                ds_put_format(match, " && is_chassis_resident(\"%s\")",
-                              nat->logical_port);
-            } else {
-                /* Flows for NAT rules that are centralized are only
-                * programmed on the gateway chassis. */
-                priority += 128;
-                ds_put_format(match, " && is_chassis_resident(%s)",
-                              l3dgw_port->cr_port->json_key);
-            }
-        }
-        ds_clear(actions);
-
-        if (nat->allowed_ext_ips || nat->exempted_ext_ips) {
-            lrouter_nat_add_ext_ip_match(od, lflows, match, nat,
-                                         is_v6, false, cidr_bits);
+            priority += 128;
+            ds_put_format(match, " && is_chassis_resident(\"%s\")",
+                          distributed
+                          ? nat->logical_port
+                          : l3dgw_port->cr_port->key);
         }
 
         if (distributed) {
             ds_put_format(actions, "eth.src = "ETH_ADDR_FMT"; ",
                           ETH_ADDR_ARGS(mac));
         }
+    }
 
-        if (!strcmp(nat->type, "dnat_and_snat") && stateless) {
-            ds_put_format(actions, "ip%s.src=%s; next;",
-                          is_v6 ? "6" : "4", nat->external_ip);
-        } else {
-            ds_put_format(actions, "ct_snat_in_czone(%s",
-                        nat->external_ip);
-            if (nat->external_port_range[0]) {
-                ds_put_format(actions, ",%s", nat->external_port_range);
-            }
-            ds_put_format(actions, ");");
-        }
+    if (nat->allowed_ext_ips || nat->exempted_ext_ips) {
+        lrouter_nat_add_ext_ip_match(od, lflows, match, nat,
+                                     is_v6, false, cidr_bits);
+    }
 
-        /* The priority here is calculated such that the
-        * nat->logical_ip with the longest mask gets a higher
-        * priority. */
-        ovn_lflow_add_with_hint(lflows, od, S_ROUTER_OUT_SNAT,
-                                priority, ds_cstr(match),
-                                ds_cstr(actions), &nat->header_);
+    if (od->is_gw_router && !stateless) {
+        /* Gateway router. */
+        ds_put_cstr(match, " && (!ct.trk || !ct.rpl)");
+    }
 
-        if (!stateless) {
-            ds_put_cstr(match, " && "REGBIT_DST_NAT_IP_LOCAL" == 1");
-            ds_clear(actions);
-            if (distributed) {
-                ds_put_format(actions, "eth.src = "ETH_ADDR_FMT"; ",
-                              ETH_ADDR_ARGS(mac));
-            }
-            ds_put_format(actions,  REGBIT_DST_NAT_IP_LOCAL" = 0; ct_snat(%s",
-                          nat->external_ip);
-            if (nat->external_port_range[0]) {
-                ds_put_format(actions, ",%s", nat->external_port_range);
-            }
-            ds_put_format(actions, ");");
-            ovn_lflow_add_with_hint(lflows, od, S_ROUTER_OUT_SNAT,
-                                    priority + 1, ds_cstr(match),
-                                    ds_cstr(actions), &nat->header_);
+    if (stateless) {
+        ds_put_format(actions, "ip%c.src=%s; next;",
+                      is_v6 ? '6' : '4', nat->external_ip);
+    } else {
+        ds_put_format(actions, "ct_snat(%s", nat->external_ip);
+        if (nat->external_port_range[0]) {
+            ds_put_format(actions, ",%s", nat->external_port_range);
         }
+        ds_put_format(actions, ");");
     }
+
+    ovn_lflow_add_with_hint(lflows, od, S_ROUTER_OUT_SNAT,
+                            priority, ds_cstr(match),
+                            ds_cstr(actions), &nat->header_);
 }
 
 static void
@@ -14186,6 +14204,9 @@  build_lrouter_nat_defrag_and_lb(struct ovn_datapath *od, struct hmap *lflows,
         return;
     }
 
+    bool use_common_zone =
+        smap_get_bool(&od->nbr->options, "use_common_zone", false);
+
     struct sset nat_entries = SSET_INITIALIZER(&nat_entries);
 
     bool dnat_force_snat_ip =
@@ -14207,11 +14228,12 @@  build_lrouter_nat_defrag_and_lb(struct ovn_datapath *od, struct hmap *lflows,
         }
 
         /* S_ROUTER_IN_UNSNAT */
-        build_lrouter_in_unsnat_flow(lflows, od, nat, match, actions, distributed,
-                                     is_v6, l3dgw_port);
+        build_lrouter_in_unsnat_flow(lflows, od, nat, match, distributed,
+                                     is_v6, l3dgw_port, use_common_zone);
         /* S_ROUTER_IN_DNAT */
-        build_lrouter_in_dnat_flow(lflows, od, nat, match, actions, distributed,
-                                   cidr_bits, is_v6, l3dgw_port);
+        build_lrouter_in_dnat_flow(lflows, od, nat, match, actions,
+                                   distributed, cidr_bits, is_v6, l3dgw_port,
+                                   use_common_zone);
 
         /* ARP resolve for NAT IPs. */
         if (od->is_gw_router) {
@@ -14280,16 +14302,20 @@  build_lrouter_nat_defrag_and_lb(struct ovn_datapath *od, struct hmap *lflows,
             }
         }
 
-        /* S_ROUTER_OUT_DNAT_LOCAL */
-        build_lrouter_out_is_dnat_local(lflows, od, nat, match, actions,
-                                        distributed, is_v6, l3dgw_port);
+        if (use_common_zone) {
+            /* S_ROUTER_OUT_DNAT_LOCAL */
+            build_lrouter_out_is_dnat_local(lflows, od, nat, match, actions,
+                                            distributed, is_v6, l3dgw_port);
+        }
 
         /* S_ROUTER_OUT_UNDNAT */
-        build_lrouter_out_undnat_flow(lflows, od, nat, match, actions, distributed,
-                                      mac, is_v6, l3dgw_port);
+        build_lrouter_out_undnat_flow(lflows, od, nat, match, actions,
+                                      distributed, mac, is_v6, l3dgw_port,
+                                      use_common_zone);
         /* S_ROUTER_OUT_SNAT */
-        build_lrouter_out_snat_flow(lflows, od, nat, match, actions, distributed,
-                                    mac, cidr_bits, is_v6, l3dgw_port);
+        build_lrouter_out_snat_flow(lflows, od, nat, match, actions,
+                                    distributed, mac, cidr_bits, is_v6,
+                                    l3dgw_port, use_common_zone);
 
         /* S_ROUTER_IN_ADMISSION - S_ROUTER_IN_IP_INPUT */
         build_lrouter_ingress_flow(lflows, od, nat, match, actions, mac,
@@ -14358,8 +14384,11 @@  build_lrouter_nat_defrag_and_lb(struct ovn_datapath *od, struct hmap *lflows,
                           "clone { ct_clear; "
                           "inport = outport; outport = \"\"; "
                           "eth.dst <-> eth.src; "
-                          "flags = 0; flags.loopback = 1; "
-                          "flags.use_snat_zone = "REGBIT_DST_NAT_IP_LOCAL"; ");
+                          "flags = 0; flags.loopback = 1; ");
+            if (use_common_zone) {
+                ds_put_cstr(actions, "flags.use_snat_zone = "
+                            REGBIT_DST_NAT_IP_LOCAL"; ");
+            }
             for (int j = 0; j < MFF_N_LOG_REGS; j++) {
                 ds_put_format(actions, "reg%d = 0; ", j);
             }
@@ -14372,7 +14401,7 @@  build_lrouter_nat_defrag_and_lb(struct ovn_datapath *od, struct hmap *lflows,
         }
     }
 
-    if (od->nbr->n_nat) {
+    if (use_common_zone && od->nbr->n_nat) {
         ds_clear(match);
         const char *ct_natted = features->ct_no_masked_label ?
                                 "ct_mark.natted" :
diff --git a/northd/ovn-northd.8.xml b/northd/ovn-northd.8.xml
index 131345e85..0135bb0b9 100644
--- a/northd/ovn-northd.8.xml
+++ b/northd/ovn-northd.8.xml
@@ -3229,13 +3229,11 @@  icmp6 {
             <p>
               The first flow matches <code>ip &amp;&amp;
               ip4.dst == <var>B</var> &amp;&amp; inport == <var>GW</var>
-              &amp;&amp; flags.loopback == 0</code> or
-              <code>ip &amp;&amp;
-              ip6.dst == <var>B</var> &amp;&amp; inport == <var>GW</var>
-              &amp;&amp; flags.loopback == 0</code>
+              </code> or <code>ip &amp;&amp; ip6.dst == <var>B</var> &amp;&amp;
+              inport == <var>GW</var></code>
               where <var>GW</var> is the distributed gateway port
               corresponding to the NAT rule (specified or inferred), with an
-              action <code>ct_snat_in_czone;</code> to unSNAT in the common
+              action <code>ct_snat;</code> to unSNAT in the snat
               zone.  If the NAT rule is of type dnat_and_snat and has
               <code>stateless=true</code> in the options, then the action
               would be <code>next;</code>.
@@ -3248,32 +3246,6 @@  icmp6 {
               <var>GW</var>.
             </p>
           </li>
-
-          <li>
-            <p>
-              The second flow matches <code>ip &amp;&amp;
-              ip4.dst == <var>B</var> &amp;&amp; inport == <var>GW</var>
-              &amp;&amp; flags.loopback == 1 &amp;&amp;
-              flags.use_snat_zone == 1</code> or
-              <code>ip &amp;&amp;
-              ip6.dst == <var>B</var> &amp;&amp; inport == <var>GW</var>
-              &amp;&amp; flags.loopback == 0 &amp;&amp;
-              flags.use_snat_zone == 1</code>
-              where <var>GW</var> is the distributed gateway port
-              corresponding to the NAT rule (specified or inferred), with an
-              action <code>ct_snat;</code> to unSNAT in the snat zone. If the
-              NAT rule is of type dnat_and_snat and has
-              <code>stateless=true</code> in the options, then the action
-              would be <code>ip4/6.dst=(<var>B</var>)</code>.
-            </p>
-
-            <p>
-              If the NAT entry is of type <code>snat</code>, then there is an
-              additional match <code>is_chassis_resident(<var>cr-GW</var>)
-              </code> where <var>cr-GW</var> is the chassis resident port of
-              <var>GW</var>.
-            </p>
-          </li>
         </ul>
 
         <p>
@@ -4649,46 +4621,12 @@  nd_ns {
     </p>
 
     <ul>
-      <li>
-        <p>
-          For each NAT rule in the OVN Northbound database on a
-          distributed router, a priority-50 logical flow with match
-          <code>ip4.dst == <var>E</var> &amp;&amp;
-          is_chassis_resident(<var>P</var>)</code>, where <var>E</var> is the
-          external IP address specified in the NAT rule, <var>GW</var>
-          is the logical router distributed gateway port. For dnat_and_snat
-          NAT rule, <var>P</var> is the logical port specified in the NAT rule.
-          If <ref column="logical_port"
-          table="NAT" db="OVN_Northbound"/> column of
-          <ref table="NAT" db="OVN_Northbound"/> table is NOT set, then
-          <var>P</var> is the <code>chassisredirect port</code> of
-          <var>GW</var> with the actions:
-          <code>REGBIT_DST_NAT_IP_LOCAL = 1; next; </code>
-        </p>
-      </li>
-
       <li>
         A priority-0 logical flow with match <code>1</code> has actions
         <code>REGBIT_DST_NAT_IP_LOCAL = 0; next;</code>.
       </li>
     </ul>
 
-    <p>
-      This table also installs a priority-50 logical flow for each logical
-      router that has NATs configured on it. The flow has match
-      <code>ip &amp;&amp; ct_label.natted == 1</code> and action
-      <code>REGBIT_DST_NAT_IP_LOCAL = 1; next;</code>. This is intended
-      to ensure that traffic that was DNATted locally will use a separate
-      conntrack zone for SNAT if SNAT is required later in the egress
-      pipeline. Note that this flow checks the value of
-      <code>ct_label.natted</code>, which is set in the ingress pipeline.
-      This means that ovn-northd assumes that this value is carried over
-      from the ingress pipeline to the egress pipeline and is not altered
-      or cleared. If conntrack label values are ever changed to be cleared
-      between the ingress and egress pipelines, then the match conditions
-      of this flow will be updated accordingly.
-    </p>
-
     <h3>Egress Table 1: UNDNAT</h3>
 
     <p>
@@ -4725,7 +4663,7 @@  nd_ns {
           gateway chassis that matches
           <code>ip &amp;&amp; ip4.src == <var>B</var> &amp;&amp;
           outport == <var>GW</var></code>, where <var>GW</var> is the logical
-          router gateway port with an action <code>ct_dnat_in_czone;</code>.
+          router gateway port with an action <code>ct_dnat;</code>.
           If the backend IPv4 address <var>B</var> is also configured with
           L4 port <var>PORT</var> of protocol <var>P</var>, then the
           match also includes <code>P.src</code> == <var>PORT</var>.  These
@@ -4747,7 +4685,7 @@  nd_ns {
           matches <code>ip &amp;&amp; ip4.src == <var>B</var>
           &amp;&amp; outport == <var>GW</var></code>, where <var>GW</var>
           is the logical router gateway port, with an action
-          <code>ct_dnat_in_czone;</code>. If the NAT rule is of type
+          <code>ct_dnat;</code>. If the NAT rule is of type
           dnat_and_snat and has <code>stateless=true</code> in the
           options, then the action would be <code>next;</code>.
         </p>
@@ -4755,7 +4693,7 @@  nd_ns {
         <p>
           If the NAT rule cannot be handled in a distributed manner, then
           the priority-100 flow above is only programmed on the
-          gateway chassis with the action <code>ct_dnat_in_czone</code>.
+          gateway chassis with the action <code>ct_dnat</code>.
         </p>
 
         <p>
@@ -4931,24 +4869,11 @@  nd_ns {
             and match <code>ip &amp;&amp; ip4.src == <var>A</var> &amp;&amp;
             outport == <var>GW</var></code>, where <var>GW</var> is the
             logical router gateway port, with an action
-            <code>ct_snat_in_czone(<var>B</var>);</code> to SNATed in the
+            <code>ct_snat(<var>B</var>);</code> to be SNATted in the
             common zone.  If the NAT rule is of type dnat_and_snat and has
             <code>stateless=true</code> in the options, then the action
             would be <code>ip4/6.src=(<var>B</var>)</code>.
           </li>
-
-          <li>
-            The second flow is added with the calculated priority
-            <code><var>P</var> + 1 </code> and match
-            <code>ip &amp;&amp; ip4.src == <var>A</var> &amp;&amp;
-            outport == <var>GW</var> &amp;&amp;
-            REGBIT_DST_NAT_IP_LOCAL == 0</code>, where <var>GW</var> is the
-            logical router gateway port, with an action
-            <code>ct_snat(<var>B</var>);</code> to SNAT in the snat zone.
-            If the NAT rule is of type dnat_and_snat and has
-            <code>stateless=true</code> in the options, then the action would
-            be <code>ip4/6.src=(<var>B</var>)</code>.
-          </li>
         </ul>
 
         <p>
@@ -5032,7 +4957,6 @@  clone {
     outport = "";
     flags = 0;
     flags.loopback = 1;
-    flags.use_snat_zone = REGBIT_DST_NAT_IP_LOCAL;
     reg0 = 0;
     reg1 = 0;
     ...
diff --git a/ovn-nb.xml b/ovn-nb.xml
index 8d56d0c6e..494a3ae41 100644
--- a/ovn-nb.xml
+++ b/ovn-nb.xml
@@ -2537,6 +2537,16 @@  or
         exceeding this timeout will be automatically removed. The value
         defaults to 0, which means disabled.
       </column>
+
+      <column name="options" key="use_common_zone"
+              type='{"type": "boolean"}'>
+        Default value is <code>false</code>.  If set to <code>true</code>,
+        SNAT and DNAT happen in a common CT zone instead of in separate
+        zones, depending on the configuration.  However, this option
+        breaks traffic when DGP + LB + SNAT are configured together on
+        the same LR.  The value <code>true</code> should be used only
+        when HWOL compatibility with DGP is required.
+      </column>
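As a usage sketch for the option documented above (the router name
"lr0" is hypothetical; the option key comes from this patch):

```shell
# Enable common-zone SNAT/DNAT on one logical router; the default
# (false) keeps the separate-zone behavior introduced by this patch.
ovn-nbctl set Logical_Router lr0 options:use_common_zone=true
```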
     </group>
 
     <group title="Common Columns">
diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at
index d0f6893e9..eaa468bf6 100644
--- a/tests/ovn-northd.at
+++ b/tests/ovn-northd.at
@@ -892,7 +892,7 @@  check_flow_match_sets() {
 echo
 echo "IPv4: stateful"
 ovn-nbctl --wait=sb lr-nat-add R1 dnat_and_snat  172.16.1.1 50.0.0.11
-check_flow_match_sets 3 4 2 0 0 0 0
+check_flow_match_sets 2 2 2 0 0 0 0
 ovn-nbctl lr-nat-del R1 dnat_and_snat  172.16.1.1
 
 echo
@@ -904,7 +904,7 @@  ovn-nbctl lr-nat-del R1 dnat_and_snat  172.16.1.1
 echo
 echo "IPv6: stateful"
 ovn-nbctl --wait=sb lr-nat-add R1 dnat_and_snat fd01::1 fd11::2
-check_flow_match_sets 3 4 2 0 0 0 0
+check_flow_match_sets 2 2 2 0 0 0 0
 ovn-nbctl lr-nat-del R1 dnat_and_snat  fd01::1
 
 echo
@@ -939,9 +939,9 @@  echo "CR-LRP UUID is: " $uuid
 ovn-nbctl --portrange lr-nat-add R1 dnat_and_snat  172.16.1.1 50.0.0.11 1-3000
 
 AT_CAPTURE_FILE([sbflows])
-OVS_WAIT_UNTIL([ovn-sbctl dump-flows R1 > sbflows && test 3 = `grep -c lr_in_unsnat sbflows`])
+OVS_WAIT_UNTIL([ovn-sbctl dump-flows R1 > sbflows && test 2 = `grep -c lr_in_unsnat sbflows`])
 AT_CHECK([grep -c 'ct_snat.*3000' sbflows && grep -c 'ct_dnat.*3000' sbflows],
-  [0], [2
+  [0], [1
 1
 ])
 
@@ -949,9 +949,9 @@  ovn-nbctl lr-nat-del R1 dnat_and_snat  172.16.1.1
 ovn-nbctl --wait=sb --portrange lr-nat-add R1 snat  172.16.1.1 50.0.0.11 1-3000
 
 AT_CAPTURE_FILE([sbflows2])
-OVS_WAIT_UNTIL([ovn-sbctl dump-flows R1 > sbflows2 && test 3 = `grep -c lr_in_unsnat sbflows`])
+OVS_WAIT_UNTIL([ovn-sbctl dump-flows R1 > sbflows2 && test 2 = `grep -c lr_in_unsnat sbflows2`])
 AT_CHECK([grep -c 'ct_snat.*3000' sbflows2 && grep -c 'ct_dnat.*3000' sbflows2],
-  [1], [2
+  [1], [1
 0
 ])
 
@@ -959,7 +959,7 @@  ovn-nbctl lr-nat-del R1 snat  172.16.1.1
 ovn-nbctl --wait=sb --portrange --stateless lr-nat-add R1 dnat_and_snat  172.16.1.2 50.0.0.12 1-3000
 
 AT_CAPTURE_FILE([sbflows3])
-OVS_WAIT_UNTIL([ovn-sbctl dump-flows R1 > sbflows3 && test 4 = `grep -c lr_in_unsnat sbflows3`])
+OVS_WAIT_UNTIL([ovn-sbctl dump-flows R1 > sbflows3 && test 3 = `grep -c lr_in_unsnat sbflows3`])
 AT_CHECK([grep 'ct_[s]dnat.*172\.16\.1\.2.*3000' sbflows3], [1])
 
 ovn-nbctl lr-nat-del R1 dnat_and_snat  172.16.1.1
@@ -1026,8 +1026,7 @@  AT_CAPTURE_FILE([crflows])
 AT_CHECK([grep -e "lr_out_snat" drflows | sed 's/table=../table=??/' | sort], [0], [dnl
   table=??(lr_out_snat        ), priority=0    , match=(1), action=(next;)
   table=??(lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
-  table=??(lr_out_snat        ), priority=161  , match=(ip && ip4.src == 50.0.0.11 && outport == "DR-S1" && is_chassis_resident("cr-DR-S1") && ip4.dst == $allowed_range), action=(ct_snat_in_czone(172.16.1.1);)
-  table=??(lr_out_snat        ), priority=162  , match=(ip && ip4.src == 50.0.0.11 && outport == "DR-S1" && is_chassis_resident("cr-DR-S1") && ip4.dst == $allowed_range && reg9[[4]] == 1), action=(reg9[[4]] = 0; ct_snat(172.16.1.1);)
+  table=??(lr_out_snat        ), priority=161  , match=(ip && ip4.src == 50.0.0.11 && outport == "DR-S1" && is_chassis_resident("cr-DR-S1") && ip4.dst == $allowed_range), action=(ct_snat(172.16.1.1);)
 ])
 
 AT_CHECK([grep -e "lr_out_snat" crflows | sed 's/table=../table=??/' | sort], [0], [dnl
@@ -1057,8 +1056,7 @@  AT_CAPTURE_FILE([crflows2])
 AT_CHECK([grep -e "lr_out_snat" drflows2 | sed 's/table=../table=??/' | sort], [0], [dnl
   table=??(lr_out_snat        ), priority=0    , match=(1), action=(next;)
   table=??(lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
-  table=??(lr_out_snat        ), priority=161  , match=(ip && ip4.src == 50.0.0.11 && outport == "DR-S1" && is_chassis_resident("cr-DR-S1")), action=(ct_snat_in_czone(172.16.1.1);)
-  table=??(lr_out_snat        ), priority=162  , match=(ip && ip4.src == 50.0.0.11 && outport == "DR-S1" && is_chassis_resident("cr-DR-S1") && reg9[[4]] == 1), action=(reg9[[4]] = 0; ct_snat(172.16.1.1);)
+  table=??(lr_out_snat        ), priority=161  , match=(ip && ip4.src == 50.0.0.11 && outport == "DR-S1" && is_chassis_resident("cr-DR-S1")), action=(ct_snat(172.16.1.1);)
   table=??(lr_out_snat        ), priority=163  , match=(ip && ip4.src == 50.0.0.11 && outport == "DR-S1" && is_chassis_resident("cr-DR-S1") && ip4.dst == $disallowed_range), action=(next;)
 ])
 
@@ -1087,8 +1085,7 @@  AT_CAPTURE_FILE([crflows2])
 AT_CHECK([grep -e "lr_out_snat" drflows3 | sed 's/table=../table=??/' | sort], [0], [dnl
   table=??(lr_out_snat        ), priority=0    , match=(1), action=(next;)
   table=??(lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
-  table=??(lr_out_snat        ), priority=161  , match=(ip && ip4.src == 50.0.0.11 && outport == "DR-S1" && is_chassis_resident("cr-DR-S1") && ip4.dst == $allowed_range), action=(ct_snat_in_czone(172.16.1.2);)
-  table=??(lr_out_snat        ), priority=162  , match=(ip && ip4.src == 50.0.0.11 && outport == "DR-S1" && is_chassis_resident("cr-DR-S1") && ip4.dst == $allowed_range && reg9[[4]] == 1), action=(reg9[[4]] = 0; ct_snat(172.16.1.2);)
+  table=??(lr_out_snat        ), priority=161  , match=(ip && ip4.src == 50.0.0.11 && outport == "DR-S1" && is_chassis_resident("cr-DR-S1") && ip4.dst == $allowed_range), action=(ct_snat(172.16.1.2);)
 ])
 
 AT_CHECK([grep -e "lr_out_snat" crflows3 | sed 's/table=../table=??/' | sort], [0], [dnl
@@ -1115,8 +1112,7 @@  AT_CAPTURE_FILE([crflows2])
 AT_CHECK([grep -e "lr_out_snat" drflows4 | sed 's/table=../table=??/' | sort], [0], [dnl
   table=??(lr_out_snat        ), priority=0    , match=(1), action=(next;)
   table=??(lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
-  table=??(lr_out_snat        ), priority=161  , match=(ip && ip4.src == 50.0.0.11 && outport == "DR-S1" && is_chassis_resident("cr-DR-S1")), action=(ct_snat_in_czone(172.16.1.2);)
-  table=??(lr_out_snat        ), priority=162  , match=(ip && ip4.src == 50.0.0.11 && outport == "DR-S1" && is_chassis_resident("cr-DR-S1") && reg9[[4]] == 1), action=(reg9[[4]] = 0; ct_snat(172.16.1.2);)
+  table=??(lr_out_snat        ), priority=161  , match=(ip && ip4.src == 50.0.0.11 && outport == "DR-S1" && is_chassis_resident("cr-DR-S1")), action=(ct_snat(172.16.1.2);)
   table=??(lr_out_snat        ), priority=163  , match=(ip && ip4.src == 50.0.0.11 && outport == "DR-S1" && is_chassis_resident("cr-DR-S1") && ip4.dst == $disallowed_range), action=(next;)
 ])
 
@@ -5034,7 +5030,7 @@  AT_CLEANUP
 ])
 
 OVN_FOR_EACH_NORTHD_NO_HV([
-AT_SETUP([ovn -- LR NAT flows])
+AT_SETUP([ovn -- LR NAT flows - use common zone])
 ovn_start
 
 check ovn-nbctl \
@@ -5132,6 +5128,9 @@  ovn-nbctl lsp-add public public-lr0 -- set Logical_Switch_Port public-lr0 \
     type=router options:router-port=lr0-public \
     -- lsp-set-addresses public-lr0 router
 
+# Common zone for DGP
+
+check ovn-nbctl set logical_router lr0 options:use_common_zone="true"
 check ovn-nbctl --wait=sb sync
 
 ovn-sbctl dump-flows lr0 > lr0flows
@@ -5184,6 +5183,51 @@  AT_CHECK([grep "lr_out_snat" lr0flows | sed 's/table=./table=?/' | sort], [0], [
   table=? (lr_out_snat        ), priority=162  , match=(ip && ip4.src == 10.0.0.3 && outport == "lr0-public" && is_chassis_resident("cr-lr0-public") && reg9[[4]] == 1), action=(reg9[[4]] = 0; ct_snat(172.168.0.20);)
 ])
 
+# Separate zones for DGP
+
+check ovn-nbctl remove logical_router lr0 options use_common_zone
+check ovn-nbctl --wait=sb sync
+
+ovn-sbctl dump-flows lr0 > lr0flows
+AT_CAPTURE_FILE([lr0flows])
+
+AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
+  table=4 (lr_in_unsnat       ), priority=0    , match=(1), action=(next;)
+  table=4 (lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 172.168.0.10 && inport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 172.168.0.20 && inport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 172.168.0.30 && inport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat;)
+])
+
+AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
+  table=5 (lr_in_defrag       ), priority=0    , match=(1), action=(next;)
+])
+
+AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
+  table=7 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
+  table=7 (lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 172.168.0.20 && inport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_dnat(10.0.0.3);)
+])
+
+AT_CHECK([grep "lr_out_chk_dnat_local" lr0flows | sed 's/table=./table=?/' | sort], [0], [dnl
+  table=? (lr_out_chk_dnat_local), priority=0    , match=(1), action=(reg9[[4]] = 0; next;)
+])
+
+AT_CHECK([grep "lr_out_undnat" lr0flows | sed 's/table=./table=?/' | sort], [0], [dnl
+  table=? (lr_out_undnat      ), priority=0    , match=(1), action=(next;)
+  table=? (lr_out_undnat      ), priority=100  , match=(ip && ip4.src == 10.0.0.3 && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_dnat;)
+])
+
+AT_CHECK([grep "lr_out_post_undnat" lr0flows | sed 's/table=./table=?/' | sort], [0], [dnl
+  table=? (lr_out_post_undnat ), priority=0    , match=(1), action=(next;)
+])
+
+AT_CHECK([grep "lr_out_snat" lr0flows | sed 's/table=./table=?/' | sort], [0], [dnl
+  table=? (lr_out_snat        ), priority=0    , match=(1), action=(next;)
+  table=? (lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
+  table=? (lr_out_snat        ), priority=153  , match=(ip && ip4.src == 10.0.0.0/24 && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat(172.168.0.10);)
+  table=? (lr_out_snat        ), priority=161  , match=(ip && ip4.src == 10.0.0.10 && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat(172.168.0.30);)
+  table=? (lr_out_snat        ), priority=161  , match=(ip && ip4.src == 10.0.0.3 && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat(172.168.0.20);)
+])
+
 # Associate load balancer to lr0
 
 check ovn-nbctl lb-add lb0 172.168.0.100:8082 "10.0.0.50:82,10.0.0.60:82"
@@ -5195,6 +5239,10 @@  check ovn-nbctl lb-add lb2 172.168.0.210:60 "10.0.0.50:6062,10.0.0.60:6062" udp
 check ovn-nbctl lr-lb-add lr0 lb0
 check ovn-nbctl lr-lb-add lr0 lb1
 check ovn-nbctl lr-lb-add lr0 lb2
+
+# Common zone for DGP
+
+check ovn-nbctl set logical_router lr0 options:use_common_zone="true"
 check ovn-nbctl --wait=sb sync
 
 ovn-sbctl dump-flows lr0 > lr0flows
@@ -5246,10 +5294,10 @@  AT_CHECK([grep "lr_out_chk_dnat_local" lr0flows | sed 's/table=./table=?/' | sor
 AT_CHECK([grep "lr_out_undnat" lr0flows | sed 's/table=./table=?/' | sort], [0], [dnl
   table=? (lr_out_undnat      ), priority=0    , match=(1), action=(next;)
   table=? (lr_out_undnat      ), priority=100  , match=(ip && ip4.src == 10.0.0.3 && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_dnat_in_czone;)
-  table=? (lr_out_undnat      ), priority=120  , match=(ip4 && ((ip4.src == 10.0.0.4 && tcp.src == 8080)) && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_dnat_in_czone;)
-  table=? (lr_out_undnat      ), priority=120  , match=(ip4 && ((ip4.src == 10.0.0.50 && tcp.src == 82) || (ip4.src == 10.0.0.60 && tcp.src == 82)) && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_dnat_in_czone;)
-  table=? (lr_out_undnat      ), priority=120  , match=(ip4 && ((ip4.src == 10.0.0.50 && udp.src == 6062) || (ip4.src == 10.0.0.60 && udp.src == 6062)) && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_dnat_in_czone;)
-  table=? (lr_out_undnat      ), priority=120  , match=(ip4 && ((ip4.src == 10.0.0.80) || (ip4.src == 10.0.0.81)) && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_dnat_in_czone;)
+  table=? (lr_out_undnat      ), priority=120  , match=(ip4 && ((ip4.src == 10.0.0.4 && tcp.src == 8080)) && (inport == "lr0-public" || outport == "lr0-public") && is_chassis_resident("cr-lr0-public")), action=(ct_dnat_in_czone;)
+  table=? (lr_out_undnat      ), priority=120  , match=(ip4 && ((ip4.src == 10.0.0.50 && tcp.src == 82) || (ip4.src == 10.0.0.60 && tcp.src == 82)) && (inport == "lr0-public" || outport == "lr0-public") && is_chassis_resident("cr-lr0-public")), action=(ct_dnat_in_czone;)
+  table=? (lr_out_undnat      ), priority=120  , match=(ip4 && ((ip4.src == 10.0.0.50 && udp.src == 6062) || (ip4.src == 10.0.0.60 && udp.src == 6062)) && (inport == "lr0-public" || outport == "lr0-public") && is_chassis_resident("cr-lr0-public")), action=(ct_dnat_in_czone;)
+  table=? (lr_out_undnat      ), priority=120  , match=(ip4 && ((ip4.src == 10.0.0.80) || (ip4.src == 10.0.0.81)) && (inport == "lr0-public" || outport == "lr0-public") && is_chassis_resident("cr-lr0-public")), action=(ct_dnat_in_czone;)
 ])
 
 AT_CHECK([grep "lr_out_post_undnat" lr0flows | sed 's/table=./table=?/' | sort], [0], [dnl
@@ -5267,6 +5315,71 @@  AT_CHECK([grep "lr_out_snat" lr0flows | sed 's/table=./table=?/' | sort], [0], [
   table=? (lr_out_snat        ), priority=162  , match=(ip && ip4.src == 10.0.0.3 && outport == "lr0-public" && is_chassis_resident("cr-lr0-public") && reg9[[4]] == 1), action=(reg9[[4]] = 0; ct_snat(172.168.0.20);)
 ])
 
+# Separate zones for DGP
+
+check ovn-nbctl remove logical_router lr0 options use_common_zone
+check ovn-nbctl --wait=sb sync
+
+ovn-sbctl dump-flows lr0 > lr0flows
+AT_CAPTURE_FILE([lr0flows])
+
+AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
+  table=4 (lr_in_unsnat       ), priority=0    , match=(1), action=(next;)
+  table=4 (lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 172.168.0.10 && inport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 172.168.0.20 && inport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 172.168.0.30 && inport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat;)
+])
+
+AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
+  table=5 (lr_in_defrag       ), priority=0    , match=(1), action=(next;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 172.168.0.200), action=(reg0 = 172.168.0.200; ct_dnat;)
+  table=5 (lr_in_defrag       ), priority=110  , match=(ip && ip4.dst == 10.0.0.10 && tcp), action=(reg0 = 10.0.0.10; reg9[[16..31]] = tcp.dst; ct_dnat;)
+  table=5 (lr_in_defrag       ), priority=110  , match=(ip && ip4.dst == 172.168.0.100 && tcp), action=(reg0 = 172.168.0.100; reg9[[16..31]] = tcp.dst; ct_dnat;)
+  table=5 (lr_in_defrag       ), priority=110  , match=(ip && ip4.dst == 172.168.0.210 && udp), action=(reg0 = 172.168.0.210; reg9[[16..31]] = udp.dst; ct_dnat;)
+  table=5 (lr_in_defrag       ), priority=50   , match=(icmp || icmp6), action=(ct_dnat;)
+])
+
+AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
+  table=7 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
+  table=7 (lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 172.168.0.20 && inport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_dnat(10.0.0.3);)
+  table=7 (lr_in_dnat         ), priority=110  , match=(ct.est && !ct.rel && ip4 && reg0 == 172.168.0.200 && ct_mark.natted == 1 && is_chassis_resident("cr-lr0-public")), action=(next;)
+  table=7 (lr_in_dnat         ), priority=110  , match=(ct.new && !ct.rel && ip4 && reg0 == 172.168.0.200 && is_chassis_resident("cr-lr0-public")), action=(ct_lb_mark(backends=10.0.0.80,10.0.0.81);)
+  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && !ct.rel && ip4 && reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1 && is_chassis_resident("cr-lr0-public")), action=(next;)
+  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && !ct.rel && ip4 && reg0 == 172.168.0.100 && tcp && reg9[[16..31]] == 8082 && ct_mark.natted == 1 && is_chassis_resident("cr-lr0-public")), action=(next;)
+  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && !ct.rel && ip4 && reg0 == 172.168.0.210 && udp && reg9[[16..31]] == 60 && ct_mark.natted == 1 && is_chassis_resident("cr-lr0-public")), action=(next;)
+  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && !ct.rel && ip4 && reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80 && is_chassis_resident("cr-lr0-public")), action=(ct_lb_mark(backends=10.0.0.4:8080);)
+  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && !ct.rel && ip4 && reg0 == 172.168.0.100 && tcp && reg9[[16..31]] == 8082 && is_chassis_resident("cr-lr0-public")), action=(ct_lb_mark(backends=10.0.0.50:82,10.0.0.60:82);)
+  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && !ct.rel && ip4 && reg0 == 172.168.0.210 && udp && reg9[[16..31]] == 60 && is_chassis_resident("cr-lr0-public")), action=(ct_lb_mark(backends=10.0.0.50:6062,10.0.0.60:6062);)
+  table=7 (lr_in_dnat         ), priority=50   , match=(ct.rel && !ct.est && !ct.new), action=(ct_commit_nat;)
+  table=7 (lr_in_dnat         ), priority=70   , match=(ct.rel && !ct.est && !ct.new && ct_mark.force_snat == 1), action=(flags.force_snat_for_lb = 1; ct_commit_nat;)
+  table=7 (lr_in_dnat         ), priority=70   , match=(ct.rel && !ct.est && !ct.new && ct_mark.skip_snat == 1), action=(flags.skip_snat_for_lb = 1; ct_commit_nat;)
+])
+
+AT_CHECK([grep "lr_out_chk_dnat_local" lr0flows | sed 's/table=./table=?/' | sort], [0], [dnl
+  table=? (lr_out_chk_dnat_local), priority=0    , match=(1), action=(reg9[[4]] = 0; next;)
+])
+
+AT_CHECK([grep "lr_out_undnat" lr0flows | sed 's/table=./table=?/' | sort], [0], [dnl
+  table=? (lr_out_undnat      ), priority=0    , match=(1), action=(next;)
+  table=? (lr_out_undnat      ), priority=100  , match=(ip && ip4.src == 10.0.0.3 && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_dnat;)
+  table=? (lr_out_undnat      ), priority=120  , match=(ip4 && ((ip4.src == 10.0.0.4 && tcp.src == 8080)) && (inport == "lr0-public" || outport == "lr0-public") && is_chassis_resident("cr-lr0-public")), action=(ct_dnat;)
+  table=? (lr_out_undnat      ), priority=120  , match=(ip4 && ((ip4.src == 10.0.0.50 && tcp.src == 82) || (ip4.src == 10.0.0.60 && tcp.src == 82)) && (inport == "lr0-public" || outport == "lr0-public") && is_chassis_resident("cr-lr0-public")), action=(ct_dnat;)
+  table=? (lr_out_undnat      ), priority=120  , match=(ip4 && ((ip4.src == 10.0.0.50 && udp.src == 6062) || (ip4.src == 10.0.0.60 && udp.src == 6062)) && (inport == "lr0-public" || outport == "lr0-public") && is_chassis_resident("cr-lr0-public")), action=(ct_dnat;)
+  table=? (lr_out_undnat      ), priority=120  , match=(ip4 && ((ip4.src == 10.0.0.80) || (ip4.src == 10.0.0.81)) && (inport == "lr0-public" || outport == "lr0-public") && is_chassis_resident("cr-lr0-public")), action=(ct_dnat;)
+])
+
+AT_CHECK([grep "lr_out_post_undnat" lr0flows | sed 's/table=./table=?/' | sort], [0], [dnl
+  table=? (lr_out_post_undnat ), priority=0    , match=(1), action=(next;)
+])
+
+AT_CHECK([grep "lr_out_snat" lr0flows | sed 's/table=./table=?/' | sort], [0], [dnl
+  table=? (lr_out_snat        ), priority=0    , match=(1), action=(next;)
+  table=? (lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
+  table=? (lr_out_snat        ), priority=153  , match=(ip && ip4.src == 10.0.0.0/24 && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat(172.168.0.10);)
+  table=? (lr_out_snat        ), priority=161  , match=(ip && ip4.src == 10.0.0.10 && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat(172.168.0.30);)
+  table=? (lr_out_snat        ), priority=161  , match=(ip && ip4.src == 10.0.0.3 && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat(172.168.0.20);)
+])
+
 # Make the logical router as Gateway router
 check ovn-nbctl clear logical_router_port lr0-public gateway_chassis
 check ovn-nbctl set logical_router lr0 options:chassis=gw1
@@ -5310,7 +5423,6 @@  AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
 
 AT_CHECK([grep "lr_out_chk_dnat_local" lr0flows | sed 's/table=./table=?/' | sort], [0], [dnl
   table=? (lr_out_chk_dnat_local), priority=0    , match=(1), action=(reg9[[4]] = 0; next;)
-  table=? (lr_out_chk_dnat_local), priority=50   , match=(ip && ct_mark.natted == 1), action=(reg9[[4]] = 1; next;)
 ])
 
 AT_CHECK([grep "lr_out_undnat" lr0flows | sed 's/table=./table=?/' | sort], [0], [dnl
@@ -5375,7 +5487,6 @@  AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
 
 AT_CHECK([grep "lr_out_chk_dnat_local" lr0flows | sed 's/table=./table=?/' | sort], [0], [dnl
   table=? (lr_out_chk_dnat_local), priority=0    , match=(1), action=(reg9[[4]] = 0; next;)
-  table=? (lr_out_chk_dnat_local), priority=50   , match=(ip && ct_mark.natted == 1), action=(reg9[[4]] = 1; next;)
 ])
 
 AT_CHECK([grep "lr_out_undnat" lr0flows | sed 's/table=./table=?/' | sort], [0], [dnl
@@ -5445,7 +5556,6 @@  AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
 
 AT_CHECK([grep "lr_out_chk_dnat_local" lr0flows | sed 's/table=./table=?/' | sort], [0], [dnl
   table=? (lr_out_chk_dnat_local), priority=0    , match=(1), action=(reg9[[4]] = 0; next;)
-  table=? (lr_out_chk_dnat_local), priority=50   , match=(ip && ct_mark.natted == 1), action=(reg9[[4]] = 1; next;)
 ])
 
 AT_CHECK([grep "lr_out_undnat" lr0flows | sed 's/table=./table=?/' | sort], [0], [dnl
@@ -5528,7 +5638,6 @@  AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
 
 AT_CHECK([grep "lr_out_chk_dnat_local" lr0flows | sed 's/table=./table=?/' | sort], [0], [dnl
   table=? (lr_out_chk_dnat_local), priority=0    , match=(1), action=(reg9[[4]] = 0; next;)
-  table=? (lr_out_chk_dnat_local), priority=50   , match=(ip && ct_mark.natted == 1), action=(reg9[[4]] = 1; next;)
 ])
 
 AT_CHECK([grep "lr_out_undnat" lr0flows | sed 's/table=./table=?/' | sort], [0], [dnl
@@ -6968,21 +7077,15 @@  AT_CHECK([grep lr_in_ip_input lrflows | grep arp | grep -e 172.16.1.10 -e 10.0.0
 ])
 
 AT_CHECK([grep lr_in_unsnat lrflows | grep ct_snat | sed 's/table=../table=??/' | sort], [0], [dnl
-  table=??(lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 10.0.0.10 && inport == "DR-S2" && flags.loopback == 0 && is_chassis_resident("cr-DR-S2")), action=(ct_snat_in_czone;)
-  table=??(lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 10.0.0.10 && inport == "DR-S2" && flags.loopback == 1 && flags.use_snat_zone == 1 && is_chassis_resident("cr-DR-S2")), action=(ct_snat;)
-  table=??(lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 172.16.1.10 && inport == "DR-S1" && flags.loopback == 0 && is_chassis_resident("cr-DR-S1")), action=(ct_snat_in_czone;)
-  table=??(lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 172.16.1.10 && inport == "DR-S1" && flags.loopback == 1 && flags.use_snat_zone == 1 && is_chassis_resident("cr-DR-S1")), action=(ct_snat;)
-  table=??(lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 192.168.0.10 && inport == "DR-S3" && flags.loopback == 0 && is_chassis_resident("cr-DR-S3")), action=(ct_snat_in_czone;)
-  table=??(lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 192.168.0.10 && inport == "DR-S3" && flags.loopback == 1 && flags.use_snat_zone == 1 && is_chassis_resident("cr-DR-S3")), action=(ct_snat;)
+  table=??(lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 10.0.0.10 && inport == "DR-S2" && is_chassis_resident("cr-DR-S2")), action=(ct_snat;)
+  table=??(lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 172.16.1.10 && inport == "DR-S1" && is_chassis_resident("cr-DR-S1")), action=(ct_snat;)
+  table=??(lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 192.168.0.10 && inport == "DR-S3" && is_chassis_resident("cr-DR-S3")), action=(ct_snat;)
 ])
 
 AT_CHECK([grep lr_out_snat lrflows | grep ct_snat | sed 's/table=../table=??/' | sort], [0], [dnl
-  table=??(lr_out_snat        ), priority=161  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S1" && is_chassis_resident("cr-DR-S1")), action=(ct_snat_in_czone(172.16.1.10);)
-  table=??(lr_out_snat        ), priority=161  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S2" && is_chassis_resident("cr-DR-S2")), action=(ct_snat_in_czone(10.0.0.10);)
-  table=??(lr_out_snat        ), priority=161  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S3" && is_chassis_resident("cr-DR-S3")), action=(ct_snat_in_czone(192.168.0.10);)
-  table=??(lr_out_snat        ), priority=162  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S1" && is_chassis_resident("cr-DR-S1") && reg9[[4]] == 1), action=(reg9[[4]] = 0; ct_snat(172.16.1.10);)
-  table=??(lr_out_snat        ), priority=162  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S2" && is_chassis_resident("cr-DR-S2") && reg9[[4]] == 1), action=(reg9[[4]] = 0; ct_snat(10.0.0.10);)
-  table=??(lr_out_snat        ), priority=162  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S3" && is_chassis_resident("cr-DR-S3") && reg9[[4]] == 1), action=(reg9[[4]] = 0; ct_snat(192.168.0.10);)
+  table=??(lr_out_snat        ), priority=161  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S1" && is_chassis_resident("cr-DR-S1")), action=(ct_snat(172.16.1.10);)
+  table=??(lr_out_snat        ), priority=161  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S2" && is_chassis_resident("cr-DR-S2")), action=(ct_snat(10.0.0.10);)
+  table=??(lr_out_snat        ), priority=161  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S3" && is_chassis_resident("cr-DR-S3")), action=(ct_snat(192.168.0.10);)
 ])
 
 check ovn-nbctl --wait=sb lr-nat-del DR snat 20.0.0.10
@@ -7011,15 +7114,15 @@  AT_CHECK([grep lr_in_ip_input lrflows | grep arp | grep -e 172.16.1.10 -e 10.0.0
 ])
 
 AT_CHECK([grep lr_in_dnat lrflows | grep ct_dnat | sed 's/table=../table=??/' | sort], [0], [dnl
-  table=??(lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 10.0.0.10 && inport == "DR-S2" && is_chassis_resident("cr-DR-S2")), action=(ct_dnat_in_czone(20.0.0.10);)
-  table=??(lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 172.16.1.10 && inport == "DR-S1" && is_chassis_resident("cr-DR-S1")), action=(ct_dnat_in_czone(20.0.0.10);)
-  table=??(lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 172.16.1.10 && inport == "DR-S3" && is_chassis_resident("cr-DR-S3")), action=(ct_dnat_in_czone(20.0.0.10);)
+  table=??(lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 10.0.0.10 && inport == "DR-S2" && is_chassis_resident("cr-DR-S2")), action=(ct_dnat(20.0.0.10);)
+  table=??(lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 172.16.1.10 && inport == "DR-S1" && is_chassis_resident("cr-DR-S1")), action=(ct_dnat(20.0.0.10);)
+  table=??(lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 172.16.1.10 && inport == "DR-S3" && is_chassis_resident("cr-DR-S3")), action=(ct_dnat(20.0.0.10);)
 ])
 
 AT_CHECK([grep lr_out_undnat lrflows | grep ct_dnat | sed 's/table=../table=??/' | sort], [0], [dnl
-  table=??(lr_out_undnat      ), priority=100  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S1" && is_chassis_resident("cr-DR-S1")), action=(ct_dnat_in_czone;)
-  table=??(lr_out_undnat      ), priority=100  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S2" && is_chassis_resident("cr-DR-S2")), action=(ct_dnat_in_czone;)
-  table=??(lr_out_undnat      ), priority=100  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S3" && is_chassis_resident("cr-DR-S3")), action=(ct_dnat_in_czone;)
+  table=??(lr_out_undnat      ), priority=100  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S1" && is_chassis_resident("cr-DR-S1")), action=(ct_dnat;)
+  table=??(lr_out_undnat      ), priority=100  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S2" && is_chassis_resident("cr-DR-S2")), action=(ct_dnat;)
+  table=??(lr_out_undnat      ), priority=100  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S3" && is_chassis_resident("cr-DR-S3")), action=(ct_dnat;)
 ])
 
 check ovn-nbctl --wait=sb lr-nat-del DR dnat
@@ -7050,33 +7153,27 @@  AT_CHECK([grep lr_in_ip_input lrflows | grep arp | grep -e 172.16.1.10 -e 10.0.0
 ])
 
 AT_CHECK([grep lr_in_unsnat lrflows | grep ct_snat | sed 's/table=../table=??/' | sort], [0], [dnl
-  table=??(lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 10.0.0.10 && inport == "DR-S2" && flags.loopback == 0 && is_chassis_resident("cr-DR-S2")), action=(ct_snat_in_czone;)
-  table=??(lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 10.0.0.10 && inport == "DR-S2" && flags.loopback == 1 && flags.use_snat_zone == 1 && is_chassis_resident("cr-DR-S2")), action=(ct_snat;)
-  table=??(lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 172.16.1.10 && inport == "DR-S1" && flags.loopback == 0 && is_chassis_resident("cr-DR-S1")), action=(ct_snat_in_czone;)
-  table=??(lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 172.16.1.10 && inport == "DR-S1" && flags.loopback == 1 && flags.use_snat_zone == 1 && is_chassis_resident("cr-DR-S1")), action=(ct_snat;)
-  table=??(lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 192.168.0.10 && inport == "DR-S3" && flags.loopback == 0 && is_chassis_resident("cr-DR-S3")), action=(ct_snat_in_czone;)
-  table=??(lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 192.168.0.10 && inport == "DR-S3" && flags.loopback == 1 && flags.use_snat_zone == 1 && is_chassis_resident("cr-DR-S3")), action=(ct_snat;)
+  table=??(lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 10.0.0.10 && inport == "DR-S2" && is_chassis_resident("cr-DR-S2")), action=(ct_snat;)
+  table=??(lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 172.16.1.10 && inport == "DR-S1" && is_chassis_resident("cr-DR-S1")), action=(ct_snat;)
+  table=??(lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 192.168.0.10 && inport == "DR-S3" && is_chassis_resident("cr-DR-S3")), action=(ct_snat;)
 ])
 
 AT_CHECK([grep lr_out_snat lrflows | grep ct_snat | sed 's/table=../table=??/' | sort], [0], [dnl
-  table=??(lr_out_snat        ), priority=161  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S1" && is_chassis_resident("cr-DR-S1")), action=(ct_snat_in_czone(172.16.1.10);)
-  table=??(lr_out_snat        ), priority=161  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S2" && is_chassis_resident("cr-DR-S2")), action=(ct_snat_in_czone(10.0.0.10);)
-  table=??(lr_out_snat        ), priority=161  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S3" && is_chassis_resident("cr-DR-S3")), action=(ct_snat_in_czone(192.168.0.10);)
-  table=??(lr_out_snat        ), priority=162  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S1" && is_chassis_resident("cr-DR-S1") && reg9[[4]] == 1), action=(reg9[[4]] = 0; ct_snat(172.16.1.10);)
-  table=??(lr_out_snat        ), priority=162  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S2" && is_chassis_resident("cr-DR-S2") && reg9[[4]] == 1), action=(reg9[[4]] = 0; ct_snat(10.0.0.10);)
-  table=??(lr_out_snat        ), priority=162  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S3" && is_chassis_resident("cr-DR-S3") && reg9[[4]] == 1), action=(reg9[[4]] = 0; ct_snat(192.168.0.10);)
+  table=??(lr_out_snat        ), priority=161  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S1" && is_chassis_resident("cr-DR-S1")), action=(ct_snat(172.16.1.10);)
+  table=??(lr_out_snat        ), priority=161  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S2" && is_chassis_resident("cr-DR-S2")), action=(ct_snat(10.0.0.10);)
+  table=??(lr_out_snat        ), priority=161  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S3" && is_chassis_resident("cr-DR-S3")), action=(ct_snat(192.168.0.10);)
 ])
 
 AT_CHECK([grep lr_in_dnat lrflows | grep ct_dnat | sed 's/table=../table=??/' | sort], [0], [dnl
-  table=??(lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 10.0.0.10 && inport == "DR-S2" && is_chassis_resident("cr-DR-S2")), action=(ct_dnat_in_czone(20.0.0.10);)
-  table=??(lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 172.16.1.10 && inport == "DR-S1" && is_chassis_resident("cr-DR-S1")), action=(ct_dnat_in_czone(20.0.0.10);)
-  table=??(lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 192.168.0.10 && inport == "DR-S3" && is_chassis_resident("cr-DR-S3")), action=(ct_dnat_in_czone(20.0.0.10);)
+  table=??(lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 10.0.0.10 && inport == "DR-S2" && is_chassis_resident("cr-DR-S2")), action=(ct_dnat(20.0.0.10);)
+  table=??(lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 172.16.1.10 && inport == "DR-S1" && is_chassis_resident("cr-DR-S1")), action=(ct_dnat(20.0.0.10);)
+  table=??(lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 192.168.0.10 && inport == "DR-S3" && is_chassis_resident("cr-DR-S3")), action=(ct_dnat(20.0.0.10);)
 ])
 
 AT_CHECK([grep lr_out_undnat lrflows | grep ct_dnat | sed 's/table=../table=??/' | sort], [0], [dnl
-  table=??(lr_out_undnat      ), priority=100  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S1" && is_chassis_resident("cr-DR-S1")), action=(ct_dnat_in_czone;)
-  table=??(lr_out_undnat      ), priority=100  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S2" && is_chassis_resident("cr-DR-S2")), action=(ct_dnat_in_czone;)
-  table=??(lr_out_undnat      ), priority=100  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S3" && is_chassis_resident("cr-DR-S3")), action=(ct_dnat_in_czone;)
+  table=??(lr_out_undnat      ), priority=100  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S1" && is_chassis_resident("cr-DR-S1")), action=(ct_dnat;)
+  table=??(lr_out_undnat      ), priority=100  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S2" && is_chassis_resident("cr-DR-S2")), action=(ct_dnat;)
+  table=??(lr_out_undnat      ), priority=100  , match=(ip && ip4.src == 20.0.0.10 && outport == "DR-S3" && is_chassis_resident("cr-DR-S3")), action=(ct_dnat;)
 ])
 
 check ovn-nbctl --wait=sb lr-nat-del DR dnat_and_snat
diff --git a/tests/ovn.at b/tests/ovn.at
index 40d701bb2..d4df6ce28 100644
--- a/tests/ovn.at
+++ b/tests/ovn.at
@@ -33332,6 +33332,9 @@  check ovn-nbctl lrp-set-gateway-chassis lr0-ext hv1
 check ovn-nbctl lr-nat-add lr0 snat 172.16.0.2 10.0.0.0/24
 check ovn-nbctl lr-nat-add lr0 dnat 172.16.0.2 10.0.0.2
 
+# Set lr0 to use common zone
+check ovn-nbctl set logical_router lr0 options:use_common_zone="true"
+
 check ovn-nbctl --wait=hv sync
 # Use constants so that if tables or registers change, this test can
 # be updated easily.
diff --git a/tests/system-ovn.at b/tests/system-ovn.at
index 501f9ad06..e0e7a7deb 100644
--- a/tests/system-ovn.at
+++ b/tests/system-ovn.at
@@ -9296,6 +9296,7 @@  AT_CLEANUP
 OVN_FOR_EACH_NORTHD([
 AT_SETUP([SNAT in separate zone from DNAT])
 
+AT_SKIP_IF([test $HAVE_NC = no])
 CHECK_CONNTRACK()
 CHECK_CONNTRACK_NAT()
 ovn_start
@@ -9368,30 +9369,78 @@  ADD_VETH(vm2, vm2, br-int, "173.0.2.2/24", "00:de:ad:01:00:02", \
          "173.0.2.1")
 
 check ovn-nbctl lr-nat-add r1 dnat_and_snat 172.16.0.101 173.0.1.2 vm1 00:00:00:01:02:03
+
+wait_for_ports_up
 check ovn-nbctl --wait=hv sync
 
-# Next, make sure that a ping works as expected
-NS_CHECK_EXEC([vm1], [ping -q -c 3 -i 0.3 -w 2 30.0.0.1 | FORMAT_PING], \
-[0], [dnl
+# Create services that listen on UDP and TCP
+NETNS_DAEMONIZE([vm2], [nc -l -u 1234], [nc0.pid])
+NETNS_DAEMONIZE([vm2], [nc -l -k 1235], [nc1.pid])
+
+test_icmp() {
+    # Make sure that a ping works as expected
+    NS_CHECK_EXEC([vm1], [ping -c 3 -i 0.3 -w 2 30.0.0.1 | FORMAT_PING], \
+    [0], [dnl
 3 packets transmitted, 3 received, 0% packet loss, time 0ms
 ])
 
-# Finally, make sure that conntrack shows two separate zones being used for
-# DNAT and SNAT
-AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(30.0.0.1) | \
-sed -e 's/zone=[[0-9]]*/zone=<cleared>/'], [0], [dnl
+    # Make sure that conntrack shows two separate zones being used for
+    # DNAT and SNAT
+    AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(30.0.0.1) | \
+    sed -e 's/zone=[[0-9]]*/zone=<cleared>/'], [0], [dnl
 icmp,orig=(src=173.0.1.2,dst=30.0.0.1,id=<cleared>,type=8,code=0),reply=(src=172.16.0.102,dst=173.0.1.2,id=<cleared>,type=0,code=0),zone=<cleared>,mark=2
 ])
 
-# The final two entries appear identical here. That is because FORMAT_CT
-# scrubs the zone numbers. In actuality, the zone numbers are different,
-# which is why there are two entries.
-AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(172.16.0.102) | \
-sed -e 's/zone=[[0-9]]*/zone=<cleared>/'], [0], [dnl
+    AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(172.16.0.102) | \
+    sed -e 's/zone=[[0-9]]*/zone=<cleared>/'], [0], [dnl
 icmp,orig=(src=172.16.0.101,dst=172.16.0.102,id=<cleared>,type=8,code=0),reply=(src=173.0.2.2,dst=172.16.0.101,id=<cleared>,type=0,code=0),zone=<cleared>
 icmp,orig=(src=173.0.1.2,dst=172.16.0.102,id=<cleared>,type=8,code=0),reply=(src=172.16.0.102,dst=172.16.0.101,id=<cleared>,type=0,code=0),zone=<cleared>
-icmp,orig=(src=173.0.1.2,dst=172.16.0.102,id=<cleared>,type=8,code=0),reply=(src=172.16.0.102,dst=172.16.0.101,id=<cleared>,type=0,code=0),zone=<cleared>
 ])
+}
+
+test_udp() {
+    NS_CHECK_EXEC([vm1], [nc -u 30.0.0.1 1234 -p 1222 -z])
+
+    AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(30.0.0.1) | \
+    sed -e 's/zone=[[0-9]]*/zone=<cleared>/'], [0], [dnl
+udp,orig=(src=173.0.1.2,dst=30.0.0.1,sport=<cleared>,dport=<cleared>),reply=(src=172.16.0.102,dst=173.0.1.2,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=2
+])
+
+    AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(172.16.0.102) | \
+    sed -e 's/zone=[[0-9]]*/zone=<cleared>/'], [0], [dnl
+udp,orig=(src=172.16.0.101,dst=172.16.0.102,sport=<cleared>,dport=<cleared>),reply=(src=173.0.2.2,dst=172.16.0.101,sport=<cleared>,dport=<cleared>),zone=<cleared>
+udp,orig=(src=173.0.1.2,dst=172.16.0.102,sport=<cleared>,dport=<cleared>),reply=(src=172.16.0.102,dst=172.16.0.101,sport=<cleared>,dport=<cleared>),zone=<cleared>
+])
+}
+
+test_tcp() {
+    NS_CHECK_EXEC([vm1], [nc 30.0.0.1 1235 -z])
+
+    AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(30.0.0.1) | \
+    sed -e 's/zone=[[0-9]]*/zone=<cleared>/'], [0], [dnl
+tcp,orig=(src=173.0.1.2,dst=30.0.0.1,sport=<cleared>,dport=<cleared>),reply=(src=172.16.0.102,dst=173.0.1.2,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=2,protoinfo=(state=<cleared>)
+])
+
+    AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(172.16.0.102) | \
+    sed -e 's/zone=[[0-9]]*/zone=<cleared>/'], [0], [dnl
+tcp,orig=(src=172.16.0.101,dst=172.16.0.102,sport=<cleared>,dport=<cleared>),reply=(src=173.0.2.2,dst=172.16.0.101,sport=<cleared>,dport=<cleared>),zone=<cleared>,protoinfo=(state=<cleared>)
+tcp,orig=(src=173.0.1.2,dst=172.16.0.102,sport=<cleared>,dport=<cleared>),reply=(src=172.16.0.102,dst=172.16.0.101,sport=<cleared>,dport=<cleared>),zone=<cleared>,protoinfo=(state=<cleared>)
+])
+}
+
+for type in icmp udp tcp; do
+    AS_BOX([Testing $type])
+    # First run: the first packet needs to pass through pinctrl buffering
+    check ovs-appctl dpctl/flush-conntrack
+    ovn-sbctl --all destroy mac_binding
+    wait_row_count mac_binding 0
+    test_$type
+
+    # Second run, with the MAC binding already established
+    check ovs-appctl dpctl/flush-conntrack
+    wait_row_count mac_binding 1 ip="172.16.0.102"
+    test_$type
+done
 
 OVS_APP_EXIT_AND_WAIT([ovn-controller])