
[ovs-dev,v6,1/3] conntrack: Fix race for NAT cleanup.

Message ID 1552687280-40867-1-git-send-email-dlu998@gmail.com
State Accepted
Commit a720a7fa80b2fdf1bb5f5b9e706191a31ae02dca
Series [ovs-dev,v6,1/3] conntrack: Fix race for NAT cleanup.

Commit Message

Darrell Ball March 15, 2019, 10:01 p.m. UTC
Reference lists are not fully protected during cleanup of
NAT connections where the bucket lock is transiently not held during
list traversal.  This can lead to referencing freed memory during
cleaning from multiple contexts.  Fix this by protecting with
the existing 'cleanup' mutex in the missed cases where 'conn_clean()'
is called.  'conntrack_flush()' is converted to expiry list traversal
to support the proper bucket level protection with the 'cleanup' mutex.

The NAT exhaustion case cleanup in 'conn_not_found()' is also modified
to avoid the same issue.

Fixes: 286de2729955 ("dpdk: Userspace Datapath: Introduce NAT Support.")
Reported-by: solomon <liwei.solomon@gmail.com>
Reported-at: https://mail.openvswitch.org/pipermail/ovs-dev/2019-March/357056.html
Tested-by: solomon <liwei.solomon@gmail.com>
Signed-off-by: Darrell Ball <dlu998@gmail.com>
---

This patch is targeted at earlier releases, as the newer RCU-based patches
inherently don't have this race.

Backport to 2.8.
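
The core of the fix is the lock ordering used by the new 'conn_clean_safe()'
helper in the diff below.  In sketch form (simplified from the patch; the
re-lookup guards against the conn having been freed by another context while
the bucket lock was not held):

    /* Take the bucket's 'cleanup' mutex, then the bucket lock, and re-look
     * up the connection under both before cleaning it: another context may
     * already have cleaned and freed it. */
    ovs_mutex_lock(&ctb->cleanup_mutex);
    ct_lock_lock(&ctb->lock);
    conn = conn_lookup_def(&conn->key, ctb, hash);
    if (conn) {
        conn_clean(ct, conn, ctb);
    }
    ct_lock_unlock(&ctb->lock);
    ovs_mutex_unlock(&ctb->cleanup_mutex);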

v6: Changed some comments to lock annotations and added comments to a few
    functions.
    Changed the name of conn_lookup_any() to conn_lookup_def() to
    reflect the usage, which is now enforced.

v5: Fix a compiler warning seen with recent compiler versions (reported by
    Ilya) about a local variable going out of scope; a minimal example is
    sketched after these notes.  I don't think stack memory is reclaimed
    when block scope ends, but theoretically some compiler could do it.

    Moved "structure copy -> memcpy" changes into a new patch 3.

    Add function comment to conn_clean_safe().

v4: Fix exhaustion case cleanup race in conn_not_found() as well.
    At the same time, simplify the related code.

v3: Use cleanup_mutex in conntrack_destroy().

v2: Fix typo.
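
A minimal illustration of the v5 warning (a hypothetical standalone example,
not code from this patch):

    /* A pointer to a block-scoped local escapes its block and is used after
     * the block ends.  The object's lifetime formally ends at the closing
     * brace, so a compiler may warn about (and is free to reuse) the slot,
     * even though in practice the stack memory is usually still intact. */
    #include <stdio.h>

    int
    main(void)
    {
        int *p = NULL;
        {
            int local = 42;   /* Lifetime ends at the closing brace. */
            p = &local;
        }
        printf("%d\n", *p);   /* Undefined behavior: 'local' is out of scope. */
        return 0;
    }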



 lib/conntrack.c | 142 ++++++++++++++++++++++++++++++++++++++------------------
 1 file changed, 98 insertions(+), 44 deletions(-)

Comments

Ben Pfaff March 15, 2019, 10:56 p.m. UTC | #1
On Fri, Mar 15, 2019 at 03:01:18PM -0700, Darrell Ball wrote:
> Reference lists are not fully protected during cleanup of
> NAT connections where the bucket lock is transiently not held during
> list traversal.  This can lead to referencing freed memory during
> cleaning from multiple contexts.  Fix this by protecting with
> the existing 'cleanup' mutex in the missed cases where 'conn_clean()'
> is called.  'conntrack_flush()' is converted to expiry list traversal
> to support the proper bucket level protection with the 'cleanup' mutex.
> 
> The NAT exhaustion case cleanup in 'conn_not_found()' is also modified
> to avoid the same issue.
> 
> Fixes: 286de2729955 ("dpdk: Userspace Datapath: Introduce NAT Support.")
> Reported-by: solomon <liwei.solomon@gmail.com>
> Reported-at: https://mail.openvswitch.org/pipermail/ovs-dev/2019-March/357056.html
> Tested-by: solomon <liwei.solomon@gmail.com>
> Signed-off-by: Darrell Ball <dlu998@gmail.com>
> ---
> 
> This patch is targeted for earlier releases as new RCU patches
> inherently don't have this race.
> 
> Backport to 2.8.

Thanks.  I applied this to master, branch-2.11, and branch-2.10.  2.9
and 2.8 had conflicts.
0-day Robot March 15, 2019, 10:57 p.m. UTC | #2
Bleep bloop.  Greetings Darrell Ball, I am a robot and I have tried out your patch.
Thanks for your contribution.

I encountered some error that I wasn't expecting.  See the details below.


git-am:
Failed to merge in the changes.
Patch failed at 0001 conntrack: Fix race for NAT cleanup.
The copy of the patch that failed is found in:
   /var/lib/jenkins/jobs/upstream_build_from_pw/workspace/.git/rebase-apply/patch
When you have resolved this problem, run "git am --resolved".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".


Please check this out.  If you feel there has been an error, please email aconole@bytheb.org

Thanks,
0-day Robot
Darrell Ball March 15, 2019, 11:17 p.m. UTC | #3
On Fri, Mar 15, 2019 at 3:56 PM Ben Pfaff <blp@ovn.org> wrote:

> On Fri, Mar 15, 2019 at 03:01:18PM -0700, Darrell Ball wrote:
> > Reference lists are not fully protected during cleanup of
> > NAT connections where the bucket lock is transiently not held during
> > list traversal.  This can lead to referencing freed memory during
> > cleaning from multiple contexts.  Fix this by protecting with
> > the existing 'cleanup' mutex in the missed cases where 'conn_clean()'
> > is called.  'conntrack_flush()' is converted to expiry list traversal
> > to support the proper bucket level protection with the 'cleanup' mutex.
> >
> > The NAT exhaustion case cleanup in 'conn_not_found()' is also modified
> > to avoid the same issue.
> >
> > Fixes: 286de2729955 ("dpdk: Userspace Datapath: Introduce NAT Support.")
> > Reported-by: solomon <liwei.solomon@gmail.com>
> > Reported-at:
> https://mail.openvswitch.org/pipermail/ovs-dev/2019-March/357056.html
> > Tested-by: solomon <liwei.solomon@gmail.com>
> > Signed-off-by: Darrell Ball <dlu998@gmail.com>
> > ---
> >
> > This patch is targeted for earlier releases as new RCU patches
> > inherently don't have this race.
> >
> > Backport to 2.8.
>
> Thanks.  I applied this to master, branch-2.11, and branch-2.10.  2.9
> and 2.8 had conflicts.
>

I will create the backport patches for 2.9 and 2.8.

Regarding branch 2.8 - it has diverged quite a bit from branches >=2.9,
because of some small features and cosmetic changes that went into 2.9.
One option would be to bring 2.8 into sync with 2.9 in one patch;
alternatively, backport all dependencies and fixes separately.  Thoughts?
Ben Pfaff March 15, 2019, 11:31 p.m. UTC | #4
On Fri, Mar 15, 2019 at 04:17:34PM -0700, Darrell Ball wrote:
> On Fri, Mar 15, 2019 at 3:56 PM Ben Pfaff <blp@ovn.org> wrote:
> 
> > On Fri, Mar 15, 2019 at 03:01:18PM -0700, Darrell Ball wrote:
> > > Reference lists are not fully protected during cleanup of
> > > NAT connections where the bucket lock is transiently not held during
> > > list traversal.  This can lead to referencing freed memory during
> > > cleaning from multiple contexts.  Fix this by protecting with
> > > the existing 'cleanup' mutex in the missed cases where 'conn_clean()'
> > > is called.  'conntrack_flush()' is converted to expiry list traversal
> > > to support the proper bucket level protection with the 'cleanup' mutex.
> > >
> > > The NAT exhaustion case cleanup in 'conn_not_found()' is also modified
> > > to avoid the same issue.
> > >
> > > Fixes: 286de2729955 ("dpdk: Userspace Datapath: Introduce NAT Support.")
> > > Reported-by: solomon <liwei.solomon@gmail.com>
> > > Reported-at:
> > https://mail.openvswitch.org/pipermail/ovs-dev/2019-March/357056.html
> > > Tested-by: solomon <liwei.solomon@gmail.com>
> > > Signed-off-by: Darrell Ball <dlu998@gmail.com>
> > > ---
> > >
> > > This patch is targeted for earlier releases as new RCU patches
> > > inherently don't have this race.
> > >
> > > Backport to 2.8.
> >
> > Thanks.  I applied this to master, branch-2.11, and branch-2.10.  2.9
> > and 2.8 had conflicts.
> >
> 
> I will create the backport patches for 2.9 and 2.8.
> 
> Regarding branch 2.8 - it has diverged quite a bit from branches >=2.9,
> because of some small features and cosmetic changes that went into 2.9.
> One option would be to bring 2.8 into sync with 2.9 in one patch;
> alternatively, backport all dependencies and fixes separately.  Thoughts?

Usually it's better to backport them separately, because it makes it
clear at a glance what happened in a list of patches.  But that can
sometimes be a lot of trouble, and in that case a single patch can make
sense.
Darrell Ball March 15, 2019, 11:40 p.m. UTC | #5
On Fri, Mar 15, 2019 at 4:31 PM Ben Pfaff <blp@ovn.org> wrote:

> On Fri, Mar 15, 2019 at 04:17:34PM -0700, Darrell Ball wrote:
> > On Fri, Mar 15, 2019 at 3:56 PM Ben Pfaff <blp@ovn.org> wrote:
> >
> > > On Fri, Mar 15, 2019 at 03:01:18PM -0700, Darrell Ball wrote:
> > > > Reference lists are not fully protected during cleanup of
> > > > NAT connections where the bucket lock is transiently not held during
> > > > list traversal.  This can lead to referencing freed memory during
> > > > cleaning from multiple contexts.  Fix this by protecting with
> > > > the existing 'cleanup' mutex in the missed cases where 'conn_clean()'
> > > > is called.  'conntrack_flush()' is converted to expiry list traversal
> > > > to support the proper bucket level protection with the 'cleanup'
> mutex.
> > > >
> > > > The NAT exhaustion case cleanup in 'conn_not_found()' is also
> modified
> > > > to avoid the same issue.
> > > >
> > > > Fixes: 286de2729955 ("dpdk: Userspace Datapath: Introduce NAT
> Support.")
> > > > Reported-by: solomon <liwei.solomon@gmail.com>
> > > > Reported-at:
> > > https://mail.openvswitch.org/pipermail/ovs-dev/2019-March/357056.html
> > > > Tested-by: solomon <liwei.solomon@gmail.com>
> > > > Signed-off-by: Darrell Ball <dlu998@gmail.com>
> > > > ---
> > > >
> > > > This patch is targeted for earlier releases as new RCU patches
> > > > inherently don't have this race.
> > > >
> > > > Backport to 2.8.
> > >
> > > Thanks.  I applied this to master, branch-2.11, and branch-2.10.  2.9
> > > and 2.8 had conflicts.
> > >
> >
> > I will create the backport patches for 2.9 and 2.8.
> >
> > Regarding branch 2.8 - it has diverged quite a bit from branches >=2.9,
> > because of some small features and cosmetic changes that went into 2.9.
> > One option would be to bring 2.8 into sync with 2.9 in one patch;
> > alternatively, backport all dependencies and fixes separately.  Thoughts?
>
> Usually it's better to backport them separately, because it makes it
> clear at a glance what happened in a list of patches.


yep


> But that can
> sometimes be a lot of trouble, and in that case a single patch can make
> sense.
>

It is the "lot of trouble" part I am trying to avoid. Let me see.

Patch

diff --git a/lib/conntrack.c b/lib/conntrack.c
index 691782c..dd6e19b 100644
--- a/lib/conntrack.c
+++ b/lib/conntrack.c
@@ -355,7 +355,7 @@  conntrack_destroy(struct conntrack *ct)
         struct conntrack_bucket *ctb = &ct->buckets[i];
         struct conn *conn;
 
-        ovs_mutex_destroy(&ctb->cleanup_mutex);
+        ovs_mutex_lock(&ctb->cleanup_mutex);
         ct_lock_lock(&ctb->lock);
         HMAP_FOR_EACH_POP (conn, node, &ctb->connections) {
             if (conn->conn_type == CT_CONN_TYPE_DEFAULT) {
@@ -365,7 +365,9 @@  conntrack_destroy(struct conntrack *ct)
         }
         hmap_destroy(&ctb->connections);
         ct_lock_unlock(&ctb->lock);
+        ovs_mutex_unlock(&ctb->cleanup_mutex);
         ct_lock_destroy(&ctb->lock);
+        ovs_mutex_destroy(&ctb->cleanup_mutex);
     }
     ct_rwlock_wrlock(&ct->resources_lock);
     struct nat_conn_key_node *nat_conn_key_node;
@@ -753,6 +755,27 @@  conn_lookup(struct conntrack *ct, const struct conn_key *key, long long now)
     return ctx.conn;
 }
 
+/* Only used when looking up 'CT_CONN_TYPE_DEFAULT' conns. */
+static struct conn *
+conn_lookup_def(const struct conn_key *key,
+                const struct conntrack_bucket *ctb, uint32_t hash)
+    OVS_REQUIRES(ctb->lock)
+{
+    struct conn *conn = NULL;
+
+    HMAP_FOR_EACH_WITH_HASH (conn, node, hash, &ctb->connections) {
+        if (!conn_key_cmp(&conn->key, key)
+            && conn->conn_type == CT_CONN_TYPE_DEFAULT) {
+            break;
+        }
+        if (!conn_key_cmp(&conn->rev_key, key)
+            && conn->conn_type == CT_CONN_TYPE_DEFAULT) {
+            break;
+        }
+    }
+    return conn;
+}
+
 static void
 conn_seq_skew_set(struct conntrack *ct, const struct conn_key *key,
                   long long now, int seq_skew, bool seq_skew_dir)
@@ -823,6 +846,22 @@  conn_clean(struct conntrack *ct, struct conn *conn,
     }
 }
 
+/* Only called for 'CT_CONN_TYPE_DEFAULT' conns; must be called with no
+ * locks held and upon return no locks are held. */
+static void
+conn_clean_safe(struct conntrack *ct, struct conn *conn,
+                struct conntrack_bucket *ctb, uint32_t hash)
+{
+    ovs_mutex_lock(&ctb->cleanup_mutex);
+    ct_lock_lock(&ctb->lock);
+    conn = conn_lookup_def(&conn->key, ctb, hash);
+    if (conn) {
+        conn_clean(ct, conn, ctb);
+    }
+    ct_lock_unlock(&ctb->lock);
+    ovs_mutex_unlock(&ctb->cleanup_mutex);
+}
+
 static bool
 ct_verify_helper(const char *helper, enum ct_alg_ctl_type ct_alg_ctl)
 {
@@ -854,6 +893,7 @@  conn_not_found(struct conntrack *ct, struct dp_packet *pkt,
                enum ct_alg_ctl_type ct_alg_ctl)
 {
     struct conn *nc = NULL;
+    struct conn connl;
 
     if (!valid_new(pkt, &ctx->key)) {
         pkt->md.ct_state = CS_INVALID;
@@ -876,8 +916,9 @@  conn_not_found(struct conntrack *ct, struct dp_packet *pkt,
         }
 
         unsigned bucket = hash_to_bucket(ctx->hash);
-        nc = new_conn(&ct->buckets[bucket], pkt, &ctx->key, now);
-        ctx->conn = nc;
+        nc = &connl;
+        memset(nc, 0, sizeof *nc);
+        memcpy(&nc->key, &ctx->key, sizeof nc->key);
         nc->rev_key = nc->key;
         conn_key_reverse(&nc->rev_key);
 
@@ -921,6 +962,7 @@  conn_not_found(struct conntrack *ct, struct dp_packet *pkt,
                 ct_rwlock_wrlock(&ct->resources_lock);
                 bool nat_res = nat_select_range_tuple(ct, nc,
                                                       conn_for_un_nat_copy);
+                ct_rwlock_unlock(&ct->resources_lock);
 
                 if (!nat_res) {
                     goto nat_res_exhaustion;
@@ -929,14 +971,24 @@  conn_not_found(struct conntrack *ct, struct dp_packet *pkt,
                 /* Update nc with nat adjustments made to
                  * conn_for_un_nat_copy by nat_select_range_tuple(). */
                 *nc = *conn_for_un_nat_copy;
-                ct_rwlock_unlock(&ct->resources_lock);
             }
             conn_for_un_nat_copy->conn_type = CT_CONN_TYPE_UN_NAT;
             conn_for_un_nat_copy->nat_info = NULL;
             conn_for_un_nat_copy->alg = NULL;
             nat_packet(pkt, nc, ctx->icmp_related);
         }
-        hmap_insert(&ct->buckets[bucket].connections, &nc->node, ctx->hash);
+        struct conn *nconn = new_conn(&ct->buckets[bucket], pkt, &ctx->key,
+                                      now);
+        memcpy(&nconn->key, &nc->key, sizeof nconn->key);
+        memcpy(&nconn->rev_key, &nc->rev_key, sizeof nconn->rev_key);
+        memcpy(&nconn->master_key, &nc->master_key, sizeof nconn->master_key);
+        nconn->alg_related = nc->alg_related;
+        nconn->alg = nc->alg;
+        nconn->mark = nc->mark;
+        nconn->label = nc->label;
+        nconn->nat_info = nc->nat_info;
+        ctx->conn = nc = nconn;
+        hmap_insert(&ct->buckets[bucket].connections, &nconn->node, ctx->hash);
         atomic_count_inc(&ct->n_conn);
     }
 
@@ -949,13 +1001,8 @@  conn_not_found(struct conntrack *ct, struct dp_packet *pkt,
      * against with firewall rules or a separate firewall.
      * Also using zone partitioning can limit DoS impact. */
 nat_res_exhaustion:
-    ovs_list_remove(&nc->exp_node);
-    delete_conn(nc);
-    /* conn_for_un_nat_copy is a local variable in process_one; this
-     * memset() serves to document that conn_for_un_nat_copy is from
-     * this point on unused. */
-    memset(conn_for_un_nat_copy, 0, sizeof *conn_for_un_nat_copy);
-    ct_rwlock_unlock(&ct->resources_lock);
+    free(nc->alg);
+    free(nc->nat_info);
     static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 5);
     VLOG_WARN_RL(&rl, "Unable to NAT due to tuple space exhaustion - "
                  "if DoS attack, use firewalling and/or zone partitioning.");
@@ -969,6 +1016,7 @@  conn_update_state(struct conntrack *ct, struct dp_packet *pkt,
     OVS_REQUIRES(ct->buckets[bucket].lock)
 {
     bool create_new_conn = false;
+    struct conn lconn;
 
     if (ctx->icmp_related) {
         pkt->md.ct_state |= CS_RELATED;
@@ -995,7 +1043,10 @@  conn_update_state(struct conntrack *ct, struct dp_packet *pkt,
             pkt->md.ct_state = CS_INVALID;
             break;
         case CT_UPDATE_NEW:
-            conn_clean(ct, *conn, &ct->buckets[bucket]);
+            memcpy(&lconn, *conn, sizeof lconn);
+            ct_lock_unlock(&ct->buckets[bucket].lock);
+            conn_clean_safe(ct, &lconn, &ct->buckets[bucket], ctx->hash);
+            ct_lock_lock(&ct->buckets[bucket].lock);
             create_new_conn = true;
             break;
         default:
@@ -1184,8 +1235,12 @@  process_one(struct conntrack *ct, struct dp_packet *pkt,
     conn = ctx->conn;
 
     /* Delete found entry if in wrong direction. 'force' implies commit. */
-    if (conn && force && ctx->reply) {
-        conn_clean(ct, conn, &ct->buckets[bucket]);
+    if (OVS_UNLIKELY(force && ctx->reply && conn)) {
+        struct conn lconn;
+        memcpy(&lconn, conn, sizeof lconn);
+        ct_lock_unlock(&ct->buckets[bucket].lock);
+        conn_clean_safe(ct, &lconn, &ct->buckets[bucket], ctx->hash);
+        ct_lock_lock(&ct->buckets[bucket].lock);
         conn = NULL;
     }
 
@@ -1391,19 +1446,17 @@  sweep_bucket(struct conntrack *ct, struct conntrack_bucket *ctb,
 
     for (unsigned i = 0; i < N_CT_TM; i++) {
         LIST_FOR_EACH_SAFE (conn, next, exp_node, &ctb->exp_lists[i]) {
-            if (conn->conn_type == CT_CONN_TYPE_DEFAULT) {
-                if (!conn_expired(conn, now) || count >= limit) {
-                    min_expiration = MIN(min_expiration, conn->expiration);
-                    if (count >= limit) {
-                        /* Do not check other lists. */
-                        COVERAGE_INC(conntrack_long_cleanup);
-                        return min_expiration;
-                    }
-                    break;
+            if (!conn_expired(conn, now) || count >= limit) {
+                min_expiration = MIN(min_expiration, conn->expiration);
+                if (count >= limit) {
+                    /* Do not check other lists. */
+                    COVERAGE_INC(conntrack_long_cleanup);
+                    return min_expiration;
                 }
-                conn_clean(ct, conn, ctb);
-                count++;
+                break;
             }
+            conn_clean(ct, conn, ctb);
+            count++;
         }
     }
     return min_expiration;
@@ -2344,12 +2397,7 @@  static struct conn *
 new_conn(struct conntrack_bucket *ctb, struct dp_packet *pkt,
          struct conn_key *key, long long now)
 {
-    struct conn *newconn = l4_protos[key->nw_proto]->new_conn(ctb, pkt, now);
-    if (newconn) {
-        newconn->key = *key;
-    }
-
-    return newconn;
+    return l4_protos[key->nw_proto]->new_conn(ctb, pkt, now);
 }
 
 static void
@@ -2547,16 +2595,19 @@  int
 conntrack_flush(struct conntrack *ct, const uint16_t *zone)
 {
     for (unsigned i = 0; i < CONNTRACK_BUCKETS; i++) {
-        struct conn *conn, *next;
-
-        ct_lock_lock(&ct->buckets[i].lock);
-        HMAP_FOR_EACH_SAFE (conn, next, node, &ct->buckets[i].connections) {
-            if ((!zone || *zone == conn->key.zone) &&
-                (conn->conn_type == CT_CONN_TYPE_DEFAULT)) {
-                conn_clean(ct, conn, &ct->buckets[i]);
+        struct conntrack_bucket *ctb = &ct->buckets[i];
+        ovs_mutex_lock(&ctb->cleanup_mutex);
+        ct_lock_lock(&ctb->lock);
+        for (unsigned j = 0; j < N_CT_TM; j++) {
+            struct conn *conn, *next;
+            LIST_FOR_EACH_SAFE (conn, next, exp_node, &ctb->exp_lists[j]) {
+                if (!zone || *zone == conn->key.zone) {
+                    conn_clean(ct, conn, ctb);
+                }
             }
         }
-        ct_lock_unlock(&ct->buckets[i].lock);
+        ct_lock_unlock(&ctb->lock);
+        ovs_mutex_unlock(&ctb->cleanup_mutex);
     }
 
     return 0;
@@ -2573,16 +2624,19 @@  conntrack_flush_tuple(struct conntrack *ct, const struct ct_dpif_tuple *tuple,
     tuple_to_conn_key(tuple, zone, &ctx.key);
     ctx.hash = conn_key_hash(&ctx.key, ct->hash_basis);
     unsigned bucket = hash_to_bucket(ctx.hash);
+    struct conntrack_bucket *ctb = &ct->buckets[bucket];
 
-    ct_lock_lock(&ct->buckets[bucket].lock);
-    conn_key_lookup(&ct->buckets[bucket], &ctx, time_msec());
+    ovs_mutex_lock(&ctb->cleanup_mutex);
+    ct_lock_lock(&ctb->lock);
+    conn_key_lookup(ctb, &ctx, time_msec());
     if (ctx.conn && ctx.conn->conn_type == CT_CONN_TYPE_DEFAULT) {
-        conn_clean(ct, ctx.conn, &ct->buckets[bucket]);
+        conn_clean(ct, ctx.conn, ctb);
     } else {
         VLOG_WARN("Must flush tuple using the original pre-NATed tuple");
         error = ENOENT;
     }
-    ct_lock_unlock(&ct->buckets[bucket].lock);
+    ct_lock_unlock(&ctb->lock);
+    ovs_mutex_unlock(&ctb->cleanup_mutex);
     return error;
 }