From patchwork Wed Nov 28 16:31:50 2018
X-Patchwork-Submitter: Darrell Ball
X-Patchwork-Id: 1004626
From: Darrell Ball
To: dlu998@gmail.com, dev@openvswitch.org
Date: Wed, 28 Nov 2018 08:31:50 -0800
Message-Id: <1543422714-100901-2-git-send-email-dlu998@gmail.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1543422714-100901-1-git-send-email-dlu998@gmail.com>
References: <1543422714-100901-1-git-send-email-dlu998@gmail.com>
Subject: [ovs-dev] [patch v2 1/5] conntrack: Stop exporting internal datastructures.

Stop exporting the main internal conntrack datastructures by making them static. Also stop passing a pointer to all the internal datastructures around as a parameter; only one or two are used on a given code path, and those can be referenced directly or passed explicitly where appropriate.

Signed-off-by: Darrell Ball --- lib/conntrack-private.h | 29 +++ lib/conntrack.c | 543 +++++++++++++++++++++++++----------------------- lib/conntrack.h | 106 ++-------- lib/dpif-netdev.c | 51 ++--- tests/test-conntrack.c | 26 ++- 5 files changed, 348 insertions(+), 407 deletions(-) diff --git a/lib/conntrack-private.h b/lib/conntrack-private.h index a344801..27ece38 100644 --- a/lib/conntrack-private.h +++ b/lib/conntrack-private.h @@ -119,6 +119,35 @@ enum ct_conn_type { CT_CONN_TYPE_UN_NAT, }; +/* Locking: + * + * The connections are kept in different buckets, which are completely + * independent. The connection bucket is determined by the hash of its key. + * + * Each bucket has two locks. Acquisition order is, from outermost to + * innermost: + * + * cleanup_mutex + * lock + * + * */ +struct conntrack_bucket { + /* Protects 'connections' and 'exp_lists'. Used in the fast path */ + struct ct_lock lock; + /* Contains the connections in the bucket, indexed by 'struct conn_key' */ + struct hmap connections OVS_GUARDED; + /* For each possible timeout we have a list of connections. When the + * timeout of a connection is updated, we move it to the back of the list. + * Since the connection in a list have the same relative timeout, the list + * will be ordered, with the oldest connections to the front. */ + struct ovs_list exp_lists[N_CT_TM] OVS_GUARDED; + + /* Protects 'next_cleanup'. Used to make sure that there's only one thread + * performing the cleanup.
*/ + struct ovs_mutex cleanup_mutex; + long long next_cleanup OVS_GUARDED; +}; + struct ct_l4_proto { struct conn *(*new_conn)(struct conntrack_bucket *, struct dp_packet *pkt, long long now); diff --git a/lib/conntrack.c b/lib/conntrack.c index 974f985..07ab0d0 100644 --- a/lib/conntrack.c +++ b/lib/conntrack.c @@ -76,9 +76,44 @@ enum ct_alg_ctl_type { CT_ALG_CTL_SIP, }; -static bool conn_key_extract(struct conntrack *, struct dp_packet *, - ovs_be16 dl_type, struct conn_lookup_ctx *, - uint16_t zone); +#define CONNTRACK_BUCKETS_SHIFT 8 +#define CONNTRACK_BUCKETS (1 << CONNTRACK_BUCKETS_SHIFT) +/* Independent buckets containing the connections */ +struct conntrack_bucket buckets[CONNTRACK_BUCKETS]; +/* Salt for hashing a connection key. */ +uint32_t hash_basis; +/* The thread performing periodic cleanup of the connection + * tracker */ +pthread_t clean_thread; +/* Latch to destroy the 'clean_thread' */ +struct latch clean_thread_exit; +/* Number of connections currently in the connection tracker. */ +atomic_count n_conn; +/* Connections limit. When this limit is reached, no new connection + * will be accepted. */ +atomic_uint n_conn_limit; +/* The following resources are referenced during nat connection + * creation and deletion. */ +struct hmap nat_conn_keys OVS_GUARDED; +/* Hash table for alg expectations. Expectations are created + * by control connections to help create data connections. */ +struct hmap alg_expectations OVS_GUARDED; +/* Used to lookup alg expectations from the control context. */ +struct hindex alg_expectation_refs OVS_GUARDED; +/* Expiry list for alg expectations. */ +struct ovs_list alg_exp_list OVS_GUARDED; +/* This lock is used during NAT connection creation and deletion; + * it is taken after a bucket lock and given back before that + * bucket unlock. + * This lock is similarly used to guard alg_expectations and + * alg_expectation_refs. If a bucket lock is also held during + * the normal code flow, then is must be taken first and released + * last. 
+ */ +struct ct_rwlock resources_lock; + +static bool conn_key_extract(struct dp_packet *, ovs_be16 dl_type, + struct conn_lookup_ctx *, uint16_t zone); static uint32_t conn_key_hash(const struct conn_key *, uint32_t basis); static void conn_key_reverse(struct conn_key *); static void conn_key_lookup(struct conntrack_bucket *ctb, @@ -101,23 +136,22 @@ static void set_label(struct dp_packet *, struct conn *, static void *clean_thread_main(void *f_); static struct nat_conn_key_node * -nat_conn_keys_lookup(struct hmap *nat_conn_keys, +nat_conn_keys_lookup(struct hmap *nat_conn_keys_, const struct conn_key *key, uint32_t basis); static bool -nat_conn_keys_insert(struct hmap *nat_conn_keys, +nat_conn_keys_insert(struct hmap *nat_conn_keys_, const struct conn *nat_conn, uint32_t hash_basis); static void -nat_conn_keys_remove(struct hmap *nat_conn_keys, +nat_conn_keys_remove(struct hmap *nat_conn_keys_, const struct conn_key *key, uint32_t basis); static bool -nat_select_range_tuple(struct conntrack *ct, const struct conn *conn, - struct conn *nat_conn); +nat_select_range_tuple(const struct conn *conn, struct conn *nat_conn); static uint8_t reverse_icmp_type(uint8_t type); @@ -139,8 +173,7 @@ repl_ftp_v4_addr(struct dp_packet *pkt, ovs_be32 v4_addr_rep, size_t addr_offset_from_ftp_data_start); static enum ftp_ctl_pkt -process_ftp_ctl_v4(struct conntrack *ct, - struct dp_packet *pkt, +process_ftp_ctl_v4(struct dp_packet *pkt, const struct conn *conn_for_expectation, ovs_be32 *v4_addr_rep, char **ftp_data_v4_start, @@ -151,8 +184,7 @@ detect_ftp_ctl_type(const struct conn_lookup_ctx *ctx, struct dp_packet *pkt); static void -expectation_clean(struct conntrack *ct, const struct conn_key *master_key, - uint32_t basis); +expectation_clean(const struct conn_key *master_key, uint32_t basis); static struct ct_l4_proto *l4_protos[] = { [IPPROTO_TCP] = &ct_proto_tcp, @@ -162,21 +194,18 @@ static struct ct_l4_proto *l4_protos[] = { }; static void -handle_ftp_ctl(struct conntrack *ct, const struct conn_lookup_ctx *ctx, - struct dp_packet *pkt, +handle_ftp_ctl(const struct conn_lookup_ctx *ctx, struct dp_packet *pkt, const struct conn *conn_for_expectation, long long now, enum ftp_ctl_pkt ftp_ctl, bool nat); static void -handle_tftp_ctl(struct conntrack *ct, - const struct conn_lookup_ctx *ctx OVS_UNUSED, +handle_tftp_ctl(const struct conn_lookup_ctx *ctx OVS_UNUSED, struct dp_packet *pkt, const struct conn *conn_for_expectation, long long now OVS_UNUSED, enum ftp_ctl_pkt ftp_ctl OVS_UNUSED, bool nat OVS_UNUSED); -typedef void (*alg_helper)(struct conntrack *ct, - const struct conn_lookup_ctx *ctx, +typedef void (*alg_helper)(const struct conn_lookup_ctx *ctx, struct dp_packet *pkt, const struct conn *conn_for_expectation, long long now, enum ftp_ctl_pkt ftp_ctl, @@ -307,20 +336,20 @@ ct_print_conn_info(const struct conn *c, const char *log_msg, /* Initializes the connection tracker 'ct'. 
The caller is responsible for * calling 'conntrack_destroy()', when the instance is not needed anymore */ void -conntrack_init(struct conntrack *ct) +conntrack_init(void) { long long now = time_msec(); - ct_rwlock_init(&ct->resources_lock); - ct_rwlock_wrlock(&ct->resources_lock); - hmap_init(&ct->nat_conn_keys); - hmap_init(&ct->alg_expectations); - hindex_init(&ct->alg_expectation_refs); - ovs_list_init(&ct->alg_exp_list); - ct_rwlock_unlock(&ct->resources_lock); + ct_rwlock_init(&resources_lock); + ct_rwlock_wrlock(&resources_lock); + hmap_init(&nat_conn_keys); + hmap_init(&alg_expectations); + hindex_init(&alg_expectation_refs); + ovs_list_init(&alg_exp_list); + ct_rwlock_unlock(&resources_lock); for (unsigned i = 0; i < CONNTRACK_BUCKETS; i++) { - struct conntrack_bucket *ctb = &ct->buckets[i]; + struct conntrack_bucket *ctb = &buckets[i]; ct_lock_init(&ctb->lock); ct_lock_lock(&ctb->lock); @@ -334,29 +363,29 @@ conntrack_init(struct conntrack *ct) ctb->next_cleanup = now + CT_TM_MIN; ovs_mutex_unlock(&ctb->cleanup_mutex); } - ct->hash_basis = random_uint32(); - atomic_count_init(&ct->n_conn, 0); - atomic_init(&ct->n_conn_limit, DEFAULT_N_CONN_LIMIT); - latch_init(&ct->clean_thread_exit); - ct->clean_thread = ovs_thread_create("ct_clean", clean_thread_main, ct); + hash_basis = random_uint32(); + atomic_count_init(&n_conn, 0); + atomic_init(&n_conn_limit, DEFAULT_N_CONN_LIMIT); + latch_init(&clean_thread_exit); + clean_thread = ovs_thread_create("ct_clean", clean_thread_main, NULL); } /* Destroys the connection tracker 'ct' and frees all the allocated memory. */ void -conntrack_destroy(struct conntrack *ct) +conntrack_destroy(void) { - latch_set(&ct->clean_thread_exit); - pthread_join(ct->clean_thread, NULL); - latch_destroy(&ct->clean_thread_exit); + latch_set(&clean_thread_exit); + pthread_join(clean_thread, NULL); + latch_destroy(&clean_thread_exit); for (unsigned i = 0; i < CONNTRACK_BUCKETS; i++) { - struct conntrack_bucket *ctb = &ct->buckets[i]; + struct conntrack_bucket *ctb = &buckets[i]; struct conn *conn; ovs_mutex_destroy(&ctb->cleanup_mutex); ct_lock_lock(&ctb->lock); HMAP_FOR_EACH_POP (conn, node, &ctb->connections) { if (conn->conn_type == CT_CONN_TYPE_DEFAULT) { - atomic_count_dec(&ct->n_conn); + atomic_count_dec(&n_conn); } delete_conn(conn); } @@ -364,23 +393,23 @@ conntrack_destroy(struct conntrack *ct) ct_lock_unlock(&ctb->lock); ct_lock_destroy(&ctb->lock); } - ct_rwlock_wrlock(&ct->resources_lock); + ct_rwlock_wrlock(&resources_lock); struct nat_conn_key_node *nat_conn_key_node; - HMAP_FOR_EACH_POP (nat_conn_key_node, node, &ct->nat_conn_keys) { + HMAP_FOR_EACH_POP (nat_conn_key_node, node, &nat_conn_keys) { free(nat_conn_key_node); } - hmap_destroy(&ct->nat_conn_keys); + hmap_destroy(&nat_conn_keys); struct alg_exp_node *alg_exp_node; - HMAP_FOR_EACH_POP (alg_exp_node, node, &ct->alg_expectations) { + HMAP_FOR_EACH_POP (alg_exp_node, node, &alg_expectations) { free(alg_exp_node); } - ovs_list_poison(&ct->alg_exp_list); - hmap_destroy(&ct->alg_expectations); - hindex_destroy(&ct->alg_expectation_refs); - ct_rwlock_unlock(&ct->resources_lock); - ct_rwlock_destroy(&ct->resources_lock); + ovs_list_poison(&alg_exp_list); + hmap_destroy(&alg_expectations); + hindex_destroy(&alg_expectation_refs); + ct_rwlock_unlock(&resources_lock); + ct_rwlock_destroy(&resources_lock); } static unsigned hash_to_bucket(uint32_t hash) @@ -513,14 +542,14 @@ alg_src_ip_wc(enum ct_alg_ctl_type alg_ctl_type) } static void -handle_alg_ctl(struct conntrack *ct, const struct conn_lookup_ctx 
*ctx, - struct dp_packet *pkt, enum ct_alg_ctl_type ct_alg_ctl, - const struct conn *conn, long long now, bool nat, +handle_alg_ctl(const struct conn_lookup_ctx *ctx, struct dp_packet *pkt, + enum ct_alg_ctl_type ct_alg_ctl, const struct conn *conn, + long long now, bool nat, const struct conn *conn_for_expectation) { /* ALG control packet handling with expectation creation. */ if (OVS_UNLIKELY(alg_helpers[ct_alg_ctl] && conn && conn->alg)) { - alg_helpers[ct_alg_ctl](ct, ctx, pkt, conn_for_expectation, now, + alg_helpers[ct_alg_ctl](ctx, pkt, conn_for_expectation, now, CT_FTP_CTL_INTEREST, nat); } } @@ -743,79 +772,78 @@ un_nat_packet(struct dp_packet *pkt, const struct conn *conn, * and a hash would have already been needed. Hence, this function * is just intended for code clarity. */ static struct conn * -conn_lookup(struct conntrack *ct, const struct conn_key *key, long long now) +conn_lookup(const struct conn_key *key, long long now) { struct conn_lookup_ctx ctx; ctx.conn = NULL; ctx.key = *key; - ctx.hash = conn_key_hash(key, ct->hash_basis); + ctx.hash = conn_key_hash(key, hash_basis); unsigned bucket = hash_to_bucket(ctx.hash); - conn_key_lookup(&ct->buckets[bucket], &ctx, now); + conn_key_lookup(&buckets[bucket], &ctx, now); return ctx.conn; } static void -conn_seq_skew_set(struct conntrack *ct, const struct conn_key *key, - long long now, int seq_skew, bool seq_skew_dir) +conn_seq_skew_set(const struct conn_key *key, long long now, int seq_skew, + bool seq_skew_dir) { - unsigned bucket = hash_to_bucket(conn_key_hash(key, ct->hash_basis)); - ct_lock_lock(&ct->buckets[bucket].lock); - struct conn *conn = conn_lookup(ct, key, now); + unsigned bucket = hash_to_bucket(conn_key_hash(key, hash_basis)); + ct_lock_lock(&buckets[bucket].lock); + struct conn *conn = conn_lookup(key, now); if (conn && seq_skew) { conn->seq_skew = seq_skew; conn->seq_skew_dir = seq_skew_dir; } - ct_lock_unlock(&ct->buckets[bucket].lock); + ct_lock_unlock(&buckets[bucket].lock); } static void -nat_clean(struct conntrack *ct, struct conn *conn, - struct conntrack_bucket *ctb) +nat_clean(struct conn *conn, struct conntrack_bucket *ctb) OVS_REQUIRES(ctb->lock) { - ct_rwlock_wrlock(&ct->resources_lock); - nat_conn_keys_remove(&ct->nat_conn_keys, &conn->rev_key, ct->hash_basis); - ct_rwlock_unlock(&ct->resources_lock); + ct_rwlock_wrlock(&resources_lock); + nat_conn_keys_remove(&nat_conn_keys, &conn->rev_key, hash_basis); + ct_rwlock_unlock(&resources_lock); ct_lock_unlock(&ctb->lock); unsigned bucket_rev_conn = - hash_to_bucket(conn_key_hash(&conn->rev_key, ct->hash_basis)); - ct_lock_lock(&ct->buckets[bucket_rev_conn].lock); - ct_rwlock_wrlock(&ct->resources_lock); + hash_to_bucket(conn_key_hash(&conn->rev_key, hash_basis)); + ct_lock_lock(&buckets[bucket_rev_conn].lock); + ct_rwlock_wrlock(&resources_lock); long long now = time_msec(); - struct conn *rev_conn = conn_lookup(ct, &conn->rev_key, now); + struct conn *rev_conn = conn_lookup(&conn->rev_key, now); struct nat_conn_key_node *nat_conn_key_node = - nat_conn_keys_lookup(&ct->nat_conn_keys, &conn->rev_key, - ct->hash_basis); + nat_conn_keys_lookup(&nat_conn_keys, &conn->rev_key, hash_basis); /* In the unlikely event, rev conn was recreated, then skip * rev_conn cleanup. 
*/ if (rev_conn && (!nat_conn_key_node || conn_key_cmp(&nat_conn_key_node->value, &rev_conn->rev_key))) { - hmap_remove(&ct->buckets[bucket_rev_conn].connections, - &rev_conn->node); + hmap_remove(&buckets[bucket_rev_conn].connections, &rev_conn->node); free(rev_conn); } delete_conn(conn); - ct_rwlock_unlock(&ct->resources_lock); - ct_lock_unlock(&ct->buckets[bucket_rev_conn].lock); + ct_rwlock_unlock(&resources_lock); + ct_lock_unlock(&buckets[bucket_rev_conn].lock); ct_lock_lock(&ctb->lock); } +/* Must be called with 'CT_CONN_TYPE_DEFAULT' 'conn_type'. */ static void -conn_clean(struct conntrack *ct, struct conn *conn, - struct conntrack_bucket *ctb) +conn_clean(struct conn *conn, struct conntrack_bucket *ctb) OVS_REQUIRES(ctb->lock) { + ovs_assert(conn->conn_type == CT_CONN_TYPE_DEFAULT); + if (conn->alg) { - expectation_clean(ct, &conn->key, ct->hash_basis); + expectation_clean(&conn->key, hash_basis); } ovs_list_remove(&conn->exp_node); hmap_remove(&ctb->connections, &conn->node); - atomic_count_dec(&ct->n_conn); + atomic_count_dec(&n_conn); if (conn->nat_info) { - nat_clean(ct, conn, ctb); + nat_clean(conn, ctb); } else { delete_conn(conn); } @@ -843,8 +871,8 @@ ct_verify_helper(const char *helper, enum ct_alg_ctl_type ct_alg_ctl) /* This function is called with the bucket lock held. */ static struct conn * -conn_not_found(struct conntrack *ct, struct dp_packet *pkt, - struct conn_lookup_ctx *ctx, bool commit, long long now, +conn_not_found(struct dp_packet *pkt, struct conn_lookup_ctx *ctx, + bool commit, long long now, const struct nat_action_info_t *nat_action_info, struct conn *conn_for_un_nat_copy, const char *helper, @@ -865,16 +893,16 @@ conn_not_found(struct conntrack *ct, struct dp_packet *pkt, } if (commit) { - unsigned int n_conn_limit; - atomic_read_relaxed(&ct->n_conn_limit, &n_conn_limit); + unsigned int n_conn_limit_; + atomic_read_relaxed(&n_conn_limit, &n_conn_limit_); - if (atomic_count_get(&ct->n_conn) >= n_conn_limit) { + if (atomic_count_get(&n_conn) >= n_conn_limit_) { COVERAGE_INC(conntrack_full); return nc; } unsigned bucket = hash_to_bucket(ctx->hash); - nc = new_conn(&ct->buckets[bucket], pkt, &ctx->key, now); + nc = new_conn(&buckets[bucket], pkt, &ctx->key, now); ctx->conn = nc; nc->rev_key = nc->key; conn_key_reverse(&nc->rev_key); @@ -902,11 +930,11 @@ conn_not_found(struct conntrack *ct, struct dp_packet *pkt, nc->nat_info->nat_action = NAT_ACTION_DST; } *conn_for_un_nat_copy = *nc; - ct_rwlock_wrlock(&ct->resources_lock); - bool new_insert = nat_conn_keys_insert(&ct->nat_conn_keys, + ct_rwlock_wrlock(&resources_lock); + bool new_insert = nat_conn_keys_insert(&nat_conn_keys, conn_for_un_nat_copy, - ct->hash_basis); - ct_rwlock_unlock(&ct->resources_lock); + hash_basis); + ct_rwlock_unlock(&resources_lock); if (!new_insert) { char *log_msg = xasprintf("Pre-existing alg " "nat_conn_key"); @@ -916,8 +944,8 @@ conn_not_found(struct conntrack *ct, struct dp_packet *pkt, } } else { *conn_for_un_nat_copy = *nc; - ct_rwlock_wrlock(&ct->resources_lock); - bool nat_res = nat_select_range_tuple(ct, nc, + ct_rwlock_wrlock(&resources_lock); + bool nat_res = nat_select_range_tuple(nc, conn_for_un_nat_copy); if (!nat_res) { @@ -927,15 +955,15 @@ conn_not_found(struct conntrack *ct, struct dp_packet *pkt, /* Update nc with nat adjustments made to * conn_for_un_nat_copy by nat_select_range_tuple(). 
*/ *nc = *conn_for_un_nat_copy; - ct_rwlock_unlock(&ct->resources_lock); + ct_rwlock_unlock(&resources_lock); } conn_for_un_nat_copy->conn_type = CT_CONN_TYPE_UN_NAT; conn_for_un_nat_copy->nat_info = NULL; conn_for_un_nat_copy->alg = NULL; nat_packet(pkt, nc, ctx->icmp_related); } - hmap_insert(&ct->buckets[bucket].connections, &nc->node, ctx->hash); - atomic_count_inc(&ct->n_conn); + hmap_insert(&buckets[bucket].connections, &nc->node, ctx->hash); + atomic_count_inc(&n_conn); } return nc; @@ -953,7 +981,7 @@ nat_res_exhaustion: * memset() serves to document that conn_for_un_nat_copy is from * this point on unused. */ memset(conn_for_un_nat_copy, 0, sizeof *conn_for_un_nat_copy); - ct_rwlock_unlock(&ct->resources_lock); + ct_rwlock_unlock(&resources_lock); static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 5); VLOG_WARN_RL(&rl, "Unable to NAT due to tuple space exhaustion - " "if DoS attack, use firewalling and/or zone partitioning."); @@ -961,10 +989,9 @@ nat_res_exhaustion: } static bool -conn_update_state(struct conntrack *ct, struct dp_packet *pkt, - struct conn_lookup_ctx *ctx, struct conn **conn, - long long now, unsigned bucket) - OVS_REQUIRES(ct->buckets[bucket].lock) +conn_update_state(struct dp_packet *pkt, struct conn_lookup_ctx *ctx, + struct conn **conn, long long now, unsigned bucket) + OVS_REQUIRES(buckets[bucket].lock) { bool create_new_conn = false; @@ -978,7 +1005,7 @@ conn_update_state(struct conntrack *ct, struct dp_packet *pkt, pkt->md.ct_state |= CS_RELATED; } - enum ct_update_res res = conn_update(*conn, &ct->buckets[bucket], + enum ct_update_res res = conn_update(*conn, &buckets[bucket], pkt, ctx->reply, now); switch (res) { @@ -993,7 +1020,7 @@ conn_update_state(struct conntrack *ct, struct dp_packet *pkt, pkt->md.ct_state = CS_INVALID; break; case CT_UPDATE_NEW: - conn_clean(ct, *conn, &ct->buckets[bucket]); + conn_clean(*conn, &buckets[bucket]); create_new_conn = true; break; default: @@ -1004,20 +1031,20 @@ conn_update_state(struct conntrack *ct, struct dp_packet *pkt, } static void -create_un_nat_conn(struct conntrack *ct, struct conn *conn_for_un_nat_copy, - long long now, bool alg_un_nat) +create_un_nat_conn(struct conn *conn_for_un_nat_copy, long long now, + bool alg_un_nat) { struct conn *nc = xmemdup(conn_for_un_nat_copy, sizeof *nc); nc->key = conn_for_un_nat_copy->rev_key; nc->rev_key = conn_for_un_nat_copy->key; - uint32_t un_nat_hash = conn_key_hash(&nc->key, ct->hash_basis); + uint32_t un_nat_hash = conn_key_hash(&nc->key, hash_basis); unsigned un_nat_conn_bucket = hash_to_bucket(un_nat_hash); - ct_lock_lock(&ct->buckets[un_nat_conn_bucket].lock); - struct conn *rev_conn = conn_lookup(ct, &nc->key, now); + ct_lock_lock(&buckets[un_nat_conn_bucket].lock); + struct conn *rev_conn = conn_lookup(&nc->key, now); if (alg_un_nat) { if (!rev_conn) { - hmap_insert(&ct->buckets[un_nat_conn_bucket].connections, + hmap_insert(&buckets[un_nat_conn_bucket].connections, &nc->node, un_nat_hash); } else { char *log_msg = xasprintf("Unusual condition for un_nat conn " @@ -1027,14 +1054,14 @@ create_un_nat_conn(struct conntrack *ct, struct conn *conn_for_un_nat_copy, free(nc); } } else { - ct_rwlock_rdlock(&ct->resources_lock); + ct_rwlock_rdlock(&resources_lock); struct nat_conn_key_node *nat_conn_key_node = - nat_conn_keys_lookup(&ct->nat_conn_keys, &nc->key, ct->hash_basis); + nat_conn_keys_lookup(&nat_conn_keys, &nc->key, hash_basis); if (nat_conn_key_node && !conn_key_cmp(&nat_conn_key_node->value, &nc->rev_key) && !rev_conn) { - 
hmap_insert(&ct->buckets[un_nat_conn_bucket].connections, - &nc->node, un_nat_hash); + hmap_insert(&buckets[un_nat_conn_bucket].connections, &nc->node, + un_nat_hash); } else { char *log_msg = xasprintf("Unusual condition for un_nat conn " "create: nat_conn_key_node/rev_conn " @@ -1043,9 +1070,9 @@ create_un_nat_conn(struct conntrack *ct, struct conn *conn_for_un_nat_copy, free(log_msg); free(nc); } - ct_rwlock_unlock(&ct->resources_lock); + ct_rwlock_unlock(&resources_lock); } - ct_lock_unlock(&ct->buckets[un_nat_conn_bucket].lock); + ct_lock_unlock(&buckets[un_nat_conn_bucket].lock); } static void @@ -1069,11 +1096,10 @@ handle_nat(struct dp_packet *pkt, struct conn *conn, } static bool -check_orig_tuple(struct conntrack *ct, struct dp_packet *pkt, - struct conn_lookup_ctx *ctx_in, long long now, - unsigned *bucket, struct conn **conn, +check_orig_tuple(struct dp_packet *pkt, struct conn_lookup_ctx *ctx_in, + long long now, unsigned *bucket, struct conn **conn, const struct nat_action_info_t *nat_action_info) - OVS_REQUIRES(ct->buckets[*bucket].lock) + OVS_REQUIRES(buckets[(*bucket)].lock) { if ((ctx_in->key.dl_type == htons(ETH_TYPE_IP) && !pkt->md.ct_orig_tuple.ipv4.ipv4_proto) || @@ -1084,7 +1110,7 @@ check_orig_tuple(struct conntrack *ct, struct dp_packet *pkt, return false; } - ct_lock_unlock(&ct->buckets[*bucket].lock); + ct_lock_unlock(&buckets[(*bucket)].lock); struct conn_lookup_ctx ctx; memset(&ctx, 0 , sizeof ctx); ctx.conn = NULL; @@ -1123,10 +1149,10 @@ check_orig_tuple(struct conntrack *ct, struct dp_packet *pkt, ctx.key.dl_type = ctx_in->key.dl_type; ctx.key.zone = pkt->md.ct_zone; - ctx.hash = conn_key_hash(&ctx.key, ct->hash_basis); + ctx.hash = conn_key_hash(&ctx.key, hash_basis); *bucket = hash_to_bucket(ctx.hash); - ct_lock_lock(&ct->buckets[*bucket].lock); - conn_key_lookup(&ct->buckets[*bucket], &ctx, now); + ct_lock_lock(&buckets[(*bucket)].lock); + conn_key_lookup(&buckets[(*bucket)], &ctx, now); *conn = ctx.conn; return *conn ? true : false; } @@ -1138,27 +1164,27 @@ is_un_nat_conn_valid(const struct conn *un_nat_conn) } static bool -conn_update_state_alg(struct conntrack *ct, struct dp_packet *pkt, - struct conn_lookup_ctx *ctx, struct conn *conn, +conn_update_state_alg(struct dp_packet *pkt, struct conn_lookup_ctx *ctx, + struct conn *conn, const struct nat_action_info_t *nat_action_info, enum ct_alg_ctl_type ct_alg_ctl, long long now, unsigned bucket, bool *create_new_conn) - OVS_REQUIRES(ct->buckets[bucket].lock) + OVS_REQUIRES(buckets[bucket].lock) { if (is_ftp_ctl(ct_alg_ctl)) { /* Keep sequence tracking in sync with the source of the * sequence skew. 
*/ if (ctx->reply != conn->seq_skew_dir) { - handle_ftp_ctl(ct, ctx, pkt, conn, now, CT_FTP_CTL_OTHER, + handle_ftp_ctl(ctx, pkt, conn, now, CT_FTP_CTL_OTHER, !!nat_action_info); - *create_new_conn = conn_update_state(ct, pkt, ctx, &conn, now, + *create_new_conn = conn_update_state(pkt, ctx, &conn, now, bucket); } else { - *create_new_conn = conn_update_state(ct, pkt, ctx, &conn, now, + *create_new_conn = conn_update_state(pkt, ctx, &conn, now, bucket); if (*create_new_conn == false) { - handle_ftp_ctl(ct, ctx, pkt, conn, now, CT_FTP_CTL_OTHER, + handle_ftp_ctl(ctx, pkt, conn, now, CT_FTP_CTL_OTHER, !!nat_action_info); } } @@ -1168,8 +1194,7 @@ conn_update_state_alg(struct conntrack *ct, struct dp_packet *pkt, } static void -process_one(struct conntrack *ct, struct dp_packet *pkt, - struct conn_lookup_ctx *ctx, uint16_t zone, +process_one(struct dp_packet *pkt, struct conn_lookup_ctx *ctx, uint16_t zone, bool force, bool commit, long long now, const uint32_t *setmark, const struct ovs_key_ct_labels *setlabel, const struct nat_action_info_t *nat_action_info, @@ -1177,13 +1202,13 @@ process_one(struct conntrack *ct, struct dp_packet *pkt, { struct conn *conn; unsigned bucket = hash_to_bucket(ctx->hash); - ct_lock_lock(&ct->buckets[bucket].lock); - conn_key_lookup(&ct->buckets[bucket], ctx, now); + ct_lock_lock(&buckets[bucket].lock); + conn_key_lookup(&buckets[bucket], ctx, now); conn = ctx->conn; /* Delete found entry if in wrong direction. 'force' implies commit. */ if (conn && force && ctx->reply) { - conn_clean(ct, conn, &ct->buckets[bucket]); + conn_clean(conn, &buckets[bucket]); conn = NULL; } @@ -1195,13 +1220,13 @@ process_one(struct conntrack *ct, struct dp_packet *pkt, struct conn_lookup_ctx ctx2; ctx2.conn = NULL; ctx2.key = conn->rev_key; - ctx2.hash = conn_key_hash(&conn->rev_key, ct->hash_basis); + ctx2.hash = conn_key_hash(&conn->rev_key, hash_basis); - ct_lock_unlock(&ct->buckets[bucket].lock); + ct_lock_unlock(&buckets[bucket].lock); bucket = hash_to_bucket(ctx2.hash); - ct_lock_lock(&ct->buckets[bucket].lock); - conn_key_lookup(&ct->buckets[bucket], &ctx2, now); + ct_lock_lock(&buckets[bucket].lock); + conn_key_lookup(&buckets[bucket], &ctx2, now); if (ctx2.conn) { conn = ctx2.conn; @@ -1210,7 +1235,7 @@ process_one(struct conntrack *ct, struct dp_packet *pkt, * between unlock of the rev_conn and lock of the forward conn; * nothing to do. 
*/ pkt->md.ct_state |= CS_TRACKED | CS_INVALID; - ct_lock_unlock(&ct->buckets[bucket].lock); + ct_lock_unlock(&buckets[bucket].lock); return; } } @@ -1224,20 +1249,20 @@ process_one(struct conntrack *ct, struct dp_packet *pkt, helper); if (OVS_LIKELY(conn)) { - if (OVS_LIKELY(!conn_update_state_alg(ct, pkt, ctx, conn, + if (OVS_LIKELY(!conn_update_state_alg(pkt, ctx, conn, nat_action_info, ct_alg_ctl, now, bucket, &create_new_conn))) { - create_new_conn = conn_update_state(ct, pkt, ctx, &conn, now, + create_new_conn = conn_update_state(pkt, ctx, &conn, now, bucket); } if (nat_action_info && !create_new_conn) { handle_nat(pkt, conn, zone, ctx->reply, ctx->icmp_related); } - } else if (check_orig_tuple(ct, pkt, ctx, now, &bucket, &conn, - nat_action_info)) { - create_new_conn = conn_update_state(ct, pkt, ctx, &conn, now, bucket); + } else if (check_orig_tuple(pkt, ctx, now, &bucket, &conn, + nat_action_info)) { + create_new_conn = conn_update_state(pkt, ctx, &conn, now, bucket); } else { if (ctx->icmp_related) { /* An icmp related conn should always be found; no new @@ -1253,17 +1278,16 @@ process_one(struct conntrack *ct, struct dp_packet *pkt, if (OVS_UNLIKELY(create_new_conn)) { - ct_rwlock_rdlock(&ct->resources_lock); - alg_exp = expectation_lookup(&ct->alg_expectations, &ctx->key, - ct->hash_basis, + ct_rwlock_rdlock(&resources_lock); + alg_exp = expectation_lookup(&alg_expectations, &ctx->key, hash_basis, alg_src_ip_wc(ct_alg_ctl)); if (alg_exp) { alg_exp_entry = *alg_exp; alg_exp = &alg_exp_entry; } - ct_rwlock_unlock(&ct->resources_lock); + ct_rwlock_unlock(&resources_lock); - conn = conn_not_found(ct, pkt, ctx, commit, now, nat_action_info, + conn = conn_not_found(pkt, ctx, commit, now, nat_action_info, &conn_for_un_nat_copy, helper, alg_exp, ct_alg_ctl); } @@ -1283,13 +1307,13 @@ process_one(struct conntrack *ct, struct dp_packet *pkt, conn_for_expectation = *conn; } - ct_lock_unlock(&ct->buckets[bucket].lock); + ct_lock_unlock(&buckets[bucket].lock); if (is_un_nat_conn_valid(&conn_for_un_nat_copy)) { - create_un_nat_conn(ct, &conn_for_un_nat_copy, now, !!alg_exp); + create_un_nat_conn(&conn_for_un_nat_copy, now, !!alg_exp); } - handle_alg_ctl(ct, ctx, pkt, ct_alg_ctl, conn, now, !!nat_action_info, + handle_alg_ctl(ctx, pkt, ct_alg_ctl, conn, now, !!nat_action_info, &conn_for_expectation); } @@ -1302,8 +1326,8 @@ process_one(struct conntrack *ct, struct dp_packet *pkt, * elements array containing a value and a mask to set the connection mark. 
* 'setlabel' behaves similarly for the connection label.*/ int -conntrack_execute(struct conntrack *ct, struct dp_packet_batch *pkt_batch, - ovs_be16 dl_type, bool force, bool commit, uint16_t zone, +conntrack_execute(struct dp_packet_batch *pkt_batch, ovs_be16 dl_type, + bool force, bool commit, uint16_t zone, const uint32_t *setmark, const struct ovs_key_ct_labels *setlabel, ovs_be16 tp_src, ovs_be16 tp_dst, const char *helper, @@ -1315,12 +1339,12 @@ conntrack_execute(struct conntrack *ct, struct dp_packet_batch *pkt_batch, struct conn_lookup_ctx ctx; DP_PACKET_BATCH_FOR_EACH (i, packet, pkt_batch) { - if (!conn_key_extract(ct, packet, dl_type, &ctx, zone)) { + if (!conn_key_extract(packet, dl_type, &ctx, zone)) { packet->md.ct_state = CS_INVALID; write_ct_md(packet, zone, NULL, NULL, NULL); continue; } - process_one(ct, packet, &ctx, zone, force, commit, now, setmark, + process_one(packet, &ctx, zone, force, commit, now, setmark, setlabel, nat_action_info, tp_src, tp_dst, helper); } @@ -1373,8 +1397,7 @@ set_label(struct dp_packet *pkt, struct conn *conn, * LLONG_MAX if 'ctb' is empty. The return value might be smaller than 'now', * if 'limit' is reached */ static long long -sweep_bucket(struct conntrack *ct, struct conntrack_bucket *ctb, - long long now, size_t limit) +sweep_bucket(struct conntrack_bucket *ctb, long long now, size_t limit) OVS_REQUIRES(ctb->lock) { struct conn *conn, *next; @@ -1393,7 +1416,7 @@ sweep_bucket(struct conntrack *ct, struct conntrack_bucket *ctb, } break; } - conn_clean(ct, conn, ctb); + conn_clean(conn, ctb); count++; } } @@ -1406,16 +1429,16 @@ sweep_bucket(struct conntrack *ct, struct conntrack_bucket *ctb, * 'now', meaning that an internal limit has been reached, and some expired * connections have not been deleted. */ static long long -conntrack_clean(struct conntrack *ct, long long now) +conntrack_clean(long long now) { long long next_wakeup = now + CT_TM_MIN; - unsigned int n_conn_limit; + unsigned int n_conn_limit_; size_t clean_count = 0; - atomic_read_relaxed(&ct->n_conn_limit, &n_conn_limit); + atomic_read_relaxed(&n_conn_limit, &n_conn_limit_); for (unsigned i = 0; i < CONNTRACK_BUCKETS; i++) { - struct conntrack_bucket *ctb = &ct->buckets[i]; + struct conntrack_bucket *ctb = &buckets[i]; size_t prev_count; long long min_exp; @@ -1430,8 +1453,8 @@ conntrack_clean(struct conntrack *ct, long long now) * limit to 10% of the global limit equally split among buckets. If * the bucket is busier than the others, we limit to 10% of its * current size. 
*/ - min_exp = sweep_bucket(ct, ctb, now, - MAX(prev_count/10, n_conn_limit/(CONNTRACK_BUCKETS*10))); + min_exp = sweep_bucket(ctb, now, + MAX(prev_count / 10, n_conn_limit_ / (CONNTRACK_BUCKETS * 10))); clean_count += prev_count - hmap_count(&ctb->connections); if (min_exp > now) { @@ -1478,21 +1501,19 @@ next_bucket: #define CT_CLEAN_MIN_INTERVAL 200 /* 0.2 seconds */ static void * -clean_thread_main(void *f_) +clean_thread_main(void *f_ OVS_UNUSED) { - struct conntrack *ct = f_; - - while (!latch_is_set(&ct->clean_thread_exit)) { + while (!latch_is_set(&clean_thread_exit)) { long long next_wake; long long now = time_msec(); - next_wake = conntrack_clean(ct, now); + next_wake = conntrack_clean(now); if (next_wake < now) { poll_timer_wait_until(now + CT_CLEAN_MIN_INTERVAL); } else { poll_timer_wait_until(MAX(next_wake, now + CT_CLEAN_INTERVAL)); } - latch_wait(&ct->clean_thread_exit); + latch_wait(&clean_thread_exit); poll_block(); } @@ -1898,7 +1919,7 @@ extract_l4(struct conn_key *key, const void *data, size_t size, bool *related, } static bool -conn_key_extract(struct conntrack *ct, struct dp_packet *pkt, ovs_be16 dl_type, +conn_key_extract(struct dp_packet *pkt, ovs_be16 dl_type, struct conn_lookup_ctx *ctx, uint16_t zone) { const struct eth_header *l2 = dp_packet_eth(pkt); @@ -1972,7 +1993,7 @@ conn_key_extract(struct conntrack *ct, struct dp_packet *pkt, ovs_be16 dl_type, /* Validate the checksum only when hwol is not supported. */ if (extract_l4(&ctx->key, l4, tail - l4, &ctx->icmp_related, l3, !hwol_good_l4_csum)) { - ctx->hash = conn_key_hash(&ctx->key, ct->hash_basis); + ctx->hash = conn_key_hash(&ctx->key, hash_basis); return true; } } @@ -2113,8 +2134,7 @@ nat_range_hash(const struct conn *conn, uint32_t basis) } static bool -nat_select_range_tuple(struct conntrack *ct, const struct conn *conn, - struct conn *nat_conn) +nat_select_range_tuple(const struct conn *conn, struct conn *nat_conn) { enum { MIN_NAT_EPHEMERAL_PORT = 1024, MAX_NAT_EPHEMERAL_PORT = 65535 }; @@ -2122,7 +2142,7 @@ nat_select_range_tuple(struct conntrack *ct, const struct conn *conn, uint16_t min_port; uint16_t max_port; uint16_t first_port; - uint32_t hash = nat_range_hash(conn, ct->hash_basis); + uint32_t hash = nat_range_hash(conn, hash_basis); if ((conn->nat_info->nat_action & NAT_ACTION_SRC) && (!(conn->nat_info->nat_action & NAT_ACTION_SRC_PORT))) { @@ -2191,8 +2211,8 @@ nat_select_range_tuple(struct conntrack *ct, const struct conn *conn, nat_conn->rev_key.src.port = htons(port); } - bool new_insert = nat_conn_keys_insert(&ct->nat_conn_keys, nat_conn, - ct->hash_basis); + bool new_insert = nat_conn_keys_insert(&nat_conn_keys, nat_conn, + hash_basis); if (new_insert) { return true; } else if (!all_ports_tried) { @@ -2235,16 +2255,16 @@ nat_select_range_tuple(struct conntrack *ct, const struct conn *conn, return false; } -/* This function must be called with the ct->resources lock taken. */ +/* This function must be called with the resources lock taken. 
*/ static struct nat_conn_key_node * -nat_conn_keys_lookup(struct hmap *nat_conn_keys, +nat_conn_keys_lookup(struct hmap *nat_conn_keys_, const struct conn_key *key, uint32_t basis) { struct nat_conn_key_node *nat_conn_key_node; HMAP_FOR_EACH_WITH_HASH (nat_conn_key_node, node, - conn_key_hash(key, basis), nat_conn_keys) { + conn_key_hash(key, basis), nat_conn_keys_) { if (!conn_key_cmp(&nat_conn_key_node->key, key)) { return nat_conn_key_node; } @@ -2252,37 +2272,38 @@ nat_conn_keys_lookup(struct hmap *nat_conn_keys, return NULL; } -/* This function must be called with the ct->resources lock taken. */ +/* This function must be called with the resources lock taken. */ static bool -nat_conn_keys_insert(struct hmap *nat_conn_keys, const struct conn *nat_conn, +nat_conn_keys_insert(struct hmap *nat_conn_keys_, const struct conn *nat_conn, uint32_t basis) { struct nat_conn_key_node *nat_conn_key_node = - nat_conn_keys_lookup(nat_conn_keys, &nat_conn->rev_key, basis); + nat_conn_keys_lookup(nat_conn_keys_, &nat_conn->rev_key, basis); if (!nat_conn_key_node) { - struct nat_conn_key_node *nat_conn_key = xzalloc(sizeof *nat_conn_key); + struct nat_conn_key_node *nat_conn_key = + xzalloc(sizeof *nat_conn_key); nat_conn_key->key = nat_conn->rev_key; nat_conn_key->value = nat_conn->key; - hmap_insert(nat_conn_keys, &nat_conn_key->node, + hmap_insert(nat_conn_keys_, &nat_conn_key->node, conn_key_hash(&nat_conn_key->key, basis)); return true; } return false; } -/* This function must be called with the ct->resources write lock taken. */ +/* This function must be called with the resources write lock taken. */ static void -nat_conn_keys_remove(struct hmap *nat_conn_keys, +nat_conn_keys_remove(struct hmap *nat_conn_keys_, const struct conn_key *key, uint32_t basis) { struct nat_conn_key_node *nat_conn_key_node; HMAP_FOR_EACH_WITH_HASH (nat_conn_key_node, node, - conn_key_hash(key, basis), nat_conn_keys) { + conn_key_hash(key, basis), nat_conn_keys_) { if (!conn_key_cmp(&nat_conn_key_node->key, key)) { - hmap_remove(nat_conn_keys, &nat_conn_key_node->node); + hmap_remove(nat_conn_keys_, &nat_conn_key_node->node); free(nat_conn_key_node); return; } @@ -2476,8 +2497,8 @@ conn_to_ct_dpif_entry(const struct conn *conn, struct ct_dpif_entry *entry, } int -conntrack_dump_start(struct conntrack *ct, struct conntrack_dump *dump, - const uint16_t *pzone, int *ptot_bkts) +conntrack_dump_start(struct conntrack_dump *dump, const uint16_t *pzone, + int *ptot_bkts) { memset(dump, 0, sizeof(*dump)); @@ -2486,7 +2507,6 @@ conntrack_dump_start(struct conntrack *ct, struct conntrack_dump *dump, dump->filter_zone = true; } - dump->ct = ct; *ptot_bkts = CONNTRACK_BUCKETS; return 0; } @@ -2494,17 +2514,16 @@ conntrack_dump_start(struct conntrack *ct, struct conntrack_dump *dump, int conntrack_dump_next(struct conntrack_dump *dump, struct ct_dpif_entry *entry) { - struct conntrack *ct = dump->ct; long long now = time_msec(); while (dump->bucket < CONNTRACK_BUCKETS) { struct hmap_node *node; - ct_lock_lock(&ct->buckets[dump->bucket].lock); + ct_lock_lock(&buckets[dump->bucket].lock); for (;;) { struct conn *conn; - node = hmap_at_position(&ct->buckets[dump->bucket].connections, + node = hmap_at_position(&buckets[dump->bucket].connections, &dump->bucket_pos); if (!node) { break; @@ -2518,7 +2537,7 @@ conntrack_dump_next(struct conntrack_dump *dump, struct ct_dpif_entry *entry) /* Else continue, until we find an entry in the appropriate zone * or the bucket has been scanned completely. 
*/ } - ct_lock_unlock(&ct->buckets[dump->bucket].lock); + ct_lock_unlock(&buckets[dump->bucket].lock); if (!node) { memset(&dump->bucket_pos, 0, sizeof dump->bucket_pos); @@ -2537,71 +2556,71 @@ conntrack_dump_done(struct conntrack_dump *dump OVS_UNUSED) } int -conntrack_flush(struct conntrack *ct, const uint16_t *zone) +conntrack_flush(const uint16_t *zone) { for (unsigned i = 0; i < CONNTRACK_BUCKETS; i++) { struct conn *conn, *next; - ct_lock_lock(&ct->buckets[i].lock); - HMAP_FOR_EACH_SAFE (conn, next, node, &ct->buckets[i].connections) { + ct_lock_lock(&buckets[i].lock); + HMAP_FOR_EACH_SAFE (conn, next, node, &buckets[i].connections) { if ((!zone || *zone == conn->key.zone) && (conn->conn_type == CT_CONN_TYPE_DEFAULT)) { - conn_clean(ct, conn, &ct->buckets[i]); + conn_clean(conn, &buckets[i]); } } - ct_lock_unlock(&ct->buckets[i].lock); + ct_lock_unlock(&buckets[i].lock); } return 0; } int -conntrack_flush_tuple(struct conntrack *ct, const struct ct_dpif_tuple *tuple, - uint16_t zone) +conntrack_flush_tuple(const struct ct_dpif_tuple *tuple, uint16_t zone) { struct conn_lookup_ctx ctx; int error = 0; memset(&ctx, 0, sizeof(ctx)); tuple_to_conn_key(tuple, zone, &ctx.key); - ctx.hash = conn_key_hash(&ctx.key, ct->hash_basis); + ctx.hash = conn_key_hash(&ctx.key, hash_basis); unsigned bucket = hash_to_bucket(ctx.hash); - ct_lock_lock(&ct->buckets[bucket].lock); - conn_key_lookup(&ct->buckets[bucket], &ctx, time_msec()); - if (ctx.conn) { - conn_clean(ct, ctx.conn, &ct->buckets[bucket]); + ct_lock_lock(&buckets[bucket].lock); + conn_key_lookup(&buckets[bucket], &ctx, time_msec()); + if (ctx.conn && ctx.conn->conn_type == CT_CONN_TYPE_DEFAULT) { + conn_clean(ctx.conn, &buckets[bucket]); } else { + VLOG_WARN("Must flush tuple using the original pre-NATed tuple"); error = ENOENT; } - ct_lock_unlock(&ct->buckets[bucket].lock); + ct_lock_unlock(&buckets[bucket].lock); return error; } int -conntrack_set_maxconns(struct conntrack *ct, uint32_t maxconns) +conntrack_set_maxconns(uint32_t maxconns) { - atomic_store_relaxed(&ct->n_conn_limit, maxconns); + atomic_store_relaxed(&n_conn_limit, maxconns); return 0; } int -conntrack_get_maxconns(struct conntrack *ct, uint32_t *maxconns) +conntrack_get_maxconns(uint32_t *maxconns) { - atomic_read_relaxed(&ct->n_conn_limit, maxconns); + atomic_read_relaxed(&n_conn_limit, maxconns); return 0; } int -conntrack_get_nconns(struct conntrack *ct, uint32_t *nconns) +conntrack_get_nconns(uint32_t *nconns) { - *nconns = atomic_count_get(&ct->n_conn); + *nconns = atomic_count_get(&n_conn); return 0; } -/* This function must be called with the ct->resources read lock taken. */ +/* This function must be called with the resources read lock taken. */ static struct alg_exp_node * -expectation_lookup(struct hmap *alg_expectations, const struct conn_key *key, +expectation_lookup(struct hmap *alg_expectations_, const struct conn_key *key, uint32_t basis, bool src_ip_wc) { struct conn_key check_key = *key; @@ -2615,7 +2634,7 @@ expectation_lookup(struct hmap *alg_expectations, const struct conn_key *key, HMAP_FOR_EACH_WITH_HASH (alg_exp_node, node, conn_key_hash(&check_key, basis), - alg_expectations) { + alg_expectations_) { if (!conn_key_cmp(&alg_exp_node->key, &check_key)) { return alg_exp_node; } @@ -2623,25 +2642,25 @@ expectation_lookup(struct hmap *alg_expectations, const struct conn_key *key, return NULL; } -/* This function must be called with the ct->resources write lock taken. */ +/* This function must be called with the resources write lock taken. 
*/ static void -expectation_remove(struct hmap *alg_expectations, +expectation_remove(struct hmap *alg_expectations_, const struct conn_key *key, uint32_t basis) { struct alg_exp_node *alg_exp_node; HMAP_FOR_EACH_WITH_HASH (alg_exp_node, node, conn_key_hash(key, basis), - alg_expectations) { + alg_expectations_) { if (!conn_key_cmp(&alg_exp_node->key, key)) { - hmap_remove(alg_expectations, &alg_exp_node->node); + hmap_remove(alg_expectations_, &alg_exp_node->node); break; } } } -/* This function must be called with the ct->resources read lock taken. */ +/* This function must be called with the resources read lock taken. */ static struct alg_exp_node * -expectation_ref_lookup_unique(const struct hindex *alg_expectation_refs, +expectation_ref_lookup_unique(const struct hindex *alg_expectation_refs_, const struct conn_key *master_key, const struct conn_key *alg_exp_key, uint32_t basis) @@ -2650,7 +2669,7 @@ expectation_ref_lookup_unique(const struct hindex *alg_expectation_refs, HINDEX_FOR_EACH_WITH_HASH (alg_exp_node, node_ref, conn_key_hash(master_key, basis), - alg_expectation_refs) { + alg_expectation_refs_) { if (!conn_key_cmp(&alg_exp_node->master_key, master_key) && !conn_key_cmp(&alg_exp_node->key, alg_exp_key)) { return alg_exp_node; @@ -2659,44 +2678,42 @@ expectation_ref_lookup_unique(const struct hindex *alg_expectation_refs, return NULL; } -/* This function must be called with the ct->resources write lock taken. */ +/* This function must be called with the resources write lock taken. */ static void -expectation_ref_create(struct hindex *alg_expectation_refs, +expectation_ref_create(struct hindex *alg_expectation_refs_, struct alg_exp_node *alg_exp_node, uint32_t basis) { - if (!expectation_ref_lookup_unique(alg_expectation_refs, + if (!expectation_ref_lookup_unique(alg_expectation_refs_, &alg_exp_node->master_key, &alg_exp_node->key, basis)) { - hindex_insert(alg_expectation_refs, &alg_exp_node->node_ref, + hindex_insert(alg_expectation_refs_, &alg_exp_node->node_ref, conn_key_hash(&alg_exp_node->master_key, basis)); } } static void -expectation_clean(struct conntrack *ct, const struct conn_key *master_key, - uint32_t basis) +expectation_clean(const struct conn_key *master_key, uint32_t basis) { - ct_rwlock_wrlock(&ct->resources_lock); + ct_rwlock_wrlock(&resources_lock); struct alg_exp_node *node, *next; HINDEX_FOR_EACH_WITH_HASH_SAFE (node, next, node_ref, conn_key_hash(master_key, basis), - &ct->alg_expectation_refs) { + &alg_expectation_refs) { if (!conn_key_cmp(&node->master_key, master_key)) { - expectation_remove(&ct->alg_expectations, &node->key, basis); - hindex_remove(&ct->alg_expectation_refs, &node->node_ref); + expectation_remove(&alg_expectations, &node->key, basis); + hindex_remove(&alg_expectation_refs, &node->node_ref); free(node); } } - ct_rwlock_unlock(&ct->resources_lock); + ct_rwlock_unlock(&resources_lock); } static void -expectation_create(struct conntrack *ct, ovs_be16 dst_port, - const struct conn *master_conn, bool reply, bool src_ip_wc, - bool skip_nat) +expectation_create(ovs_be16 dst_port, const struct conn *master_conn, + bool reply, bool src_ip_wc, bool skip_nat) { struct ct_addr src_addr; struct ct_addr dst_addr; @@ -2739,21 +2756,21 @@ expectation_create(struct conntrack *ct, ovs_be16 dst_port, /* Take the write lock here because it is almost 100% * likely that the lookup will fail and * expectation_create() will be called below. 
*/ - ct_rwlock_wrlock(&ct->resources_lock); + ct_rwlock_wrlock(&resources_lock); struct alg_exp_node *alg_exp = expectation_lookup( - &ct->alg_expectations, &alg_exp_node->key, ct->hash_basis, src_ip_wc); + &alg_expectations, &alg_exp_node->key, hash_basis, src_ip_wc); if (alg_exp) { free(alg_exp_node); - ct_rwlock_unlock(&ct->resources_lock); + ct_rwlock_unlock(&resources_lock); return; } alg_exp_node->alg_nat_repl_addr = alg_nat_repl_addr; - hmap_insert(&ct->alg_expectations, &alg_exp_node->node, - conn_key_hash(&alg_exp_node->key, ct->hash_basis)); - expectation_ref_create(&ct->alg_expectation_refs, alg_exp_node, - ct->hash_basis); - ct_rwlock_unlock(&ct->resources_lock); + hmap_insert(&alg_expectations, &alg_exp_node->node, + conn_key_hash(&alg_exp_node->key, hash_basis)); + expectation_ref_create(&alg_expectation_refs, alg_exp_node, + hash_basis); + ct_rwlock_unlock(&resources_lock); } static uint8_t @@ -2881,8 +2898,7 @@ detect_ftp_ctl_type(const struct conn_lookup_ctx *ctx, } static enum ftp_ctl_pkt -process_ftp_ctl_v4(struct conntrack *ct, - struct dp_packet *pkt, +process_ftp_ctl_v4(struct dp_packet *pkt, const struct conn *conn_for_expectation, ovs_be32 *v4_addr_rep, char **ftp_data_v4_start, @@ -3011,7 +3027,7 @@ process_ftp_ctl_v4(struct conntrack *ct, return CT_FTP_CTL_INVALID; } - expectation_create(ct, port, conn_for_expectation, + expectation_create(port, conn_for_expectation, !!(pkt->md.ct_state & CS_REPLY_DIR), false, false); return CT_FTP_CTL_INTEREST; } @@ -3026,8 +3042,7 @@ skip_ipv6_digits(char *str) } static enum ftp_ctl_pkt -process_ftp_ctl_v6(struct conntrack *ct, - struct dp_packet *pkt, +process_ftp_ctl_v6(struct dp_packet *pkt, const struct conn *conn_for_expectation, struct ct_addr *v6_addr_rep, char **ftp_data_start, @@ -3114,7 +3129,7 @@ process_ftp_ctl_v6(struct conntrack *ct, OVS_NOT_REACHED(); } - expectation_create(ct, port, conn_for_expectation, + expectation_create(port, conn_for_expectation, !!(pkt->md.ct_state & CS_REPLY_DIR), false, false); return CT_FTP_CTL_INTEREST; } @@ -3162,8 +3177,7 @@ repl_ftp_v6_addr(struct dp_packet *pkt, struct ct_addr v6_addr_rep, } static void -handle_ftp_ctl(struct conntrack *ct, const struct conn_lookup_ctx *ctx, - struct dp_packet *pkt, +handle_ftp_ctl(const struct conn_lookup_ctx *ctx, struct dp_packet *pkt, const struct conn *conn_for_expectation, long long now, enum ftp_ctl_pkt ftp_ctl, bool nat) { @@ -3192,13 +3206,13 @@ handle_ftp_ctl(struct conntrack *ct, const struct conn_lookup_ctx *ctx, } else if (ftp_ctl == CT_FTP_CTL_INTEREST) { enum ftp_ctl_pkt rc; if (ctx->key.dl_type == htons(ETH_TYPE_IPV6)) { - rc = process_ftp_ctl_v6(ct, pkt, conn_for_expectation, - &v6_addr_rep, &ftp_data_start, + rc = process_ftp_ctl_v6(pkt, conn_for_expectation, &v6_addr_rep, + &ftp_data_start, &addr_offset_from_ftp_data_start, &addr_size, &mode); } else { - rc = process_ftp_ctl_v4(ct, pkt, conn_for_expectation, - &v4_addr_rep, &ftp_data_start, + rc = process_ftp_ctl_v4(pkt, conn_for_expectation, &v4_addr_rep, + &ftp_data_start, &addr_offset_from_ftp_data_start); } if (rc == CT_FTP_CTL_INVALID) { @@ -3217,7 +3231,7 @@ handle_ftp_ctl(struct conntrack *ct, const struct conn_lookup_ctx *ctx, ip_len = ntohs(nh6->ip6_ctlun.ip6_un1.ip6_un1_plen); ip_len += seq_skew; nh6->ip6_ctlun.ip6_un1.ip6_un1_plen = htons(ip_len); - conn_seq_skew_set(ct, &conn_for_expectation->key, now, + conn_seq_skew_set(&conn_for_expectation->key, now, seq_skew, ctx->reply); } } else { @@ -3229,7 +3243,7 @@ handle_ftp_ctl(struct conntrack *ct, const struct 
conn_lookup_ctx *ctx, l3_hdr->ip_csum = recalc_csum16(l3_hdr->ip_csum, l3_hdr->ip_tot_len, htons(ip_len)); l3_hdr->ip_tot_len = htons(ip_len); - conn_seq_skew_set(ct, &conn_for_expectation->key, now, + conn_seq_skew_set(&conn_for_expectation->key, now, seq_skew, ctx->reply); } } @@ -3286,14 +3300,13 @@ handle_ftp_ctl(struct conntrack *ct, const struct conn_lookup_ctx *ctx, } static void -handle_tftp_ctl(struct conntrack *ct, - const struct conn_lookup_ctx *ctx OVS_UNUSED, +handle_tftp_ctl(const struct conn_lookup_ctx *ctx OVS_UNUSED, struct dp_packet *pkt, const struct conn *conn_for_expectation, long long now OVS_UNUSED, enum ftp_ctl_pkt ftp_ctl OVS_UNUSED, bool nat OVS_UNUSED) { - expectation_create(ct, conn_for_expectation->key.src.port, + expectation_create(conn_for_expectation->key.src.port, conn_for_expectation, !!(pkt->md.ct_state & CS_REPLY_DIR), false, false); } diff --git a/lib/conntrack.h b/lib/conntrack.h index e3a5dcc..80ba80e 100644 --- a/lib/conntrack.h +++ b/lib/conntrack.h @@ -38,21 +38,17 @@ * Usage * ===== * - * struct conntrack ct; - * * Initialization: * - * conntrack_init(&ct); + * conntrack_init(); * * It is necessary to periodically issue a call to * - * conntrack_run(&ct); - * * to allow the module to clean up expired connections. * * To send a group of packets through the connection tracker: * - * conntrack_execute(&ct, pkts, n_pkts, ...); + * conntrack_execute(pkt_batch, ...); * * Thread-safety * ============= @@ -62,8 +58,6 @@ struct dp_packet_batch; -struct conntrack; - struct ct_addr { union { ovs_16aligned_be32 ipv4; @@ -88,11 +82,10 @@ struct nat_action_info_t { uint16_t nat_action; }; -void conntrack_init(struct conntrack *); -void conntrack_destroy(struct conntrack *); - -int conntrack_execute(struct conntrack *ct, struct dp_packet_batch *pkt_batch, - ovs_be16 dl_type, bool force, bool commit, uint16_t zone, +void conntrack_init(void); +void conntrack_destroy(void); +int conntrack_execute(struct dp_packet_batch *pkt_batch, ovs_be16 dl_type, + bool force, bool commit, uint16_t zone, const uint32_t *setmark, const struct ovs_key_ct_labels *setlabel, ovs_be16 tp_src, ovs_be16 tp_dst, const char *helper, @@ -111,17 +104,15 @@ struct conntrack_dump { struct ct_dpif_entry; struct ct_dpif_tuple; -int conntrack_dump_start(struct conntrack *, struct conntrack_dump *, +int conntrack_dump_start(struct conntrack_dump *, const uint16_t *pzone, int *); int conntrack_dump_next(struct conntrack_dump *, struct ct_dpif_entry *); int conntrack_dump_done(struct conntrack_dump *); - -int conntrack_flush(struct conntrack *, const uint16_t *zone); -int conntrack_flush_tuple(struct conntrack *, const struct ct_dpif_tuple *, - uint16_t zone); -int conntrack_set_maxconns(struct conntrack *ct, uint32_t maxconns); -int conntrack_get_maxconns(struct conntrack *ct, uint32_t *maxconns); -int conntrack_get_nconns(struct conntrack *ct, uint32_t *nconns); +int conntrack_flush(const uint16_t *zone); +int conntrack_flush_tuple(const struct ct_dpif_tuple *, uint16_t zone); +int conntrack_set_maxconns(uint32_t maxconns); +int conntrack_get_maxconns(uint32_t *maxconns); +int conntrack_get_nconns(uint32_t *nconns); /* 'struct ct_lock' is a wrapper for an adaptive mutex. It's useful to try * different types of locks (e.g. spinlocks) */ @@ -222,77 +213,4 @@ enum ct_timeout { N_CT_TM }; -/* Locking: - * - * The connections are kept in different buckets, which are completely - * independent. The connection bucket is determined by the hash of its key. - * - * Each bucket has two locks. 
Acquisition order is, from outermost to - * innermost: - * - * cleanup_mutex - * lock - * - * */ -struct conntrack_bucket { - /* Protects 'connections' and 'exp_lists'. Used in the fast path */ - struct ct_lock lock; - /* Contains the connections in the bucket, indexed by 'struct conn_key' */ - struct hmap connections OVS_GUARDED; - /* For each possible timeout we have a list of connections. When the - * timeout of a connection is updated, we move it to the back of the list. - * Since the connection in a list have the same relative timeout, the list - * will be ordered, with the oldest connections to the front. */ - struct ovs_list exp_lists[N_CT_TM] OVS_GUARDED; - - /* Protects 'next_cleanup'. Used to make sure that there's only one thread - * performing the cleanup. */ - struct ovs_mutex cleanup_mutex; - long long next_cleanup OVS_GUARDED; -}; - -#define CONNTRACK_BUCKETS_SHIFT 8 -#define CONNTRACK_BUCKETS (1 << CONNTRACK_BUCKETS_SHIFT) - -struct conntrack { - /* Independent buckets containing the connections */ - struct conntrack_bucket buckets[CONNTRACK_BUCKETS]; - - /* Salt for hashing a connection key. */ - uint32_t hash_basis; - - /* The thread performing periodic cleanup of the connection - * tracker */ - pthread_t clean_thread; - /* Latch to destroy the 'clean_thread' */ - struct latch clean_thread_exit; - - /* Number of connections currently in the connection tracker. */ - atomic_count n_conn; - /* Connections limit. When this limit is reached, no new connection - * will be accepted. */ - atomic_uint n_conn_limit; - - /* The following resources are referenced during nat connection - * creation and deletion. */ - struct hmap nat_conn_keys OVS_GUARDED; - /* Hash table for alg expectations. Expectations are created - * by control connections to help create data connections. */ - struct hmap alg_expectations OVS_GUARDED; - /* Used to lookup alg expectations from the control context. */ - struct hindex alg_expectation_refs OVS_GUARDED; - /* Expiry list for alg expectations. */ - struct ovs_list alg_exp_list OVS_GUARDED; - /* This lock is used during NAT connection creation and deletion; - * it is taken after a bucket lock and given back before that - * bucket unlock. - * This lock is similarly used to guard alg_expectations and - * alg_expectation_refs. If a bucket lock is also held during - * the normal code flow, then is must be taken first and released - * last. 
- */ - struct ct_rwlock resources_lock; - -}; - #endif /* conntrack.h */ diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c index 1564db9..8bc750c 100644 --- a/lib/dpif-netdev.c +++ b/lib/dpif-netdev.c @@ -366,8 +366,6 @@ struct dp_netdev { char *pmd_cmask; uint64_t last_tnl_conf_seq; - - struct conntrack conntrack; }; static void meter_lock(const struct dp_netdev *dp, uint32_t meter_id) @@ -1495,7 +1493,7 @@ create_dp_netdev(const char *name, const struct dpif_class *class, dp->upcall_aux = NULL; dp->upcall_cb = NULL; - conntrack_init(&dp->conntrack); + conntrack_init(); atomic_init(&dp->emc_insert_min, DEFAULT_EM_FLOW_INSERT_MIN); atomic_init(&dp->tx_flush_interval, DEFAULT_TX_FLUSH_INTERVAL); @@ -1613,7 +1611,7 @@ dp_netdev_free(struct dp_netdev *dp) ovs_mutex_destroy(&dp->non_pmd_mutex); ovsthread_key_delete(dp->per_pmd_key); - conntrack_destroy(&dp->conntrack); + conntrack_destroy(); seq_destroy(dp->reconfigure_seq); @@ -6786,8 +6784,8 @@ dp_execute_cb(void *aux_, struct dp_packet_batch *packets_, VLOG_WARN_RL(&rl, "NAT specified without commit."); } - conntrack_execute(&dp->conntrack, packets_, aux->flow->dl_type, force, - commit, zone, setmark, setlabel, aux->flow->tp_src, + conntrack_execute(packets_, aux->flow->dl_type, force, commit, zone, + setmark, setlabel, aux->flow->tp_src, aux->flow->tp_dst, helper, nat_action_info_ref, pmd->ctx.now / 1000); break; @@ -6836,23 +6834,16 @@ dp_netdev_execute_actions(struct dp_netdev_pmd_thread *pmd, struct dp_netdev_ct_dump { struct ct_dpif_dump_state up; struct conntrack_dump dump; - struct conntrack *ct; - struct dp_netdev *dp; }; static int -dpif_netdev_ct_dump_start(struct dpif *dpif, struct ct_dpif_dump_state **dump_, +dpif_netdev_ct_dump_start(struct dpif *dpif OVS_UNUSED, + struct ct_dpif_dump_state **dump_, const uint16_t *pzone, int *ptot_bkts) { - struct dp_netdev *dp = get_dp_netdev(dpif); - struct dp_netdev_ct_dump *dump; - - dump = xzalloc(sizeof *dump); - dump->dp = dp; - dump->ct = &dp->conntrack; - - conntrack_dump_start(&dp->conntrack, &dump->dump, pzone, ptot_bkts); + struct dp_netdev_ct_dump *dump = xzalloc(sizeof *dump); + conntrack_dump_start(&dump->dump, pzone, ptot_bkts); *dump_ = &dump->up; return 0; @@ -6887,39 +6878,31 @@ dpif_netdev_ct_dump_done(struct dpif *dpif OVS_UNUSED, } static int -dpif_netdev_ct_flush(struct dpif *dpif, const uint16_t *zone, +dpif_netdev_ct_flush(struct dpif *dpif OVS_UNUSED, const uint16_t *zone, const struct ct_dpif_tuple *tuple) { - struct dp_netdev *dp = get_dp_netdev(dpif); - if (tuple) { - return conntrack_flush_tuple(&dp->conntrack, tuple, zone ? *zone : 0); + return conntrack_flush_tuple(tuple, zone ? 
*zone : 0); } - return conntrack_flush(&dp->conntrack, zone); + return conntrack_flush(zone); } static int -dpif_netdev_ct_set_maxconns(struct dpif *dpif, uint32_t maxconns) +dpif_netdev_ct_set_maxconns(struct dpif *dpif OVS_UNUSED, uint32_t maxconns) { - struct dp_netdev *dp = get_dp_netdev(dpif); - - return conntrack_set_maxconns(&dp->conntrack, maxconns); + return conntrack_set_maxconns(maxconns); } static int -dpif_netdev_ct_get_maxconns(struct dpif *dpif, uint32_t *maxconns) +dpif_netdev_ct_get_maxconns(struct dpif *dpif OVS_UNUSED, uint32_t *maxconns) { - struct dp_netdev *dp = get_dp_netdev(dpif); - - return conntrack_get_maxconns(&dp->conntrack, maxconns); + return conntrack_get_maxconns(maxconns); } static int -dpif_netdev_ct_get_nconns(struct dpif *dpif, uint32_t *nconns) +dpif_netdev_ct_get_nconns(struct dpif *dpif OVS_UNUSED, uint32_t *nconns) { - struct dp_netdev *dp = get_dp_netdev(dpif); - - return conntrack_get_nconns(&dp->conntrack, nconns); + return conntrack_get_nconns(nconns); } const struct dpif_class dpif_netdev_class = { diff --git a/tests/test-conntrack.c b/tests/test-conntrack.c index 24d0bb4..b16d756 100644 --- a/tests/test-conntrack.c +++ b/tests/test-conntrack.c @@ -72,7 +72,6 @@ struct thread_aux { unsigned tid; }; -static struct conntrack ct; static unsigned long n_threads, n_pkts, batch_size; static bool change_conn = false; static struct ovs_barrier barrier; @@ -89,8 +88,8 @@ ct_thread_main(void *aux_) pkt_batch = prepare_packets(batch_size, change_conn, aux->tid, &dl_type); ovs_barrier_block(&barrier); for (i = 0; i < n_pkts; i += batch_size) { - conntrack_execute(&ct, pkt_batch, dl_type, false, true, 0, NULL, NULL, - 0, 0, NULL, NULL, now); + conntrack_execute(pkt_batch, dl_type, false, true, 0, NULL, NULL, 0, + 0, NULL, NULL, now); } ovs_barrier_block(&barrier); destroy_packets(pkt_batch); @@ -124,7 +123,7 @@ test_benchmark(struct ovs_cmdl_context *ctx) threads = xcalloc(n_threads, sizeof *threads); ovs_barrier_init(&barrier, n_threads + 1); - conntrack_init(&ct); + conntrack_init(); /* Create threads */ for (i = 0; i < n_threads; i++) { @@ -144,14 +143,13 @@ test_benchmark(struct ovs_cmdl_context *ctx) xpthread_join(threads[i].thread, NULL); } - conntrack_destroy(&ct); + conntrack_destroy(); ovs_barrier_destroy(&barrier); free(threads); } static void -pcap_batch_execute_conntrack(struct conntrack *ct_, - struct dp_packet_batch *pkt_batch) +pcap_batch_execute_conntrack(struct dp_packet_batch *pkt_batch) { struct dp_packet_batch new_batch; ovs_be16 dl_type = htons(0); @@ -173,16 +171,16 @@ pcap_batch_execute_conntrack(struct conntrack *ct_, } if (flow.dl_type != dl_type) { - conntrack_execute(ct_, &new_batch, dl_type, false, true, 0, - NULL, NULL, 0, 0, NULL, NULL, now); + conntrack_execute(&new_batch, dl_type, false, true, 0, NULL, NULL, + 0, 0, NULL, NULL, now); dp_packet_batch_init(&new_batch); } new_batch.packets[new_batch.count++] = packet;; } if (!dp_packet_batch_is_empty(&new_batch)) { - conntrack_execute(ct_, &new_batch, dl_type, false, true, 0, NULL, NULL, - 0, 0, NULL, NULL, now); + conntrack_execute(&new_batch, dl_type, false, true, 0, NULL, NULL, 0, + 0, NULL, NULL, now); } } @@ -211,7 +209,7 @@ test_pcap(struct ovs_cmdl_context *ctx) fatal_signal_init(); - conntrack_init(&ct); + conntrack_init(); total_count = 0; for (;;) { struct dp_packet *packet; @@ -229,7 +227,7 @@ test_pcap(struct ovs_cmdl_context *ctx) if (!batch->count) { break; } - pcap_batch_execute_conntrack(&ct, batch); + pcap_batch_execute_conntrack(batch); DP_PACKET_BATCH_FOR_EACH 
(i, packet, batch) { struct ds ds = DS_EMPTY_INITIALIZER; @@ -244,7 +242,7 @@ test_pcap(struct ovs_cmdl_context *ctx) dp_packet_delete_batch(batch, true); } - conntrack_destroy(&ct); + conntrack_destroy(); ovs_pcap_close(pcap); } From patchwork Wed Nov 28 16:31:51 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Darrell Ball X-Patchwork-Id: 1004627
From: Darrell Ball To: dlu998@gmail.com, dev@openvswitch.org Date: Wed, 28 Nov 2018 08:31:51 -0800 Message-Id: <1543422714-100901-3-git-send-email-dlu998@gmail.com> X-Mailer: git-send-email 1.9.1 In-Reply-To: <1543422714-100901-1-git-send-email-dlu998@gmail.com> References: <1543422714-100901-1-git-send-email-dlu998@gmail.com> Subject: [ovs-dev] [patch v2 2/5] conntrack: Add rcu support. For performance and code simplification reasons, add RCU support for conntrack. The array of hmaps is replaced by a cmap as part of this conversion. Using a single map also simplifies the handling of NAT and allows the removal of the nat_conn map and friends. Per-connection entry locks are introduced, which are needed in a few code paths. A subsequent patch will move the connection entry lock to the protocol-specific layer. Signed-off-by: Darrell Ball --- lib/conntrack-icmp.c | 23 +- lib/conntrack-other.c | 13 +- lib/conntrack-private.h | 120 +++--- lib/conntrack-tcp.c | 21 +- lib/conntrack.c | 964 +++++++++++++++++++----------------- lib/conntrack.h | 106 +----- 6 files changed, 471 insertions(+), 776 deletions(-) diff --git a/lib/conntrack-icmp.c b/lib/conntrack-icmp.c index 40fd1d8..fd10985 100644 --- a/lib/conntrack-icmp.c +++ b/lib/conntrack-icmp.c @@ -1,5 +1,5 @@ /* - * Copyright (c) 2015, 2016 Nicira, Inc. + * Copyright (c) 2015-2018 Nicira, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. @@ -46,16 +46,13 @@ conn_icmp_cast(const struct conn *conn) } static enum ct_update_res -icmp_conn_update(struct conn *conn_, struct conntrack_bucket *ctb, - struct dp_packet *pkt OVS_UNUSED, bool reply, long long now) +icmp_conn_update(struct conn *conn_, struct dp_packet *pkt OVS_UNUSED, + bool reply, long long now) { struct conn_icmp *conn = conn_icmp_cast(conn_); - if (reply && conn->state != ICMPS_REPLY) { - conn->state = ICMPS_REPLY; - } - - conn_update_expiration(ctb, &conn->up, icmp_timeouts[conn->state], now); + conn->state = reply ?
ICMPS_REPLY : ICMPS_FIRST; + conn_update_expiration(&conn->up, icmp_timeouts[conn->state], now); return CT_UPDATE_VALID; } @@ -79,15 +76,11 @@ icmp6_valid_new(struct dp_packet *pkt) } static struct conn * -icmp_new_conn(struct conntrack_bucket *ctb, struct dp_packet *pkt OVS_UNUSED, - long long now) +icmp_new_conn(struct dp_packet *pkt OVS_UNUSED, long long now) { - struct conn_icmp *conn; - - conn = xzalloc(sizeof *conn); + struct conn_icmp *conn = xzalloc(sizeof *conn); conn->state = ICMPS_FIRST; - - conn_init_expiration(ctb, &conn->up, icmp_timeouts[conn->state], now); + conn_init_expiration(&conn->up, icmp_timeouts[conn->state], now); return &conn->up; } diff --git a/lib/conntrack-other.c b/lib/conntrack-other.c index 2920889..813be88 100644 --- a/lib/conntrack-other.c +++ b/lib/conntrack-other.c @@ -1,5 +1,5 @@ /* - * Copyright (c) 2015, 2016 Nicira, Inc. + * Copyright (c) 2015-2018 Nicira, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. @@ -43,8 +43,8 @@ conn_other_cast(const struct conn *conn) } static enum ct_update_res -other_conn_update(struct conn *conn_, struct conntrack_bucket *ctb, - struct dp_packet *pkt OVS_UNUSED, bool reply, long long now) +other_conn_update(struct conn *conn_, struct dp_packet *pkt OVS_UNUSED, + bool reply, long long now) { struct conn_other *conn = conn_other_cast(conn_); @@ -54,7 +54,7 @@ other_conn_update(struct conn *conn_, struct conntrack_bucket *ctb, conn->state = OTHERS_MULTIPLE; } - conn_update_expiration(ctb, &conn->up, other_timeouts[conn->state], now); + conn_update_expiration(&conn->up, other_timeouts[conn->state], now); return CT_UPDATE_VALID; } @@ -66,15 +66,14 @@ other_valid_new(struct dp_packet *pkt OVS_UNUSED) } static struct conn * -other_new_conn(struct conntrack_bucket *ctb, struct dp_packet *pkt OVS_UNUSED, - long long now) +other_new_conn(struct dp_packet *pkt OVS_UNUSED, long long now) { struct conn_other *conn; conn = xzalloc(sizeof *conn); conn->state = OTHERS_FIRST; - conn_init_expiration(ctb, &conn->up, other_timeouts[conn->state], now); + conn_init_expiration(&conn->up, other_timeouts[conn->state], now); return &conn->up; } diff --git a/lib/conntrack-private.h b/lib/conntrack-private.h index 27ece38..3d838e4 100644 --- a/lib/conntrack-private.h +++ b/lib/conntrack-private.h @@ -21,6 +21,7 @@ #include #include +#include "cmap.h" #include "conntrack.h" #include "ct-dpif.h" #include "openvswitch/hmap.h" @@ -51,18 +52,11 @@ BUILD_ASSERT_DECL(sizeof(struct ct_endpoint) == sizeof(struct ct_addr) + 4); struct conn_key { struct ct_endpoint src; struct ct_endpoint dst; - ovs_be16 dl_type; uint16_t zone; uint8_t nw_proto; }; -struct nat_conn_key_node { - struct hmap_node node; - struct conn_key key; - struct conn_key value; -}; - /* This is used for alg expectations; an expectation is a * context created in preparation for establishing a data * connection. The expectation is created by the control @@ -87,27 +81,43 @@ struct alg_exp_node { bool nat_rpl_dst; }; +struct OVS_LOCKABLE ct_ce_lock { + struct ovs_mutex lock; +}; + struct conn { struct conn_key key; struct conn_key rev_key; /* Only used for orig_tuple support. */ struct conn_key master_key; + struct ct_ce_lock lock; long long expiration; struct ovs_list exp_node; - struct hmap_node node; + struct cmap_node cm_node; ovs_u128 label; - /* XXX: consider flattening. */ struct nat_action_info_t *nat_info; char *alg; + struct conn *nat_conn; int seq_skew; uint32_t mark; + /* See ct_conn_type. 
*/ uint8_t conn_type; - /* TCP sequence skew due to NATTing of FTP control messages. */ - uint8_t seq_skew_dir; + /* Update expiry list id of which there are 'N_CT_TM' possible values. + * This field is used to signal an update to the specified list. The + * value 'NO_UPD_EXP_LIST' is used to indicate no update to any list. */ + uint8_t exp_list_id; + /* TCP sequence skew direction due to NATTing of FTP control messages; + * true if reply direction. */ + bool seq_skew_dir; /* True if alg data connection. */ - uint8_t alg_related; + bool alg_related; + /* Inserted into the cmap; handle theoretical expiry list race; although + * such a race would probably mean a system meltdown. */ + bool inserted; }; +#define NO_UPD_EXP_LIST 255 + enum ct_update_res { CT_UPDATE_INVALID, CT_UPDATE_VALID, @@ -119,68 +129,70 @@ enum ct_conn_type { CT_CONN_TYPE_UN_NAT, }; -/* Locking: - * - * The connections are kept in different buckets, which are completely - * independent. The connection bucket is determined by the hash of its key. - * - * Each bucket has two locks. Acquisition order is, from outermost to - * innermost: - * - * cleanup_mutex - * lock - * - * */ -struct conntrack_bucket { - /* Protects 'connections' and 'exp_lists'. Used in the fast path */ - struct ct_lock lock; - /* Contains the connections in the bucket, indexed by 'struct conn_key' */ - struct hmap connections OVS_GUARDED; - /* For each possible timeout we have a list of connections. When the - * timeout of a connection is updated, we move it to the back of the list. - * Since the connection in a list have the same relative timeout, the list - * will be ordered, with the oldest connections to the front. */ - struct ovs_list exp_lists[N_CT_TM] OVS_GUARDED; - - /* Protects 'next_cleanup'. Used to make sure that there's only one thread - * performing the cleanup. */ - struct ovs_mutex cleanup_mutex; - long long next_cleanup OVS_GUARDED; -}; +extern struct ct_l4_proto ct_proto_tcp; +extern struct ct_l4_proto ct_proto_other; +extern struct ct_l4_proto ct_proto_icmp4; +extern struct ct_l4_proto ct_proto_icmp6; struct ct_l4_proto { - struct conn *(*new_conn)(struct conntrack_bucket *, struct dp_packet *pkt, - long long now); + struct conn *(*new_conn)(struct dp_packet *pkt, long long now); bool (*valid_new)(struct dp_packet *pkt); enum ct_update_res (*conn_update)(struct conn *conn, - struct conntrack_bucket *, struct dp_packet *pkt, bool reply, long long now); void (*conn_get_protoinfo)(const struct conn *, struct ct_dpif_protoinfo *); }; -extern struct ct_l4_proto ct_proto_tcp; -extern struct ct_l4_proto ct_proto_other; -extern struct ct_l4_proto ct_proto_icmp4; -extern struct ct_l4_proto ct_proto_icmp6; +/* Timeouts: all the possible timeout states passed to update_expiration() + * are listed here. 
The name will be prefix by CT_TM_ and the value is in + * milliseconds */ +#define CT_TIMEOUTS \ + CT_TIMEOUT(TCP_FIRST_PACKET, 30 * 1000) \ + CT_TIMEOUT(TCP_OPENING, 30 * 1000) \ + CT_TIMEOUT(TCP_ESTABLISHED, 24 * 60 * 60 * 1000) \ + CT_TIMEOUT(TCP_CLOSING, 15 * 60 * 1000) \ + CT_TIMEOUT(TCP_FIN_WAIT, 45 * 1000) \ + CT_TIMEOUT(TCP_CLOSED, 30 * 1000) \ + CT_TIMEOUT(OTHER_FIRST, 60 * 1000) \ + CT_TIMEOUT(OTHER_MULTIPLE, 60 * 1000) \ + CT_TIMEOUT(OTHER_BIDIR, 30 * 1000) \ + CT_TIMEOUT(ICMP_FIRST, 60 * 1000) \ + CT_TIMEOUT(ICMP_REPLY, 30 * 1000) + +/* The smallest of the above values: it is used as an upper bound for the + * interval between two rounds of cleanup of expired entries */ +#define CT_TM_MIN (30 * 1000) + +#define CT_TIMEOUT(NAME, VAL) BUILD_ASSERT_DECL(VAL >= CT_TM_MIN); + CT_TIMEOUTS +#undef CT_TIMEOUT + +enum ct_timeout { +#define CT_TIMEOUT(NAME, VALUE) CT_TM_##NAME, + CT_TIMEOUTS +#undef CT_TIMEOUT + N_CT_TM +}; extern long long ct_timeout_val[]; +extern struct ovs_list cm_exp_lists[N_CT_TM]; +/* ct_lock must be held. */ static inline void -conn_init_expiration(struct conntrack_bucket *ctb, struct conn *conn, - enum ct_timeout tm, long long now) +conn_init_expiration(struct conn *conn, enum ct_timeout tm, long long now) { conn->expiration = now + ct_timeout_val[tm]; - ovs_list_push_back(&ctb->exp_lists[tm], &conn->exp_node); + conn->exp_list_id = NO_UPD_EXP_LIST; + ovs_list_push_back(&cm_exp_lists[tm], &conn->exp_node); } +/* The conn entry lock must be held. */ static inline void -conn_update_expiration(struct conntrack_bucket *ctb, struct conn *conn, - enum ct_timeout tm, long long now) +conn_update_expiration(struct conn *conn, enum ct_timeout tm, long long now) { - ovs_list_remove(&conn->exp_node); - conn_init_expiration(ctb, conn, tm, now); + conn->expiration = now + ct_timeout_val[tm]; + conn->exp_list_id = tm; } static inline uint32_t diff --git a/lib/conntrack-tcp.c b/lib/conntrack-tcp.c index 86d313d..19fdf1d 100644 --- a/lib/conntrack-tcp.c +++ b/lib/conntrack-tcp.c @@ -145,8 +145,8 @@ tcp_get_wscale(const struct tcp_header *tcp) } static enum ct_update_res -tcp_conn_update(struct conn *conn_, struct conntrack_bucket *ctb, - struct dp_packet *pkt, bool reply, long long now) +tcp_conn_update(struct conn *conn_, struct dp_packet *pkt, bool reply, + long long now) { struct conn_tcp *conn = conn_tcp_cast(conn_); struct tcp_header *tcp = dp_packet_l4(pkt); @@ -317,18 +317,18 @@ tcp_conn_update(struct conn *conn_, struct conntrack_bucket *ctb, if (src->state >= CT_DPIF_TCPS_FIN_WAIT_2 && dst->state >= CT_DPIF_TCPS_FIN_WAIT_2) { - conn_update_expiration(ctb, &conn->up, CT_TM_TCP_CLOSED, now); + conn_update_expiration(&conn->up, CT_TM_TCP_CLOSED, now); } else if (src->state >= CT_DPIF_TCPS_CLOSING && dst->state >= CT_DPIF_TCPS_CLOSING) { - conn_update_expiration(ctb, &conn->up, CT_TM_TCP_FIN_WAIT, now); + conn_update_expiration(&conn->up, CT_TM_TCP_FIN_WAIT, now); } else if (src->state < CT_DPIF_TCPS_ESTABLISHED || dst->state < CT_DPIF_TCPS_ESTABLISHED) { - conn_update_expiration(ctb, &conn->up, CT_TM_TCP_OPENING, now); + conn_update_expiration(&conn->up, CT_TM_TCP_OPENING, now); } else if (src->state >= CT_DPIF_TCPS_CLOSING || dst->state >= CT_DPIF_TCPS_CLOSING) { - conn_update_expiration(ctb, &conn->up, CT_TM_TCP_CLOSING, now); + conn_update_expiration(&conn->up, CT_TM_TCP_CLOSING, now); } else { - conn_update_expiration(ctb, &conn->up, CT_TM_TCP_ESTABLISHED, now); + conn_update_expiration(&conn->up, CT_TM_TCP_ESTABLISHED, now); } } else if ((dst->state < 
CT_DPIF_TCPS_SYN_SENT || dst->state >= CT_DPIF_TCPS_FIN_WAIT_2 @@ -412,8 +412,7 @@ tcp_valid_new(struct dp_packet *pkt) } static struct conn * -tcp_new_conn(struct conntrack_bucket *ctb, struct dp_packet *pkt, - long long now) +tcp_new_conn(struct dp_packet *pkt, long long now) { struct conn_tcp* newconn = NULL; struct tcp_header *tcp = dp_packet_l4(pkt); @@ -448,9 +447,7 @@ tcp_new_conn(struct conntrack_bucket *ctb, struct dp_packet *pkt, dst->max_win = 1; src->state = CT_DPIF_TCPS_SYN_SENT; dst->state = CT_DPIF_TCPS_CLOSED; - - conn_init_expiration(ctb, &newconn->up, CT_TM_TCP_FIRST_PACKET, - now); + conn_init_expiration(&newconn->up, CT_TM_TCP_FIRST_PACKET, now); return &newconn->up; } diff --git a/lib/conntrack.c b/lib/conntrack.c index 07ab0d0..8eb73a9 100644 --- a/lib/conntrack.c +++ b/lib/conntrack.c @@ -76,79 +76,67 @@ enum ct_alg_ctl_type { CT_ALG_CTL_SIP, }; -#define CONNTRACK_BUCKETS_SHIFT 8 -#define CONNTRACK_BUCKETS (1 << CONNTRACK_BUCKETS_SHIFT) -/* Independent buckets containing the connections */ -struct conntrack_bucket buckets[CONNTRACK_BUCKETS]; +struct OVS_LOCKABLE ct_rwlock { + struct ovs_rwlock lock; +}; + +/* This lock is used to guard alg_expectations and + * alg_expectation_refs. */ +static struct ct_rwlock resources_lock; + +/* Hash table for alg expectations. Expectations are created + * by control connections to help create data connections. */ +static struct hmap alg_expectations OVS_GUARDED_BY(resources_lock); +/* Only needed to be able to cleanup expectations from non-control + * connection context; otherwise a pointer to the expectation from + * the control connection would suffice. */ +static struct hindex alg_expectation_refs OVS_GUARDED_BY(resources_lock); + +struct OVS_LOCKABLE ct_lock { + struct ovs_mutex lock; +}; + +static struct ct_lock ct_lock; +static struct cmap cm_conns OVS_GUARDED_BY(ct_lock); +struct ovs_list cm_exp_lists[N_CT_TM] OVS_GUARDED_BY(ct_lock); /* Salt for hashing a connection key. */ -uint32_t hash_basis; +static uint32_t hash_basis; /* The thread performing periodic cleanup of the connection * tracker */ -pthread_t clean_thread; +static pthread_t clean_thread; /* Latch to destroy the 'clean_thread' */ -struct latch clean_thread_exit; +static struct latch clean_thread_exit; + /* Number of connections currently in the connection tracker. */ -atomic_count n_conn; +static atomic_count n_conn; /* Connections limit. When this limit is reached, no new connection * will be accepted. */ -atomic_uint n_conn_limit; -/* The following resources are referenced during nat connection - * creation and deletion. */ -struct hmap nat_conn_keys OVS_GUARDED; -/* Hash table for alg expectations. Expectations are created - * by control connections to help create data connections. */ -struct hmap alg_expectations OVS_GUARDED; -/* Used to lookup alg expectations from the control context. */ -struct hindex alg_expectation_refs OVS_GUARDED; -/* Expiry list for alg expectations. */ -struct ovs_list alg_exp_list OVS_GUARDED; -/* This lock is used during NAT connection creation and deletion; - * it is taken after a bucket lock and given back before that - * bucket unlock. - * This lock is similarly used to guard alg_expectations and - * alg_expectation_refs. If a bucket lock is also held during - * the normal code flow, then is must be taken first and released - * last. 
- */ -struct ct_rwlock resources_lock; +static atomic_uint n_conn_limit; + +/* Lock acquisition order: If multiple locks are taken, then the order is + * 'ct_lock', then conn entry lock and then 'resources_lock' and release + * happens in the reverse order. */ static bool conn_key_extract(struct dp_packet *, ovs_be16 dl_type, struct conn_lookup_ctx *, uint16_t zone); static uint32_t conn_key_hash(const struct conn_key *, uint32_t basis); static void conn_key_reverse(struct conn_key *); -static void conn_key_lookup(struct conntrack_bucket *ctb, - struct conn_lookup_ctx *ctx, - long long now); static bool valid_new(struct dp_packet *pkt, struct conn_key *); -static struct conn *new_conn(struct conntrack_bucket *, struct dp_packet *pkt, - struct conn_key *, long long now); -static void delete_conn(struct conn *); -static enum ct_update_res conn_update(struct conn *, - struct conntrack_bucket *ctb, - struct dp_packet *, bool reply, +static struct conn *new_conn(struct dp_packet *pkt, struct conn_key *, + long long now); +static enum ct_update_res conn_update(struct dp_packet *pkt, + struct conn *conn, + struct conn_lookup_ctx *ctx, long long now); +static void delete_conn(struct conn *); +static void delete_conn_one(struct conn *conn); static bool conn_expired(struct conn *, long long now); static void set_mark(struct dp_packet *, struct conn *, uint32_t val, uint32_t mask); static void set_label(struct dp_packet *, struct conn *, const struct ovs_key_ct_labels *val, const struct ovs_key_ct_labels *mask); -static void *clean_thread_main(void *f_); - -static struct nat_conn_key_node * -nat_conn_keys_lookup(struct hmap *nat_conn_keys_, - const struct conn_key *key, - uint32_t basis); - -static bool -nat_conn_keys_insert(struct hmap *nat_conn_keys_, - const struct conn *nat_conn, - uint32_t hash_basis); - -static void -nat_conn_keys_remove(struct hmap *nat_conn_keys_, - const struct conn_key *key, - uint32_t basis); +static void *clean_thread_main(void *); static bool nat_select_range_tuple(const struct conn *conn, struct conn *nat_conn); @@ -184,7 +172,7 @@ detect_ftp_ctl_type(const struct conn_lookup_ctx *ctx, struct dp_packet *pkt); static void -expectation_clean(const struct conn_key *master_key, uint32_t basis); +expectation_clean(const struct conn_key *master_key); static struct ct_l4_proto *l4_protos[] = { [IPPROTO_TCP] = &ct_proto_tcp, @@ -279,6 +267,50 @@ conn_key_cmp(const struct conn_key *key1, const struct conn_key *key2) } static void +conn_key_lookup(const struct conn_key *key, uint32_t hash, long long now, + struct conn **conn_out, bool *reply) +{ + struct conn *conn; + *conn_out = NULL; + + CMAP_FOR_EACH_WITH_HASH (conn, cm_node, hash, &cm_conns) { + if (!conn_key_cmp(&conn->key, key) && !conn_expired(conn, now)) { + *conn_out = conn; + *reply = false; + break; + } + if (!conn_key_cmp(&conn->rev_key, key) && !conn_expired(conn, now)) { + *conn_out = conn; + *reply = true; + break; + } + } +} + +static bool +conn_available(const struct conn_key *key, uint32_t hash, long long now) +{ + struct conn *conn; + bool found = false; + + CMAP_FOR_EACH_WITH_HASH (conn, cm_node, hash, &cm_conns) { + if (!conn_key_cmp(&conn->key, key) + && !conn_expired(conn, now)) { + found = true; + break; + } + + if (!conn_key_cmp(&conn->rev_key, key) + && !conn_expired(conn, now)) { + found = true; + break; + } + } + + return !found; +} + +static void ct_print_conn_info(const struct conn *c, const char *log_msg, enum vlog_level vll, bool force, bool rl_on) { @@ -338,31 +370,20 @@ ct_print_conn_info(const 
struct conn *c, const char *log_msg, void conntrack_init(void) { - long long now = time_msec(); - - ct_rwlock_init(&resources_lock); - ct_rwlock_wrlock(&resources_lock); - hmap_init(&nat_conn_keys); + ovs_rwlock_init(&resources_lock.lock); + ovs_rwlock_wrlock(&resources_lock.lock); hmap_init(&alg_expectations); hindex_init(&alg_expectation_refs); - ovs_list_init(&alg_exp_list); - ct_rwlock_unlock(&resources_lock); + ovs_rwlock_unlock(&resources_lock.lock); - for (unsigned i = 0; i < CONNTRACK_BUCKETS; i++) { - struct conntrack_bucket *ctb = &buckets[i]; - - ct_lock_init(&ctb->lock); - ct_lock_lock(&ctb->lock); - hmap_init(&ctb->connections); - for (unsigned j = 0; j < ARRAY_SIZE(ctb->exp_lists); j++) { - ovs_list_init(&ctb->exp_lists[j]); - } - ct_lock_unlock(&ctb->lock); - ovs_mutex_init(&ctb->cleanup_mutex); - ovs_mutex_lock(&ctb->cleanup_mutex); - ctb->next_cleanup = now + CT_TM_MIN; - ovs_mutex_unlock(&ctb->cleanup_mutex); + ovs_mutex_init_adaptive(&ct_lock.lock); + ovs_mutex_lock(&ct_lock.lock); + cmap_init(&cm_conns); + for (unsigned i = 0; i < ARRAY_SIZE(cm_exp_lists); i++) { + ovs_list_init(&cm_exp_lists[i]); } + ovs_mutex_unlock(&ct_lock.lock); + hash_basis = random_uint32(); atomic_count_init(&n_conn, 0); atomic_init(&n_conn_limit, DEFAULT_N_CONN_LIMIT); @@ -370,56 +391,76 @@ conntrack_init(void) clean_thread = ovs_thread_create("ct_clean", clean_thread_main, NULL); } +/* Must be called with 'conn' of 'conn_type' CT_CONN_TYPE_DEFAULT. Also + * removes the associated nat 'conn' from the lookup datastructures. */ +static void +conn_clean(struct conn *conn) + OVS_NO_THREAD_SAFETY_ANALYSIS +{ + ovs_assert(conn->conn_type == CT_CONN_TYPE_DEFAULT); + + if (conn->alg) { + expectation_clean(&conn->key); + } + + uint32_t hash = conn_key_hash(&conn->key, hash_basis); + cmap_remove(&cm_conns, &conn->cm_node, hash); + ovs_list_remove(&conn->exp_node); + if (conn->nat_conn) { + hash = conn_key_hash(&conn->nat_conn->key, hash_basis); + cmap_remove(&cm_conns, &conn->nat_conn->cm_node, hash); + } + ovsrcu_postpone(delete_conn, conn); + atomic_count_dec(&n_conn); +} + +static void +conn_clean_one(struct conn *conn) + OVS_NO_THREAD_SAFETY_ANALYSIS +{ + if (conn->alg) { + expectation_clean(&conn->key); + } + + uint32_t hash = conn_key_hash(&conn->key, hash_basis); + cmap_remove(&cm_conns, &conn->cm_node, hash); + ovs_list_remove(&conn->exp_node); + if (conn->conn_type == CT_CONN_TYPE_DEFAULT) { + atomic_count_dec(&n_conn); + } + ovsrcu_postpone(delete_conn_one, conn); +} + /* Destroys the connection tracker 'ct' and frees all the allocated memory. 
*/ void conntrack_destroy(void) + OVS_NO_THREAD_SAFETY_ANALYSIS { + struct conn *conn; latch_set(&clean_thread_exit); pthread_join(clean_thread, NULL); latch_destroy(&clean_thread_exit); - for (unsigned i = 0; i < CONNTRACK_BUCKETS; i++) { - struct conntrack_bucket *ctb = &buckets[i]; - struct conn *conn; - ovs_mutex_destroy(&ctb->cleanup_mutex); - ct_lock_lock(&ctb->lock); - HMAP_FOR_EACH_POP (conn, node, &ctb->connections) { - if (conn->conn_type == CT_CONN_TYPE_DEFAULT) { - atomic_count_dec(&n_conn); - } - delete_conn(conn); - } - hmap_destroy(&ctb->connections); - ct_lock_unlock(&ctb->lock); - ct_lock_destroy(&ctb->lock); + ovs_mutex_lock(&ct_lock.lock); + CMAP_FOR_EACH (conn, cm_node, &cm_conns) { + conn_clean_one(conn); } - ct_rwlock_wrlock(&resources_lock); - struct nat_conn_key_node *nat_conn_key_node; - HMAP_FOR_EACH_POP (nat_conn_key_node, node, &nat_conn_keys) { - free(nat_conn_key_node); - } - hmap_destroy(&nat_conn_keys); + cmap_destroy(&cm_conns); + ovs_mutex_unlock(&ct_lock.lock); + ovs_mutex_destroy(&ct_lock.lock); + ovs_rwlock_wrlock(&resources_lock.lock); struct alg_exp_node *alg_exp_node; HMAP_FOR_EACH_POP (alg_exp_node, node, &alg_expectations) { free(alg_exp_node); } - ovs_list_poison(&alg_exp_list); hmap_destroy(&alg_expectations); hindex_destroy(&alg_expectation_refs); - ct_rwlock_unlock(&resources_lock); - ct_rwlock_destroy(&resources_lock); + ovs_rwlock_unlock(&resources_lock.lock); + ovs_rwlock_destroy(&resources_lock.lock); } -static unsigned hash_to_bucket(uint32_t hash) -{ - /* Extracts the most significant bits in hash. The least significant bits - * are already used internally by the hmap implementation. */ - BUILD_ASSERT(CONNTRACK_BUCKETS_SHIFT < 32 && CONNTRACK_BUCKETS_SHIFT >= 1); - - return (hash >> (32 - CONNTRACK_BUCKETS_SHIFT)) % CONNTRACK_BUCKETS; -} static void write_ct_md(struct dp_packet *pkt, uint16_t zone, const struct conn *conn, @@ -544,13 +585,14 @@ alg_src_ip_wc(enum ct_alg_ctl_type alg_ctl_type) static void handle_alg_ctl(const struct conn_lookup_ctx *ctx, struct dp_packet *pkt, enum ct_alg_ctl_type ct_alg_ctl, const struct conn *conn, - long long now, bool nat, - const struct conn *conn_for_expectation) + long long now, bool nat) { /* ALG control packet handling with expectation creation. */ if (OVS_UNLIKELY(alg_helpers[ct_alg_ctl] && conn && conn->alg)) { - alg_helpers[ct_alg_ctl](ctx, pkt, conn_for_expectation, now, - CT_FTP_CTL_INTEREST, nat); + ovs_mutex_lock(&conn->lock.lock); + alg_helpers[ct_alg_ctl](ctx, pkt, conn, now, CT_FTP_CTL_INTEREST, + nat); + ovs_mutex_unlock(&conn->lock.lock); } } @@ -767,86 +809,19 @@ un_nat_packet(struct dp_packet *pkt, const struct conn *conn, } } -/* Typical usage of this helper is in non per-packet code; - * this is because the bucket lock needs to be held for lookup - * and a hash would have already been needed. Hence, this function - * is just intended for code clarity. 
*/ -static struct conn * -conn_lookup(const struct conn_key *key, long long now) -{ - struct conn_lookup_ctx ctx; - ctx.conn = NULL; - ctx.key = *key; - ctx.hash = conn_key_hash(key, hash_basis); - unsigned bucket = hash_to_bucket(ctx.hash); - conn_key_lookup(&buckets[bucket], &ctx, now); - return ctx.conn; -} - static void conn_seq_skew_set(const struct conn_key *key, long long now, int seq_skew, bool seq_skew_dir) { - unsigned bucket = hash_to_bucket(conn_key_hash(key, hash_basis)); - ct_lock_lock(&buckets[bucket].lock); - struct conn *conn = conn_lookup(key, now); + struct conn *conn; + bool reply; + uint32_t hash = conn_key_hash(key, hash_basis); + conn_key_lookup(key, hash, now, &conn, &reply); + if (conn && seq_skew) { conn->seq_skew = seq_skew; conn->seq_skew_dir = seq_skew_dir; } - ct_lock_unlock(&buckets[bucket].lock); -} - -static void -nat_clean(struct conn *conn, struct conntrack_bucket *ctb) - OVS_REQUIRES(ctb->lock) -{ - ct_rwlock_wrlock(&resources_lock); - nat_conn_keys_remove(&nat_conn_keys, &conn->rev_key, hash_basis); - ct_rwlock_unlock(&resources_lock); - ct_lock_unlock(&ctb->lock); - unsigned bucket_rev_conn = - hash_to_bucket(conn_key_hash(&conn->rev_key, hash_basis)); - ct_lock_lock(&buckets[bucket_rev_conn].lock); - ct_rwlock_wrlock(&resources_lock); - long long now = time_msec(); - struct conn *rev_conn = conn_lookup(&conn->rev_key, now); - struct nat_conn_key_node *nat_conn_key_node = - nat_conn_keys_lookup(&nat_conn_keys, &conn->rev_key, hash_basis); - - /* In the unlikely event, rev conn was recreated, then skip - * rev_conn cleanup. */ - if (rev_conn && (!nat_conn_key_node || - conn_key_cmp(&nat_conn_key_node->value, - &rev_conn->rev_key))) { - hmap_remove(&buckets[bucket_rev_conn].connections, &rev_conn->node); - free(rev_conn); - } - - delete_conn(conn); - ct_rwlock_unlock(&resources_lock); - ct_lock_unlock(&buckets[bucket_rev_conn].lock); - ct_lock_lock(&ctb->lock); -} - -/* Must be called with 'CT_CONN_TYPE_DEFAULT' 'conn_type'. */ -static void -conn_clean(struct conn *conn, struct conntrack_bucket *ctb) - OVS_REQUIRES(ctb->lock) -{ - ovs_assert(conn->conn_type == CT_CONN_TYPE_DEFAULT); - - if (conn->alg) { - expectation_clean(&conn->key, hash_basis); - } - ovs_list_remove(&conn->exp_node); - hmap_remove(&ctb->connections, &conn->node); - atomic_count_dec(&n_conn); - if (conn->nat_info) { - nat_clean(conn, ctb); - } else { - delete_conn(conn); - } } static bool @@ -869,17 +844,15 @@ ct_verify_helper(const char *helper, enum ct_alg_ctl_type ct_alg_ctl) } } -/* This function is called with the bucket lock held. 
*/ static struct conn * conn_not_found(struct dp_packet *pkt, struct conn_lookup_ctx *ctx, bool commit, long long now, const struct nat_action_info_t *nat_action_info, - struct conn *conn_for_un_nat_copy, - const char *helper, - const struct alg_exp_node *alg_exp, + const char *helper, const struct alg_exp_node *alg_exp, enum ct_alg_ctl_type ct_alg_ctl) { struct conn *nc = NULL; + struct conn *nat_conn = NULL; if (!valid_new(pkt, &ctx->key)) { pkt->md.ct_state = CS_INVALID; @@ -901,8 +874,7 @@ conn_not_found(struct dp_packet *pkt, struct conn_lookup_ctx *ctx, return nc; } - unsigned bucket = hash_to_bucket(ctx->hash); - nc = new_conn(&buckets[bucket], pkt, &ctx->key, now); + nc = new_conn(pkt, &ctx->key, now); ctx->conn = nc; nc->rev_key = nc->key; conn_key_reverse(&nc->rev_key); @@ -921,6 +893,8 @@ conn_not_found(struct dp_packet *pkt, struct conn_lookup_ctx *ctx, if (nat_action_info) { nc->nat_info = xmemdup(nat_action_info, sizeof *nc->nat_info); + nat_conn = xzalloc(sizeof *nat_conn); + if (alg_exp) { if (alg_exp->nat_rpl_dst) { nc->rev_key.dst.addr = alg_exp->alg_nat_repl_addr; @@ -929,59 +903,50 @@ conn_not_found(struct dp_packet *pkt, struct conn_lookup_ctx *ctx, nc->rev_key.src.addr = alg_exp->alg_nat_repl_addr; nc->nat_info->nat_action = NAT_ACTION_DST; } - *conn_for_un_nat_copy = *nc; - ct_rwlock_wrlock(&resources_lock); - bool new_insert = nat_conn_keys_insert(&nat_conn_keys, - conn_for_un_nat_copy, - hash_basis); - ct_rwlock_unlock(&resources_lock); - if (!new_insert) { - char *log_msg = xasprintf("Pre-existing alg " - "nat_conn_key"); - ct_print_conn_info(conn_for_un_nat_copy, log_msg, VLL_INFO, - true, false); - free(log_msg); - } + *nat_conn = *nc; } else { - *conn_for_un_nat_copy = *nc; - ct_rwlock_wrlock(&resources_lock); - bool nat_res = nat_select_range_tuple(nc, - conn_for_un_nat_copy); + *nat_conn = *nc; + bool nat_res = nat_select_range_tuple(nc, nat_conn); if (!nat_res) { goto nat_res_exhaustion; } - /* Update nc with nat adjustments made to - * conn_for_un_nat_copy by nat_select_range_tuple(). */ - *nc = *conn_for_un_nat_copy; - ct_rwlock_unlock(&resources_lock); + /* Update nc with nat adjustments. */ + *nc = *nat_conn; } - conn_for_un_nat_copy->conn_type = CT_CONN_TYPE_UN_NAT; - conn_for_un_nat_copy->nat_info = NULL; - conn_for_un_nat_copy->alg = NULL; nat_packet(pkt, nc, ctx->icmp_related); - } - hmap_insert(&buckets[bucket].connections, &nc->node, ctx->hash); + + nat_conn->key = nc->rev_key; + nat_conn->rev_key = nc->key; + nat_conn->conn_type = CT_CONN_TYPE_UN_NAT; + nat_conn->nat_info = NULL; + nat_conn->alg = NULL; + nat_conn->nat_conn = NULL; + uint32_t nat_hash = conn_key_hash(&nat_conn->key, + hash_basis); + cmap_insert(&cm_conns, &nat_conn->cm_node, nat_hash); + } + + nc->nat_conn = nat_conn; + ovs_mutex_init_adaptive(&nc->lock.lock); + nc->conn_type = CT_CONN_TYPE_DEFAULT; + cmap_insert(&cm_conns, &nc->cm_node, ctx->hash); + nc->inserted = true; atomic_count_inc(&n_conn); } return nc; - /* This would be a user error or a DOS attack. - * A user error is prevented by allocating enough - * combinations of NAT addresses when combined with - * ephemeral ports. A DOS attack should be protected - * against with firewall rules or a separate firewall. - * Also using zone partitioning can limit DoS impact. */ + /* This would be a user error or a DOS attack. A user error is prevented + * by allocating enough combinations of NAT addresses when combined with + * ephemeral ports. A DOS attack should be protected against with + * firewall rules or a separate firewall. 
Also using zone partitioning + * can limit DoS impact. */ nat_res_exhaustion: + free(nat_conn); ovs_list_remove(&nc->exp_node); delete_conn(nc); - /* conn_for_un_nat_copy is a local variable in process_one; this - * memset() serves to document that conn_for_un_nat_copy is from - * this point on unused. */ - memset(conn_for_un_nat_copy, 0, sizeof *conn_for_un_nat_copy); - ct_rwlock_unlock(&resources_lock); static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 5); VLOG_WARN_RL(&rl, "Unable to NAT due to tuple space exhaustion - " "if DoS attack, use firewalling and/or zone partitioning."); @@ -990,9 +955,10 @@ nat_res_exhaustion: static bool conn_update_state(struct dp_packet *pkt, struct conn_lookup_ctx *ctx, - struct conn **conn, long long now, unsigned bucket) - OVS_REQUIRES(buckets[bucket].lock) + struct conn *conn, long long now) { + ovs_assert(conn->conn_type == CT_CONN_TYPE_DEFAULT); + bool create_new_conn = false; if (ctx->icmp_related) { @@ -1001,12 +967,11 @@ conn_update_state(struct dp_packet *pkt, struct conn_lookup_ctx *ctx, pkt->md.ct_state |= CS_REPLY_DIR; } } else { - if ((*conn)->alg_related) { + if (conn->alg_related) { pkt->md.ct_state |= CS_RELATED; } - enum ct_update_res res = conn_update(*conn, &buckets[bucket], - pkt, ctx->reply, now); + enum ct_update_res res = conn_update(pkt, conn, ctx, now); switch (res) { case CT_UPDATE_VALID: @@ -1020,7 +985,9 @@ conn_update_state(struct dp_packet *pkt, struct conn_lookup_ctx *ctx, pkt->md.ct_state = CS_INVALID; break; case CT_UPDATE_NEW: - conn_clean(*conn, &buckets[bucket]); + ovs_mutex_lock(&ct_lock.lock); + conn_clean(conn); + ovs_mutex_unlock(&ct_lock.lock); create_new_conn = true; break; default: @@ -1031,51 +998,6 @@ conn_update_state(struct dp_packet *pkt, struct conn_lookup_ctx *ctx, } static void -create_un_nat_conn(struct conn *conn_for_un_nat_copy, long long now, - bool alg_un_nat) -{ - struct conn *nc = xmemdup(conn_for_un_nat_copy, sizeof *nc); - nc->key = conn_for_un_nat_copy->rev_key; - nc->rev_key = conn_for_un_nat_copy->key; - uint32_t un_nat_hash = conn_key_hash(&nc->key, hash_basis); - unsigned un_nat_conn_bucket = hash_to_bucket(un_nat_hash); - ct_lock_lock(&buckets[un_nat_conn_bucket].lock); - struct conn *rev_conn = conn_lookup(&nc->key, now); - - if (alg_un_nat) { - if (!rev_conn) { - hmap_insert(&buckets[un_nat_conn_bucket].connections, - &nc->node, un_nat_hash); - } else { - char *log_msg = xasprintf("Unusual condition for un_nat conn " - "create for alg: rev_conn %p", rev_conn); - ct_print_conn_info(nc, log_msg, VLL_INFO, true, false); - free(log_msg); - free(nc); - } - } else { - ct_rwlock_rdlock(&resources_lock); - - struct nat_conn_key_node *nat_conn_key_node = - nat_conn_keys_lookup(&nat_conn_keys, &nc->key, hash_basis); - if (nat_conn_key_node && !conn_key_cmp(&nat_conn_key_node->value, - &nc->rev_key) && !rev_conn) { - hmap_insert(&buckets[un_nat_conn_bucket].connections, &nc->node, - un_nat_hash); - } else { - char *log_msg = xasprintf("Unusual condition for un_nat conn " - "create: nat_conn_key_node/rev_conn " - "%p/%p", nat_conn_key_node, rev_conn); - ct_print_conn_info(nc, log_msg, VLL_INFO, true, false); - free(log_msg); - free(nc); - } - ct_rwlock_unlock(&resources_lock); - } - ct_lock_unlock(&buckets[un_nat_conn_bucket].lock); -} - -static void handle_nat(struct dp_packet *pkt, struct conn *conn, uint16_t zone, bool reply, bool related) { @@ -1097,9 +1019,8 @@ handle_nat(struct dp_packet *pkt, struct conn *conn, static bool check_orig_tuple(struct dp_packet *pkt, struct conn_lookup_ctx 
*ctx_in, - long long now, unsigned *bucket, struct conn **conn, + long long now, struct conn **conn, const struct nat_action_info_t *nat_action_info) - OVS_REQUIRES(buckets[(*bucket)].lock) { if ((ctx_in->key.dl_type == htons(ETH_TYPE_IP) && !pkt->md.ct_orig_tuple.ipv4.ipv4_proto) || @@ -1110,57 +1031,48 @@ check_orig_tuple(struct dp_packet *pkt, struct conn_lookup_ctx *ctx_in, return false; } - ct_lock_unlock(&buckets[(*bucket)].lock); - struct conn_lookup_ctx ctx; - memset(&ctx, 0 , sizeof ctx); - ctx.conn = NULL; + struct conn_key key; + memset(&key, 0 , sizeof key); if (ctx_in->key.dl_type == htons(ETH_TYPE_IP)) { - ctx.key.src.addr.ipv4_aligned = pkt->md.ct_orig_tuple.ipv4.ipv4_src; - ctx.key.dst.addr.ipv4_aligned = pkt->md.ct_orig_tuple.ipv4.ipv4_dst; + key.src.addr.ipv4_aligned = pkt->md.ct_orig_tuple.ipv4.ipv4_src; + key.dst.addr.ipv4_aligned = pkt->md.ct_orig_tuple.ipv4.ipv4_dst; if (ctx_in->key.nw_proto == IPPROTO_ICMP) { - ctx.key.src.icmp_id = ctx_in->key.src.icmp_id; - ctx.key.dst.icmp_id = ctx_in->key.dst.icmp_id; + key.src.icmp_id = ctx_in->key.src.icmp_id; + key.dst.icmp_id = ctx_in->key.dst.icmp_id; uint16_t src_port = ntohs(pkt->md.ct_orig_tuple.ipv4.src_port); - ctx.key.src.icmp_type = (uint8_t) src_port; - ctx.key.dst.icmp_type = reverse_icmp_type(ctx.key.src.icmp_type); + key.src.icmp_type = (uint8_t) src_port; + key.dst.icmp_type = reverse_icmp_type(key.src.icmp_type); } else { - ctx.key.src.port = pkt->md.ct_orig_tuple.ipv4.src_port; - ctx.key.dst.port = pkt->md.ct_orig_tuple.ipv4.dst_port; + key.src.port = pkt->md.ct_orig_tuple.ipv4.src_port; + key.dst.port = pkt->md.ct_orig_tuple.ipv4.dst_port; } - ctx.key.nw_proto = pkt->md.ct_orig_tuple.ipv4.ipv4_proto; + key.nw_proto = pkt->md.ct_orig_tuple.ipv4.ipv4_proto; } else { - ctx.key.src.addr.ipv6_aligned = pkt->md.ct_orig_tuple.ipv6.ipv6_src; - ctx.key.dst.addr.ipv6_aligned = pkt->md.ct_orig_tuple.ipv6.ipv6_dst; + key.src.addr.ipv6_aligned = pkt->md.ct_orig_tuple.ipv6.ipv6_src; + key.dst.addr.ipv6_aligned = pkt->md.ct_orig_tuple.ipv6.ipv6_dst; if (ctx_in->key.nw_proto == IPPROTO_ICMPV6) { - ctx.key.src.icmp_id = ctx_in->key.src.icmp_id; - ctx.key.dst.icmp_id = ctx_in->key.dst.icmp_id; + key.src.icmp_id = ctx_in->key.src.icmp_id; + key.dst.icmp_id = ctx_in->key.dst.icmp_id; uint16_t src_port = ntohs(pkt->md.ct_orig_tuple.ipv6.src_port); - ctx.key.src.icmp_type = (uint8_t) src_port; - ctx.key.dst.icmp_type = reverse_icmp6_type(ctx.key.src.icmp_type); + key.src.icmp_type = (uint8_t) src_port; + key.dst.icmp_type = reverse_icmp6_type(key.src.icmp_type); } else { - ctx.key.src.port = pkt->md.ct_orig_tuple.ipv6.src_port; - ctx.key.dst.port = pkt->md.ct_orig_tuple.ipv6.dst_port; + key.src.port = pkt->md.ct_orig_tuple.ipv6.src_port; + key.dst.port = pkt->md.ct_orig_tuple.ipv6.dst_port; } - ctx.key.nw_proto = pkt->md.ct_orig_tuple.ipv6.ipv6_proto; + key.nw_proto = pkt->md.ct_orig_tuple.ipv6.ipv6_proto; } - ctx.key.dl_type = ctx_in->key.dl_type; - ctx.key.zone = pkt->md.ct_zone; - ctx.hash = conn_key_hash(&ctx.key, hash_basis); - *bucket = hash_to_bucket(ctx.hash); - ct_lock_lock(&buckets[(*bucket)].lock); - conn_key_lookup(&buckets[(*bucket)], &ctx, now); - *conn = ctx.conn; - return *conn ? true : false; -} + key.dl_type = ctx_in->key.dl_type; + key.zone = pkt->md.ct_zone; + uint32_t hash = conn_key_hash(&key, hash_basis); + bool reply; + conn_key_lookup(&key, hash, now, conn, &reply); -static bool -is_un_nat_conn_valid(const struct conn *un_nat_conn) -{ - return un_nat_conn->conn_type == CT_CONN_TYPE_UN_NAT; + return *conn ? 
true : false; } static bool @@ -1168,25 +1080,28 @@ conn_update_state_alg(struct dp_packet *pkt, struct conn_lookup_ctx *ctx, struct conn *conn, const struct nat_action_info_t *nat_action_info, enum ct_alg_ctl_type ct_alg_ctl, long long now, - unsigned bucket, bool *create_new_conn) - OVS_REQUIRES(buckets[bucket].lock) + bool *create_new_conn) { if (is_ftp_ctl(ct_alg_ctl)) { /* Keep sequence tracking in sync with the source of the * sequence skew. */ + ovs_mutex_lock(&conn->lock.lock); if (ctx->reply != conn->seq_skew_dir) { handle_ftp_ctl(ctx, pkt, conn, now, CT_FTP_CTL_OTHER, !!nat_action_info); - *create_new_conn = conn_update_state(pkt, ctx, &conn, now, - bucket); + /* conn_update_state locks for unrelated fields, so unlock. */ + ovs_mutex_unlock(&conn->lock.lock); + *create_new_conn = conn_update_state(pkt, ctx, conn, now); } else { - *create_new_conn = conn_update_state(pkt, ctx, &conn, now, - bucket); - + /* conn_update_state locks for unrelated fields, so unlock. */ + ovs_mutex_unlock(&conn->lock.lock); + *create_new_conn = conn_update_state(pkt, ctx, conn, now); + ovs_mutex_lock(&conn->lock.lock); if (*create_new_conn == false) { handle_ftp_ctl(ctx, pkt, conn, now, CT_FTP_CTL_OTHER, !!nat_action_info); } + ovs_mutex_unlock(&conn->lock.lock); } return true; } @@ -1195,74 +1110,57 @@ conn_update_state_alg(struct dp_packet *pkt, struct conn_lookup_ctx *ctx, static void process_one(struct dp_packet *pkt, struct conn_lookup_ctx *ctx, uint16_t zone, - bool force, bool commit, long long now, const uint32_t *setmark, + bool force, bool commit, long long now, + const uint32_t *setmark, const struct ovs_key_ct_labels *setlabel, const struct nat_action_info_t *nat_action_info, ovs_be16 tp_src, ovs_be16 tp_dst, const char *helper) { - struct conn *conn; - unsigned bucket = hash_to_bucket(ctx->hash); - ct_lock_lock(&buckets[bucket].lock); - conn_key_lookup(&buckets[bucket], ctx, now); - conn = ctx->conn; + bool create_new_conn = false; + conn_key_lookup(&ctx->key, ctx->hash, now, &ctx->conn, &ctx->reply); + struct conn *conn = ctx->conn; /* Delete found entry if in wrong direction. 'force' implies commit. */ if (conn && force && ctx->reply) { - conn_clean(conn, &buckets[bucket]); + ovs_mutex_lock(&ct_lock.lock); + conn_clean(conn); + ovs_mutex_unlock(&ct_lock.lock); conn = NULL; } if (OVS_LIKELY(conn)) { if (conn->conn_type == CT_CONN_TYPE_UN_NAT) { - ctx->reply = true; + struct conn *rev_conn = conn; /* Save for debugging. */ + uint32_t hash = conn_key_hash(&conn->rev_key, hash_basis); + conn_key_lookup(&ctx->key, hash, now, &conn, &ctx->reply); - struct conn_lookup_ctx ctx2; - ctx2.conn = NULL; - ctx2.key = conn->rev_key; - ctx2.hash = conn_key_hash(&conn->rev_key, hash_basis); - - ct_lock_unlock(&buckets[bucket].lock); - bucket = hash_to_bucket(ctx2.hash); - - ct_lock_lock(&buckets[bucket].lock); - conn_key_lookup(&buckets[bucket], &ctx2, now); - - if (ctx2.conn) { - conn = ctx2.conn; - } else { - /* It is a race condition where conn has timed out and removed - * between unlock of the rev_conn and lock of the forward conn; - * nothing to do. 
*/ + if (!conn) { pkt->md.ct_state |= CS_TRACKED | CS_INVALID; - ct_lock_unlock(&buckets[bucket].lock); + char *log_msg = xasprintf("Missing master conn %p", rev_conn); + ct_print_conn_info(conn, log_msg, VLL_INFO, true, true); + free(log_msg); return; } } } - bool create_new_conn = false; - struct conn conn_for_un_nat_copy; - conn_for_un_nat_copy.conn_type = CT_CONN_TYPE_DEFAULT; - enum ct_alg_ctl_type ct_alg_ctl = get_alg_ctl_type(pkt, tp_src, tp_dst, helper); if (OVS_LIKELY(conn)) { if (OVS_LIKELY(!conn_update_state_alg(pkt, ctx, conn, nat_action_info, - ct_alg_ctl, now, bucket, + ct_alg_ctl, now, &create_new_conn))) { - create_new_conn = conn_update_state(pkt, ctx, &conn, now, - bucket); + + create_new_conn = conn_update_state(pkt, ctx, conn, now); } if (nat_action_info && !create_new_conn) { handle_nat(pkt, conn, zone, ctx->reply, ctx->icmp_related); } - - } else if (check_orig_tuple(pkt, ctx, now, &bucket, &conn, - nat_action_info)) { - create_new_conn = conn_update_state(pkt, ctx, &conn, now, bucket); + } else if (check_orig_tuple(pkt, ctx, now, &conn, nat_action_info)) { + create_new_conn = conn_update_state(pkt, ctx, conn, now); } else { if (ctx->icmp_related) { /* An icmp related conn should always be found; no new @@ -1277,19 +1175,20 @@ process_one(struct dp_packet *pkt, struct conn_lookup_ctx *ctx, uint16_t zone, struct alg_exp_node alg_exp_entry; if (OVS_UNLIKELY(create_new_conn)) { - - ct_rwlock_rdlock(&resources_lock); - alg_exp = expectation_lookup(&alg_expectations, &ctx->key, hash_basis, + ovs_rwlock_rdlock(&resources_lock.lock); + alg_exp = expectation_lookup(&alg_expectations, &ctx->key, + hash_basis, alg_src_ip_wc(ct_alg_ctl)); if (alg_exp) { alg_exp_entry = *alg_exp; alg_exp = &alg_exp_entry; } - ct_rwlock_unlock(&resources_lock); + ovs_rwlock_unlock(&resources_lock.lock); + ovs_mutex_lock(&ct_lock.lock); conn = conn_not_found(pkt, ctx, commit, now, nat_action_info, - &conn_for_un_nat_copy, helper, alg_exp, - ct_alg_ctl); + helper, alg_exp, ct_alg_ctl); + ovs_mutex_unlock(&ct_lock.lock); } write_ct_md(pkt, zone, conn, &ctx->key, alg_exp); @@ -1302,23 +1201,11 @@ process_one(struct dp_packet *pkt, struct conn_lookup_ctx *ctx, uint16_t zone, set_label(pkt, conn, &setlabel[0], &setlabel[1]); } - struct conn conn_for_expectation; - if (OVS_UNLIKELY((ct_alg_ctl != CT_ALG_CTL_NONE) && conn)) { - conn_for_expectation = *conn; - } - - ct_lock_unlock(&buckets[bucket].lock); - - if (is_un_nat_conn_valid(&conn_for_un_nat_copy)) { - create_un_nat_conn(&conn_for_un_nat_copy, now, !!alg_exp); - } - - handle_alg_ctl(ctx, pkt, ct_alg_ctl, conn, now, !!nat_action_info, - &conn_for_expectation); + handle_alg_ctl(ctx, pkt, ct_alg_ctl, conn, now, !!nat_action_info); } /* Sends the packets in '*pkt_batch' through the connection tracker 'ct'. All - * the packets should have the same 'dl_type' (IPv4 or IPv6) and should have + * the packets must have the same 'dl_type' (IPv4 or IPv6) and should have * the l3 and and l4 offset properly set. 
* * If 'commit' is true, the packets are allowed to create new entries in the @@ -1334,12 +1221,12 @@ conntrack_execute(struct dp_packet_batch *pkt_batch, ovs_be16 dl_type, const struct nat_action_info_t *nat_action_info, long long now) { - struct dp_packet *packet; struct conn_lookup_ctx ctx; DP_PACKET_BATCH_FOR_EACH (i, packet, pkt_batch) { - if (!conn_key_extract(packet, dl_type, &ctx, zone)) { + if (packet->md.ct_state == CS_INVALID + || !conn_key_extract(packet, dl_type, &ctx, zone)) { packet->md.ct_state = CS_INVALID; write_ct_md(packet, zone, NULL, NULL, NULL); continue; @@ -1392,35 +1279,57 @@ set_label(struct dp_packet *pkt, struct conn *conn, } -/* Delete the expired connections from 'ctb', up to 'limit'. Returns the - * earliest expiration time among the remaining connections in 'ctb'. Returns - * LLONG_MAX if 'ctb' is empty. The return value might be smaller than 'now', - * if 'limit' is reached */ +/* Delete the expired connections, up to 'limit'. Returns the earliest + * expiration time among the remaining connections in all expiration lists. + * Returns LLONG_MAX if all expiration lists are empty. The return value + * might be smaller than 'now',if 'limit' is reached */ static long long -sweep_bucket(struct conntrack_bucket *ctb, long long now, size_t limit) - OVS_REQUIRES(ctb->lock) +ct_sweep(long long now, size_t limit) { struct conn *conn, *next; long long min_expiration = LLONG_MAX; size_t count = 0; + ovs_mutex_lock(&ct_lock.lock); + for (unsigned i = 0; i < N_CT_TM; i++) { - LIST_FOR_EACH_SAFE (conn, next, exp_node, &ctb->exp_lists[i]) { + LIST_FOR_EACH_SAFE (conn, next, exp_node, &cm_exp_lists[i]) { if (conn->conn_type == CT_CONN_TYPE_DEFAULT) { - if (!conn_expired(conn, now) || count >= limit) { + ovs_mutex_lock(&conn->lock.lock); + if (conn->exp_list_id != NO_UPD_EXP_LIST) { + ovs_list_remove(&conn->exp_node); + ovs_list_push_back(&cm_exp_lists[conn->exp_list_id], + &conn->exp_node); + conn->exp_list_id = NO_UPD_EXP_LIST; + ovs_mutex_unlock(&conn->lock.lock); + } else if (!conn_expired(conn, now) || count >= limit) { + /* Not looking at conn changable fields. */ + ovs_mutex_unlock(&conn->lock.lock); min_expiration = MIN(min_expiration, conn->expiration); if (count >= limit) { /* Do not check other lists. */ COVERAGE_INC(conntrack_long_cleanup); - return min_expiration; + goto out; } break; + } else { + /* Not looking at conn changable fields. */ + ovs_mutex_unlock(&conn->lock.lock); + if (conn->inserted) { + conn_clean(conn); + } else { + break; + } } - conn_clean(conn, ctb); count++; } } } + +out: + VLOG_DBG("conntrack cleanup %"PRIuSIZE" entries in %lld msec", count, + time_msec() - now); + ovs_mutex_unlock(&ct_lock.lock); return min_expiration; } @@ -1431,50 +1340,11 @@ sweep_bucket(struct conntrack_bucket *ctb, long long now, size_t limit) static long long conntrack_clean(long long now) { - long long next_wakeup = now + CT_TM_MIN; unsigned int n_conn_limit_; - size_t clean_count = 0; - atomic_read_relaxed(&n_conn_limit, &n_conn_limit_); - for (unsigned i = 0; i < CONNTRACK_BUCKETS; i++) { - struct conntrack_bucket *ctb = &buckets[i]; - size_t prev_count; - long long min_exp; - - ovs_mutex_lock(&ctb->cleanup_mutex); - if (ctb->next_cleanup > now) { - goto next_bucket; - } - - ct_lock_lock(&ctb->lock); - prev_count = hmap_count(&ctb->connections); - /* If the connections are well distributed among buckets, we want to - * limit to 10% of the global limit equally split among buckets. 
If - * the bucket is busier than the others, we limit to 10% of its - * current size. */ - min_exp = sweep_bucket(ctb, now, - MAX(prev_count / 10, n_conn_limit_ / (CONNTRACK_BUCKETS * 10))); - clean_count += prev_count - hmap_count(&ctb->connections); - - if (min_exp > now) { - /* We call hmap_shrink() only if sweep_bucket() managed to delete - * every expired connection. */ - hmap_shrink(&ctb->connections); - } - - ct_lock_unlock(&ctb->lock); - - ctb->next_cleanup = MIN(min_exp, now + CT_TM_MIN); - -next_bucket: - next_wakeup = MIN(next_wakeup, ctb->next_cleanup); - ovs_mutex_unlock(&ctb->cleanup_mutex); - } - - VLOG_DBG("conntrack cleanup %"PRIuSIZE" entries in %lld msec", - clean_count, time_msec() - now); - + long long min_exp = ct_sweep(now, n_conn_limit_ / 50); + long long next_wakeup = MIN(min_exp, now + CT_TM_MIN); return next_wakeup; } @@ -1492,16 +1362,16 @@ next_bucket: * are coping with the current cleanup tasks, then we wait at least * 5 seconds to do further cleanup. * - * - We don't want to keep the buckets locked too long, as we might prevent + * - We don't want to keep the map locked too long, as we might prevent * traffic from flowing. CT_CLEAN_MIN_INTERVAL ensures that if cleanup is - * behind, there is at least some 200ms blocks of time when buckets will be + * behind, there is at least some 200ms blocks of time when the map will be * left alone, so the datapath can operate unhindered. */ #define CT_CLEAN_INTERVAL 5000 /* 5 seconds */ #define CT_CLEAN_MIN_INTERVAL 200 /* 0.2 seconds */ static void * -clean_thread_main(void *f_ OVS_UNUSED) +clean_thread_main(void *f OVS_UNUSED) { while (!latch_is_set(&clean_thread_exit)) { long long next_wake; @@ -2192,7 +2062,9 @@ nat_select_range_tuple(const struct conn *conn, struct conn *nat_conn) uint16_t port = first_port; bool all_ports_tried = false; - bool original_ports_tried = false; + /* For DNAT, we don't use ephemeral ports. */ + bool ephemeral_ports_tried = conn->nat_info->nat_action & NAT_ACTION_DST + ? true : false; struct ct_addr first_addr = ct_addr; while (true) { @@ -2211,8 +2083,10 @@ nat_select_range_tuple(const struct conn *conn, struct conn *nat_conn) nat_conn->rev_key.src.port = htons(port); } - bool new_insert = nat_conn_keys_insert(&nat_conn_keys, nat_conn, - hash_basis); + uint32_t conn_hash = conn_key_hash(&nat_conn->rev_key, hash_basis); + bool new_insert = conn_available(&nat_conn->rev_key, conn_hash, + time_msec()); + if (new_insert) { return true; } else if (!all_ports_tried) { @@ -2238,13 +2112,14 @@ nat_select_range_tuple(const struct conn *conn, struct conn *nat_conn) ct_addr = conn->nat_info->min_addr; } if (!memcmp(&ct_addr, &first_addr, sizeof ct_addr)) { - if (!original_ports_tried) { - original_ports_tried = true; + if (ephemeral_ports_tried) { + break; + } else { + ephemeral_ports_tried = true; ct_addr = conn->nat_info->min_addr; + first_addr = ct_addr; min_port = MIN_NAT_EPHEMERAL_PORT; max_port = MAX_NAT_EPHEMERAL_PORT; - } else { - break; } } first_port = min_port; @@ -2255,95 +2130,6 @@ nat_select_range_tuple(const struct conn *conn, struct conn *nat_conn) return false; } -/* This function must be called with the resources lock taken. 
*/ -static struct nat_conn_key_node * -nat_conn_keys_lookup(struct hmap *nat_conn_keys_, - const struct conn_key *key, - uint32_t basis) -{ - struct nat_conn_key_node *nat_conn_key_node; - - HMAP_FOR_EACH_WITH_HASH (nat_conn_key_node, node, - conn_key_hash(key, basis), nat_conn_keys_) { - if (!conn_key_cmp(&nat_conn_key_node->key, key)) { - return nat_conn_key_node; - } - } - return NULL; -} - -/* This function must be called with the resources lock taken. */ -static bool -nat_conn_keys_insert(struct hmap *nat_conn_keys_, const struct conn *nat_conn, - uint32_t basis) -{ - struct nat_conn_key_node *nat_conn_key_node = - nat_conn_keys_lookup(nat_conn_keys_, &nat_conn->rev_key, basis); - - if (!nat_conn_key_node) { - struct nat_conn_key_node *nat_conn_key = - xzalloc(sizeof *nat_conn_key); - nat_conn_key->key = nat_conn->rev_key; - nat_conn_key->value = nat_conn->key; - hmap_insert(nat_conn_keys_, &nat_conn_key->node, - conn_key_hash(&nat_conn_key->key, basis)); - return true; - } - return false; -} - -/* This function must be called with the resources write lock taken. */ -static void -nat_conn_keys_remove(struct hmap *nat_conn_keys_, - const struct conn_key *key, - uint32_t basis) -{ - struct nat_conn_key_node *nat_conn_key_node; - - HMAP_FOR_EACH_WITH_HASH (nat_conn_key_node, node, - conn_key_hash(key, basis), nat_conn_keys_) { - if (!conn_key_cmp(&nat_conn_key_node->key, key)) { - hmap_remove(nat_conn_keys_, &nat_conn_key_node->node); - free(nat_conn_key_node); - return; - } - } -} - -static void -conn_key_lookup(struct conntrack_bucket *ctb, struct conn_lookup_ctx *ctx, - long long now) - OVS_REQUIRES(ctb->lock) -{ - uint32_t hash = ctx->hash; - struct conn *conn; - - ctx->conn = NULL; - - HMAP_FOR_EACH_WITH_HASH (conn, node, hash, &ctb->connections) { - if (!conn_key_cmp(&conn->key, &ctx->key) - && !conn_expired(conn, now)) { - ctx->conn = conn; - ctx->reply = false; - break; - } - if (!conn_key_cmp(&conn->rev_key, &ctx->key) - && !conn_expired(conn, now)) { - ctx->conn = conn; - ctx->reply = true; - break; - } - } -} - -static enum ct_update_res -conn_update(struct conn *conn, struct conntrack_bucket *ctb, - struct dp_packet *pkt, bool reply, long long now) -{ - return l4_protos[conn->key.nw_proto]->conn_update(conn, ctb, pkt, - reply, now); -} - static bool conn_expired(struct conn *conn, long long now) { @@ -2360,10 +2146,9 @@ valid_new(struct dp_packet *pkt, struct conn_key *key) } static struct conn * -new_conn(struct conntrack_bucket *ctb, struct dp_packet *pkt, - struct conn_key *key, long long now) +new_conn(struct dp_packet *pkt, struct conn_key *key, long long now) { - struct conn *newconn = l4_protos[key->nw_proto]->new_conn(ctb, pkt, now); + struct conn *newconn = l4_protos[key->nw_proto]->new_conn(pkt, now); if (newconn) { newconn->key = *key; } @@ -2371,11 +2156,38 @@ new_conn(struct conntrack_bucket *ctb, struct dp_packet *pkt, return newconn; } +static enum ct_update_res +conn_update(struct dp_packet *pkt, struct conn *conn, + struct conn_lookup_ctx *ctx, long long now) +{ + enum ct_update_res update_res = + l4_protos[conn->key.nw_proto]->conn_update(conn, pkt, ctx->reply, + now); + return update_res; +} + static void delete_conn(struct conn *conn) { - free(conn->nat_info); - free(conn->alg); + if (conn->conn_type == CT_CONN_TYPE_DEFAULT) { + free(conn->nat_info); + free(conn->alg); + ovs_mutex_destroy(&conn->lock.lock); + if (conn->nat_conn) { + free(conn->nat_conn); + } + free(conn); + } +} + +static void +delete_conn_one(struct conn *conn) +{ + if (conn->conn_type == 
CT_CONN_TYPE_DEFAULT) { + free(conn->nat_info); + free(conn->alg); + ovs_mutex_destroy(&conn->lock.lock); + } free(conn); } @@ -2507,7 +2319,7 @@ conntrack_dump_start(struct conntrack_dump *dump, const uint16_t *pzone, dump->filter_zone = true; } - *ptot_bkts = CONNTRACK_BUCKETS; + *ptot_bkts = 1; /* Need to clean up the callers. */ return 0; } @@ -2516,36 +2328,21 @@ conntrack_dump_next(struct conntrack_dump *dump, struct ct_dpif_entry *entry) { long long now = time_msec(); - while (dump->bucket < CONNTRACK_BUCKETS) { - struct hmap_node *node; - - ct_lock_lock(&buckets[dump->bucket].lock); - for (;;) { - struct conn *conn; - - node = hmap_at_position(&buckets[dump->bucket].connections, - &dump->bucket_pos); - if (!node) { - break; - } - INIT_CONTAINER(conn, node, node); - if ((!dump->filter_zone || conn->key.zone == dump->zone) && - (conn->conn_type != CT_CONN_TYPE_UN_NAT)) { - conn_to_ct_dpif_entry(conn, entry, now, dump->bucket); - break; - } - /* Else continue, until we find an entry in the appropriate zone - * or the bucket has been scanned completely. */ + for (;;) { + struct cmap_node *cm_node = cmap_next_position(&cm_conns, + &dump->cm_pos); + if (!cm_node) { + break; } - ct_lock_unlock(&buckets[dump->bucket].lock); - - if (!node) { - memset(&dump->bucket_pos, 0, sizeof dump->bucket_pos); - dump->bucket++; - } else { + struct conn *conn; + INIT_CONTAINER(conn, cm_node, cm_node); + if ((!dump->filter_zone || conn->key.zone == dump->zone) && + (conn->conn_type != CT_CONN_TYPE_UN_NAT)) { + conn_to_ct_dpif_entry(conn, entry, now, 0); return 0; } } + return EOF; } @@ -2558,42 +2355,41 @@ conntrack_dump_done(struct conntrack_dump *dump OVS_UNUSED) int conntrack_flush(const uint16_t *zone) { - for (unsigned i = 0; i < CONNTRACK_BUCKETS; i++) { - struct conn *conn, *next; + struct conn *conn; - ct_lock_lock(&buckets[i].lock); - HMAP_FOR_EACH_SAFE (conn, next, node, &buckets[i].connections) { - if ((!zone || *zone == conn->key.zone) && - (conn->conn_type == CT_CONN_TYPE_DEFAULT)) { - conn_clean(conn, &buckets[i]); - } + ovs_mutex_lock(&ct_lock.lock); + + CMAP_FOR_EACH (conn, cm_node, &cm_conns) { + if (!zone || *zone == conn->key.zone) { + conn_clean_one(conn); } - ct_lock_unlock(&buckets[i].lock); } + ovs_mutex_unlock(&ct_lock.lock); + return 0; } int conntrack_flush_tuple(const struct ct_dpif_tuple *tuple, uint16_t zone) { - struct conn_lookup_ctx ctx; int error = 0; + struct conn_lookup_ctx ctx; memset(&ctx, 0, sizeof(ctx)); tuple_to_conn_key(tuple, zone, &ctx.key); ctx.hash = conn_key_hash(&ctx.key, hash_basis); - unsigned bucket = hash_to_bucket(ctx.hash); - ct_lock_lock(&buckets[bucket].lock); - conn_key_lookup(&buckets[bucket], &ctx, time_msec()); + ovs_mutex_lock(&ct_lock.lock); + conn_key_lookup(&ctx.key, ctx.hash, time_msec(), &ctx.conn, &ctx.reply); + if (ctx.conn && ctx.conn->conn_type == CT_CONN_TYPE_DEFAULT) { - conn_clean(ctx.conn, &buckets[bucket]); + conn_clean(ctx.conn); } else { VLOG_WARN("Must flush tuple using the original pre-NATed tuple"); error = ENOENT; } - ct_lock_unlock(&buckets[bucket].lock); + ovs_mutex_unlock(&ct_lock.lock); return error; } @@ -2693,22 +2489,22 @@ expectation_ref_create(struct hindex *alg_expectation_refs_, } static void -expectation_clean(const struct conn_key *master_key, uint32_t basis) +expectation_clean(const struct conn_key *master_key) { - ct_rwlock_wrlock(&resources_lock); + ovs_rwlock_wrlock(&resources_lock.lock); struct alg_exp_node *node, *next; HINDEX_FOR_EACH_WITH_HASH_SAFE (node, next, node_ref, - conn_key_hash(master_key, basis), 
+ conn_key_hash(master_key, hash_basis), &alg_expectation_refs) { if (!conn_key_cmp(&node->master_key, master_key)) { - expectation_remove(&alg_expectations, &node->key, basis); + expectation_remove(&alg_expectations, &node->key, hash_basis); hindex_remove(&alg_expectation_refs, &node->node_ref); free(node); } } - ct_rwlock_unlock(&resources_lock); + ovs_rwlock_unlock(&resources_lock.lock); } static void @@ -2756,12 +2552,12 @@ expectation_create(ovs_be16 dst_port, const struct conn *master_conn, /* Take the write lock here because it is almost 100% * likely that the lookup will fail and * expectation_create() will be called below. */ - ct_rwlock_wrlock(&resources_lock); + ovs_rwlock_wrlock(&resources_lock.lock); struct alg_exp_node *alg_exp = expectation_lookup( &alg_expectations, &alg_exp_node->key, hash_basis, src_ip_wc); if (alg_exp) { free(alg_exp_node); - ct_rwlock_unlock(&resources_lock); + ovs_rwlock_unlock(&resources_lock.lock); return; } @@ -2770,7 +2566,7 @@ expectation_create(ovs_be16 dst_port, const struct conn *master_conn, conn_key_hash(&alg_exp_node->key, hash_basis)); expectation_ref_create(&alg_expectation_refs, alg_exp_node, hash_basis); - ct_rwlock_unlock(&resources_lock); + ovs_rwlock_unlock(&resources_lock.lock); } static uint8_t diff --git a/lib/conntrack.h b/lib/conntrack.h index 80ba80e..58981bd 100644 --- a/lib/conntrack.h +++ b/lib/conntrack.h @@ -19,6 +19,7 @@ #include +#include "cmap.h" #include "latch.h" #include "odp-netlink.h" #include "openvswitch/hmap.h" @@ -42,10 +43,6 @@ * * conntrack_init(); * - * It is necessary to periodically issue a call to - * - * to allow the module to clean up expired connections. - * * To send a group of packets through the connection tracker: * * conntrack_execute(pkt_batch, ...); @@ -94,9 +91,8 @@ int conntrack_execute(struct dp_packet_batch *pkt_batch, ovs_be16 dl_type, void conntrack_clear(struct dp_packet *packet); struct conntrack_dump { - struct conntrack *ct; unsigned bucket; - struct hmap_position bucket_pos; + struct cmap_position cm_pos; bool filter_zone; uint16_t zone; }; @@ -114,103 +110,5 @@ int conntrack_set_maxconns(uint32_t maxconns); int conntrack_get_maxconns(uint32_t *maxconns); int conntrack_get_nconns(uint32_t *nconns); -/* 'struct ct_lock' is a wrapper for an adaptive mutex. It's useful to try - * different types of locks (e.g. 
spinlocks) */ - -struct OVS_LOCKABLE ct_lock { - struct ovs_mutex lock; -}; - -struct OVS_LOCKABLE ct_rwlock { - struct ovs_rwlock lock; -}; - -static inline void ct_lock_init(struct ct_lock *lock) -{ - ovs_mutex_init_adaptive(&lock->lock); -} - -static inline void ct_lock_lock(struct ct_lock *lock) - OVS_ACQUIRES(lock) - OVS_NO_THREAD_SAFETY_ANALYSIS -{ - ovs_mutex_lock(&lock->lock); -} - -static inline void ct_lock_unlock(struct ct_lock *lock) - OVS_RELEASES(lock) - OVS_NO_THREAD_SAFETY_ANALYSIS -{ - ovs_mutex_unlock(&lock->lock); -} - -static inline void ct_lock_destroy(struct ct_lock *lock) -{ - ovs_mutex_destroy(&lock->lock); -} - -static inline void ct_rwlock_init(struct ct_rwlock *lock) -{ - ovs_rwlock_init(&lock->lock); -} - - -static inline void ct_rwlock_wrlock(struct ct_rwlock *lock) - OVS_ACQ_WRLOCK(lock) - OVS_NO_THREAD_SAFETY_ANALYSIS -{ - ovs_rwlock_wrlock(&lock->lock); -} - -static inline void ct_rwlock_rdlock(struct ct_rwlock *lock) - OVS_ACQ_RDLOCK(lock) - OVS_NO_THREAD_SAFETY_ANALYSIS -{ - ovs_rwlock_rdlock(&lock->lock); -} - -static inline void ct_rwlock_unlock(struct ct_rwlock *lock) - OVS_RELEASES(lock) - OVS_NO_THREAD_SAFETY_ANALYSIS -{ - ovs_rwlock_unlock(&lock->lock); -} - -static inline void ct_rwlock_destroy(struct ct_rwlock *lock) -{ - ovs_rwlock_destroy(&lock->lock); -} - - -/* Timeouts: all the possible timeout states passed to update_expiration() - * are listed here. The name will be prefix by CT_TM_ and the value is in - * milliseconds */ -#define CT_TIMEOUTS \ - CT_TIMEOUT(TCP_FIRST_PACKET, 30 * 1000) \ - CT_TIMEOUT(TCP_OPENING, 30 * 1000) \ - CT_TIMEOUT(TCP_ESTABLISHED, 24 * 60 * 60 * 1000) \ - CT_TIMEOUT(TCP_CLOSING, 15 * 60 * 1000) \ - CT_TIMEOUT(TCP_FIN_WAIT, 45 * 1000) \ - CT_TIMEOUT(TCP_CLOSED, 30 * 1000) \ - CT_TIMEOUT(OTHER_FIRST, 60 * 1000) \ - CT_TIMEOUT(OTHER_MULTIPLE, 60 * 1000) \ - CT_TIMEOUT(OTHER_BIDIR, 30 * 1000) \ - CT_TIMEOUT(ICMP_FIRST, 60 * 1000) \ - CT_TIMEOUT(ICMP_REPLY, 30 * 1000) - -/* The smallest of the above values: it is used as an upper bound for the - * interval between two rounds of cleanup of expired entries */ -#define CT_TM_MIN (30 * 1000) - -#define CT_TIMEOUT(NAME, VAL) BUILD_ASSERT_DECL(VAL >= CT_TM_MIN); - CT_TIMEOUTS -#undef CT_TIMEOUT - -enum ct_timeout { -#define CT_TIMEOUT(NAME, VALUE) CT_TM_##NAME, - CT_TIMEOUTS -#undef CT_TIMEOUT - N_CT_TM -}; #endif /* conntrack.h */ From patchwork Wed Nov 28 16:31:52 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Darrell Ball X-Patchwork-Id: 1004625 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (mailfrom) smtp.mailfrom=openvswitch.org (client-ip=140.211.169.12; helo=mail.linuxfoundation.org; envelope-from=ovs-dev-bounces@openvswitch.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=fail reason="signature verification failed" (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.b="ceq7N8P5"; dkim-atps=neutral Received: from mail.linuxfoundation.org (mail.linuxfoundation.org [140.211.169.12]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 434mR854n8z9s2P for ; Thu, 29 Nov 2018 03:32:52 +1100 (AEDT) Received: from mail.linux-foundation.org (localhost [127.0.0.1]) by mail.linuxfoundation.org 
(Postfix) with ESMTP id EB10BC11; Wed, 28 Nov 2018 16:32:13 +0000 (UTC) X-Original-To: dev@openvswitch.org Delivered-To: ovs-dev@mail.linuxfoundation.org Received: from smtp1.linuxfoundation.org (smtp1.linux-foundation.org [172.17.192.35]) by mail.linuxfoundation.org (Postfix) with ESMTPS id F1334B5D for ; Wed, 28 Nov 2018 16:32:11 +0000 (UTC) X-Greylist: whitelisted by SQLgrey-1.7.6 Received: from mail-pl1-f175.google.com (mail-pl1-f175.google.com [209.85.214.175]) by smtp1.linuxfoundation.org (Postfix) with ESMTPS id 4D974771 for ; Wed, 28 Nov 2018 16:32:11 +0000 (UTC) Received: by mail-pl1-f175.google.com with SMTP id z23so17637778plo.0 for ; Wed, 28 Nov 2018 08:32:11 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:subject:date:message-id:in-reply-to:references; bh=GyfMcZ8UJW5pbhvonPG2frbmEPUDD06rmFOfCSPpZMY=; b=ceq7N8P51CMJIgUDpcmEjC9fdojJo6jvMVRJk2vATsYVnBK3zxNig8f9Qjs7dzz9P7 q+olPy6QW3tUY5ZwvmzkvrBDdPYood/8UtV10dleXygScFyAT6GhyxgNUMaK4X3c1o+v Vs1M1Gaq5NeXJE9+u3Xt70vQx60K+q3YwTx1YwQmmZJKhKgyEk0rePnwMnNO6PV+ZGgQ 1CfV+UULBhQ7CDuWN+JyaKK4nSGJpXeX4Kvz841gNXP8MWIjTEQYbk2wSQYlbOQQy9Yj KnqWphut6gHOgbRe7XzEoM+533QHVAloudpBt98dv+pMH/DTlPEdKi78R0+96BcLekMq 6Cfw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to :references; bh=GyfMcZ8UJW5pbhvonPG2frbmEPUDD06rmFOfCSPpZMY=; b=sTFHcYFvtwxabUhhH/YnQSpdheZHHM7EqipNUL1xX41RHgGOQqrRrsK8Vxchd4jRbf D7SOrQH5QGFZujmaEvpAYCnAexTYw3S9+dGT1ZLUNlPk0dWSqOzwqVxtqqBYrNHOx2CT x4xJQ0hBeROQbSSavcFeOKYaB67WO8oLUgPgxdmxHNDMs+Oq3TrA8C+nGCmlLY1aU+lU 8oFdXvmaGMvM20/3SlSgA/J7f48R/eaJPPFST4UzYCgViROQ/owu4tBGnROMEAGvKoKy 0z2keB96tqz2ZK0A/x9P99PzCyw9FWF85NVKqi1Wk4O2SUEu7YKfQijTxEYhYOx2FNef LKQg== X-Gm-Message-State: AA+aEWYS2MD0C9UlwiQqocPvetCJRqYtRggknmLGXZwdbw3IJN7vknv8 MZAvBX5meVeDXwblMorhFcM= X-Google-Smtp-Source: AFSGD/X+seZFy7DjgyS8TgE66fHmTIOxr32X4OgIkYincqXmkXEpsYVFX05LFUJIdEoZE/4UFhH9Fw== X-Received: by 2002:a17:902:7d82:: with SMTP id a2mr2975270plm.163.1543422730811; Wed, 28 Nov 2018 08:32:10 -0800 (PST) Received: from ubuntu.localdomain (c-76-102-76-212.hsd1.ca.comcast.net. [76.102.76.212]) by smtp.gmail.com with ESMTPSA id v9sm1201512pfg.144.2018.11.28.08.32.09 (version=TLS1_2 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Wed, 28 Nov 2018 08:32:10 -0800 (PST) From: Darrell Ball To: dlu998@gmail.com, dev@openvswitch.org Date: Wed, 28 Nov 2018 08:31:52 -0800 Message-Id: <1543422714-100901-4-git-send-email-dlu998@gmail.com> X-Mailer: git-send-email 1.9.1 In-Reply-To: <1543422714-100901-1-git-send-email-dlu998@gmail.com> References: <1543422714-100901-1-git-send-email-dlu998@gmail.com> X-Spam-Status: No, score=-1.7 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_ENVFROM_END_DIGIT,FREEMAIL_FROM, RCVD_IN_DNSWL_NONE autolearn=no version=3.3.1 X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on smtp1.linux-foundation.org Subject: [ovs-dev] [patch v2 3/5] conntrack: Make 'conn' lock protocol specific. X-BeenThere: ovs-dev@openvswitch.org X-Mailman-Version: 2.1.12 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , MIME-Version: 1.0 Sender: ovs-dev-bounces@openvswitch.org Errors-To: ovs-dev-bounces@openvswitch.org For performance reasons, make 'conn' lock protocol specific. 
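To illustrate the locking model this patch moves to, here is a minimal, self-contained sketch of the dispatch pattern: generic conntrack code calls optional per-protocol lock hooks through the L4 protocol vtable, so only protocols that actually need a per-connection lock (TCP here) carry one, and the hooks are skipped when a protocol leaves them NULL. The types below are simplified stand-ins (pthread instead of ovs_mutex, a 'proto' pointer instead of indexing l4_protos[] by conn->key.nw_proto); this is not the patch's code verbatim.

#include <pthread.h>
#include <stddef.h>

struct conn;

struct ct_l4_proto {
    /* Optional hooks; NULL means the protocol keeps no per-conn lock. */
    void (*conn_lock)(struct conn *);
    void (*conn_unlock)(struct conn *);
    void (*conn_destroy)(struct conn *);
};

struct conn {
    const struct ct_l4_proto *proto;   /* Simplification: the real code
                                        * looks up l4_protos[] by
                                        * conn->key.nw_proto. */
};

/* TCP embeds 'struct conn' as its first member and keeps its own mutex. */
struct conn_tcp {
    struct conn up;
    pthread_mutex_t lock;
};

static void
tcp_conn_lock(struct conn *conn_)
{
    /* Safe cast because 'up' is the first member (the real code uses a
     * CONTAINER_OF-style helper). */
    struct conn_tcp *conn = (struct conn_tcp *) conn_;
    pthread_mutex_lock(&conn->lock);
}

static void
tcp_conn_unlock(struct conn *conn_)
{
    struct conn_tcp *conn = (struct conn_tcp *) conn_;
    pthread_mutex_unlock(&conn->lock);
}

static void
tcp_conn_destroy(struct conn *conn_)
{
    struct conn_tcp *conn = (struct conn_tcp *) conn_;
    pthread_mutex_destroy(&conn->lock);
}

static const struct ct_l4_proto ct_proto_tcp_sketch = {
    .conn_lock = tcp_conn_lock,
    .conn_unlock = tcp_conn_unlock,
    .conn_destroy = tcp_conn_destroy,
};

/* Generic code locks only when the protocol provides a hook, so UDP and
 * ICMP connections pay nothing on the update path. */
static void
conn_entry_lock(struct conn *conn)
{
    if (conn->proto->conn_lock) {
        conn->proto->conn_lock(conn);
    }
}

static void
conn_entry_unlock(struct conn *conn)
{
    if (conn->proto->conn_unlock) {
        conn->proto->conn_unlock(conn);
    }
}

Keeping the lock inside the TCP-specific connection struct means connections for protocols without a lock hook both shrink and never take a lock on the update path.
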
Signed-off-by: Darrell Ball --- lib/conntrack-private.h | 8 +++---- lib/conntrack-tcp.c | 43 +++++++++++++++++++++++++++++++++---- lib/conntrack.c | 56 ++++++++++++++++++++++++++++++++++++------------- 3 files changed, 83 insertions(+), 24 deletions(-) diff --git a/lib/conntrack-private.h b/lib/conntrack-private.h index 3d838e4..ac891cc 100644 --- a/lib/conntrack-private.h +++ b/lib/conntrack-private.h @@ -81,16 +81,11 @@ struct alg_exp_node { bool nat_rpl_dst; }; -struct OVS_LOCKABLE ct_ce_lock { - struct ovs_mutex lock; -}; - struct conn { struct conn_key key; struct conn_key rev_key; /* Only used for orig_tuple support. */ struct conn_key master_key; - struct ct_ce_lock lock; long long expiration; struct ovs_list exp_node; struct cmap_node cm_node; @@ -142,6 +137,9 @@ struct ct_l4_proto { long long now); void (*conn_get_protoinfo)(const struct conn *, struct ct_dpif_protoinfo *); + void (*conn_lock)(struct conn *); + void (*conn_unlock)(struct conn *); + void (*conn_destroy)(struct conn *); }; /* Timeouts: all the possible timeout states passed to update_expiration() diff --git a/lib/conntrack-tcp.c b/lib/conntrack-tcp.c index 19fdf1d..9805332 100644 --- a/lib/conntrack-tcp.c +++ b/lib/conntrack-tcp.c @@ -54,6 +54,7 @@ struct tcp_peer { struct conn_tcp { struct conn up; struct tcp_peer peer[2]; + struct ovs_mutex lock; }; enum { @@ -144,10 +145,34 @@ tcp_get_wscale(const struct tcp_header *tcp) return wscale; } +static void +tcp_conn_lock(struct conn *conn_) + OVS_NO_THREAD_SAFETY_ANALYSIS +{ + struct conn_tcp *conn = conn_tcp_cast(conn_); + ovs_mutex_lock(&conn->lock); +} + +static void +tcp_conn_unlock(struct conn *conn_) + OVS_NO_THREAD_SAFETY_ANALYSIS +{ + struct conn_tcp *conn = conn_tcp_cast(conn_); + ovs_mutex_unlock(&conn->lock); +} + +static void +tcp_conn_destroy(struct conn *conn_) +{ + struct conn_tcp *conn = conn_tcp_cast(conn_); + ovs_mutex_destroy(&conn->lock); +} + static enum ct_update_res tcp_conn_update(struct conn *conn_, struct dp_packet *pkt, bool reply, long long now) { + tcp_conn_lock(conn_); struct conn_tcp *conn = conn_tcp_cast(conn_); struct tcp_header *tcp = dp_packet_l4(pkt); /* The peer that sent 'pkt' */ @@ -156,20 +181,23 @@ tcp_conn_update(struct conn *conn_, struct dp_packet *pkt, bool reply, struct tcp_peer *dst = &conn->peer[reply ? 
0 : 1]; uint8_t sws = 0, dws = 0; uint16_t tcp_flags = TCP_FLAGS(tcp->tcp_ctl); + enum ct_update_res rc = CT_UPDATE_VALID; uint16_t win = ntohs(tcp->tcp_winsz); uint32_t ack, end, seq, orig_seq; uint32_t p_len = tcp_payload_length(pkt); if (tcp_invalid_flags(tcp_flags)) { - return CT_UPDATE_INVALID; + rc = CT_UPDATE_INVALID; + goto out; } if (((tcp_flags & (TCP_SYN | TCP_ACK)) == TCP_SYN) && dst->state >= CT_DPIF_TCPS_FIN_WAIT_2 && src->state >= CT_DPIF_TCPS_FIN_WAIT_2) { src->state = dst->state = CT_DPIF_TCPS_CLOSED; - return CT_UPDATE_NEW; + rc = CT_UPDATE_NEW; + goto out; } if (src->wscale & CT_WSCALE_FLAG @@ -385,10 +413,13 @@ tcp_conn_update(struct conn *conn_, struct dp_packet *pkt, bool reply, src->state = dst->state = CT_DPIF_TCPS_TIME_WAIT; } } else { - return CT_UPDATE_INVALID; + rc = CT_UPDATE_INVALID; + goto out; } - return CT_UPDATE_VALID; +out: + tcp_conn_unlock(conn_); + return rc; } static bool @@ -448,6 +479,7 @@ tcp_new_conn(struct dp_packet *pkt, long long now) src->state = CT_DPIF_TCPS_SYN_SENT; dst->state = CT_DPIF_TCPS_CLOSED; conn_init_expiration(&newconn->up, CT_TM_TCP_FIRST_PACKET, now); + ovs_mutex_init_adaptive(&newconn->lock); return &newconn->up; } @@ -490,4 +522,7 @@ struct ct_l4_proto ct_proto_tcp = { .valid_new = tcp_valid_new, .conn_update = tcp_conn_update, .conn_get_protoinfo = tcp_conn_get_protoinfo, + .conn_lock = tcp_conn_lock, + .conn_unlock = tcp_conn_unlock, + .conn_destroy = tcp_conn_destroy, }; diff --git a/lib/conntrack.c b/lib/conntrack.c index 8eb73a9..c47a0b0 100644 --- a/lib/conntrack.c +++ b/lib/conntrack.c @@ -583,16 +583,43 @@ alg_src_ip_wc(enum ct_alg_ctl_type alg_ctl_type) } static void +conn_entry_lock(struct conn *conn) +{ + struct ct_l4_proto *class = l4_protos[conn->key.nw_proto]; + if (class->conn_lock) { + class->conn_lock(conn); + } +} + +static void +conn_entry_unlock(struct conn *conn) +{ + struct ct_l4_proto *class = l4_protos[conn->key.nw_proto]; + if (class->conn_unlock) { + class->conn_unlock(conn); + } +} + +static void +conn_entry_destroy(struct conn *conn) +{ + struct ct_l4_proto *class = l4_protos[conn->key.nw_proto]; + if (class->conn_destroy) { + class->conn_destroy(conn); + } +} + +static void handle_alg_ctl(const struct conn_lookup_ctx *ctx, struct dp_packet *pkt, - enum ct_alg_ctl_type ct_alg_ctl, const struct conn *conn, + enum ct_alg_ctl_type ct_alg_ctl, struct conn *conn, long long now, bool nat) { /* ALG control packet handling with expectation creation. */ if (OVS_UNLIKELY(alg_helpers[ct_alg_ctl] && conn && conn->alg)) { - ovs_mutex_lock(&conn->lock.lock); + conn_entry_lock(conn); alg_helpers[ct_alg_ctl](ctx, pkt, conn, now, CT_FTP_CTL_INTEREST, nat); - ovs_mutex_unlock(&conn->lock.lock); + conn_entry_unlock(conn); } } @@ -929,7 +956,6 @@ conn_not_found(struct dp_packet *pkt, struct conn_lookup_ctx *ctx, } nc->nat_conn = nat_conn; - ovs_mutex_init_adaptive(&nc->lock.lock); nc->conn_type = CT_CONN_TYPE_DEFAULT; cmap_insert(&cm_conns, &nc->cm_node, ctx->hash); nc->inserted = true; @@ -1085,23 +1111,23 @@ conn_update_state_alg(struct dp_packet *pkt, struct conn_lookup_ctx *ctx, if (is_ftp_ctl(ct_alg_ctl)) { /* Keep sequence tracking in sync with the source of the * sequence skew. */ - ovs_mutex_lock(&conn->lock.lock); + conn_entry_lock(conn); if (ctx->reply != conn->seq_skew_dir) { handle_ftp_ctl(ctx, pkt, conn, now, CT_FTP_CTL_OTHER, !!nat_action_info); /* conn_update_state locks for unrelated fields, so unlock. 
*/ - ovs_mutex_unlock(&conn->lock.lock); + conn_entry_unlock(conn); *create_new_conn = conn_update_state(pkt, ctx, conn, now); } else { /* conn_update_state locks for unrelated fields, so unlock. */ - ovs_mutex_unlock(&conn->lock.lock); + conn_entry_unlock(conn); *create_new_conn = conn_update_state(pkt, ctx, conn, now); - ovs_mutex_lock(&conn->lock.lock); + conn_entry_lock(conn); if (*create_new_conn == false) { handle_ftp_ctl(ctx, pkt, conn, now, CT_FTP_CTL_OTHER, !!nat_action_info); } - ovs_mutex_unlock(&conn->lock.lock); + conn_entry_unlock(conn); } return true; } @@ -1295,16 +1321,16 @@ ct_sweep(long long now, size_t limit) for (unsigned i = 0; i < N_CT_TM; i++) { LIST_FOR_EACH_SAFE (conn, next, exp_node, &cm_exp_lists[i]) { if (conn->conn_type == CT_CONN_TYPE_DEFAULT) { - ovs_mutex_lock(&conn->lock.lock); + conn_entry_lock(conn); if (conn->exp_list_id != NO_UPD_EXP_LIST) { ovs_list_remove(&conn->exp_node); ovs_list_push_back(&cm_exp_lists[conn->exp_list_id], &conn->exp_node); conn->exp_list_id = NO_UPD_EXP_LIST; - ovs_mutex_unlock(&conn->lock.lock); + conn_entry_unlock(conn); } else if (!conn_expired(conn, now) || count >= limit) { /* Not looking at conn changable fields. */ - ovs_mutex_unlock(&conn->lock.lock); + conn_entry_unlock(conn); min_expiration = MIN(min_expiration, conn->expiration); if (count >= limit) { /* Do not check other lists. */ @@ -1314,7 +1340,7 @@ ct_sweep(long long now, size_t limit) break; } else { /* Not looking at conn changable fields. */ - ovs_mutex_unlock(&conn->lock.lock); + conn_entry_unlock(conn); if (conn->inserted) { conn_clean(conn); } else { @@ -2172,7 +2198,7 @@ delete_conn(struct conn *conn) if (conn->conn_type == CT_CONN_TYPE_DEFAULT) { free(conn->nat_info); free(conn->alg); - ovs_mutex_destroy(&conn->lock.lock); + conn_entry_destroy(conn); if (conn->nat_conn) { free(conn->nat_conn); } @@ -2186,7 +2212,7 @@ delete_conn_one(struct conn *conn) if (conn->conn_type == CT_CONN_TYPE_DEFAULT) { free(conn->nat_info); free(conn->alg); - ovs_mutex_destroy(&conn->lock.lock); + conn_entry_destroy(conn); } free(conn); } From patchwork Wed Nov 28 16:31:53 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Darrell Ball X-Patchwork-Id: 1004628 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (mailfrom) smtp.mailfrom=openvswitch.org (client-ip=140.211.169.12; helo=mail.linuxfoundation.org; envelope-from=ovs-dev-bounces@openvswitch.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=fail reason="signature verification failed" (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.b="TRQwIdky"; dkim-atps=neutral Received: from mail.linuxfoundation.org (mail.linuxfoundation.org [140.211.169.12]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 434mV20Vrfz9s3Z for ; Thu, 29 Nov 2018 03:35:22 +1100 (AEDT) Received: from mail.linux-foundation.org (localhost [127.0.0.1]) by mail.linuxfoundation.org (Postfix) with ESMTP id E7268C3F; Wed, 28 Nov 2018 16:32:15 +0000 (UTC) X-Original-To: dev@openvswitch.org Delivered-To: ovs-dev@mail.linuxfoundation.org Received: from smtp1.linuxfoundation.org (smtp1.linux-foundation.org [172.17.192.35]) by mail.linuxfoundation.org (Postfix) with ESMTPS id 07491C2A for 
; Wed, 28 Nov 2018 16:32:15 +0000 (UTC) X-Greylist: whitelisted by SQLgrey-1.7.6 Received: from mail-pl1-f175.google.com (mail-pl1-f175.google.com [209.85.214.175]) by smtp1.linuxfoundation.org (Postfix) with ESMTPS id 737B5771 for ; Wed, 28 Nov 2018 16:32:13 +0000 (UTC) Received: by mail-pl1-f175.google.com with SMTP id e5so17630136plb.5 for ; Wed, 28 Nov 2018 08:32:13 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:subject:date:message-id:in-reply-to:references; bh=oFGl2ZvCYpjJLNWSL9aFoQWiKYBEGF085pVN3cw16TM=; b=TRQwIdkyteqj5mSst0ARhJbaAJcN14XMZDMwHHapgt4a1eXfut+w7P9Nj8F5LZdMH9 BTLd0CFJMMZynxUGXR1xOKeb5NN6ulHsSawZ0hky/WrT4f4ZLz1l+rWBiiMTr68EZWlT nhYuY0YuxF0oHyCQn0UnnamVd0Ind+BWtILKjtcNPvZd+eUIPmdo4Xh0g5iiXJdctViz ApRQyUcTt4TxFzMzHktQCLXSb5uPgzifXys07B5QDJ9zRjpTQaISKurng65NCsynsAPg ZVDIQcopEiVhyG90cOhDxi13zH52FETZJKJ9TkewyIJ3L1S7LpELtmo04wGuy7jtYDEU IzUA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to :references; bh=oFGl2ZvCYpjJLNWSL9aFoQWiKYBEGF085pVN3cw16TM=; b=ATY452RmvOU7k9/g9TXvvapsvqhNxJqw480LxcH4IvAv7koNFo1rWdWCCH4vJIbCd3 BN2o+5NwB4GUHq1Jsq7b9xUcAo9I+PqE+W9sWytxW0dhwSF7+NgdFfiLQTvU/zekPke9 /2WfKBqDuDAkm0V2xXJuxkqflbVyymM9MVosVT5L+Z5kb4TMtumfKPm49dIK0uGoSZgq EoED60xpE036TBF480k+yH2HUjxCy7t0aJhjsPLk9gxLp04jY92xFtC0elTEY4wwogVX PMCTxumXNudrD1wjaW2hBpgGrS4gsWQAJu6dFQqAwnm8tw1R1vLlTUu8y4fa5ZLMueiv NeUA== X-Gm-Message-State: AA+aEWYTFHx2ZlzrPwYYcUVRYEDI+X90Ec8RTUM26bi503Q+E+SDLfXk 8srAKZtR1gtaL7RaVGpD8yVaUZcZ X-Google-Smtp-Source: AFSGD/WZ6SSlDhrf2AMEk5whJG6YPaZK81E+tFg72GICv18OA2yDouZccCrilx6YaGzovrHDtzkMAg== X-Received: by 2002:a17:902:6e16:: with SMTP id u22mr37026606plk.175.1543422732826; Wed, 28 Nov 2018 08:32:12 -0800 (PST) Received: from ubuntu.localdomain (c-76-102-76-212.hsd1.ca.comcast.net. [76.102.76.212]) by smtp.gmail.com with ESMTPSA id v9sm1201512pfg.144.2018.11.28.08.32.11 (version=TLS1_2 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Wed, 28 Nov 2018 08:32:12 -0800 (PST) From: Darrell Ball To: dlu998@gmail.com, dev@openvswitch.org Date: Wed, 28 Nov 2018 08:31:53 -0800 Message-Id: <1543422714-100901-5-git-send-email-dlu998@gmail.com> X-Mailer: git-send-email 1.9.1 In-Reply-To: <1543422714-100901-1-git-send-email-dlu998@gmail.com> References: <1543422714-100901-1-git-send-email-dlu998@gmail.com> X-Spam-Status: No, score=-1.7 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_ENVFROM_END_DIGIT,FREEMAIL_FROM, RCVD_IN_DNSWL_NONE autolearn=no version=3.3.1 X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on smtp1.linux-foundation.org Subject: [ovs-dev] [patch v2 4/5] conntrack: Memory savings. X-BeenThere: ovs-dev@openvswitch.org X-Mailman-Version: 2.1.12 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , MIME-Version: 1.0 Sender: ovs-dev-bounces@openvswitch.org Errors-To: ovs-dev-bounces@openvswitch.org Allocate memory for nat and algs as needed. 
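As a rough illustration of where the savings come from (a sketch with simplified stand-in fields, not the patch's exact structures): the NAT-only and ALG-only members move out of 'struct conn' into separately allocated extension blocks, and a new connection allocates an extension only if it actually uses NAT or an ALG helper, so the common plain connection carries just two NULL pointers. Error handling is omitted here; the real code uses OVS's xzalloc()/xmemdup(), which abort on allocation failure.

#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

struct nat_action_info_t { int nat_action; };   /* Stand-in. */

struct conn_ext_nat {
    struct nat_action_info_t *nat_info;
};

struct conn_ext_alg {
    char *alg;            /* Helper name. */
    bool alg_related;     /* True for ALG data connections. */
};

struct conn {
    /* Hot, always-present fields would live here. */
    struct conn_ext_nat *ext_nat;   /* NULL unless the conn is NATed. */
    struct conn_ext_alg *ext_alg;   /* NULL unless an ALG applies. */
};

/* Allocate only the extensions this connection needs; most connections
 * need neither. */
static void
conn_init_extensions(struct conn *nc,
                     const struct nat_action_info_t *nat_action_info,
                     const char *helper)
{
    if (helper) {
        nc->ext_alg = calloc(1, sizeof *nc->ext_alg);
        nc->ext_alg->alg = strdup(helper);
    }
    if (nat_action_info) {
        nc->ext_nat = calloc(1, sizeof *nc->ext_nat);
        nc->ext_nat->nat_info = malloc(sizeof *nc->ext_nat->nat_info);
        memcpy(nc->ext_nat->nat_info, nat_action_info,
               sizeof *nc->ext_nat->nat_info);
    }
}

/* Readers NULL-check the extension before touching its fields, as the
 * patch does throughout conntrack.c. */
static bool
conn_is_alg_related(const struct conn *conn)
{
    return conn->ext_alg && conn->ext_alg->alg_related;
}

The trade-off is an extra pointer dereference on the NAT and ALG paths in exchange for a smaller 'struct conn' in the common case.
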
Signed-off-by: Darrell Ball --- lib/conntrack-private.h | 29 +++++--- lib/conntrack.c | 189 ++++++++++++++++++++++++++++-------------------- 2 files changed, 127 insertions(+), 91 deletions(-) diff --git a/lib/conntrack-private.h b/lib/conntrack-private.h index ac891cc..20d2d78 100644 --- a/lib/conntrack-private.h +++ b/lib/conntrack-private.h @@ -81,19 +81,31 @@ struct alg_exp_node { bool nat_rpl_dst; }; +struct conn_ext_nat { + struct nat_action_info_t *nat_info; + struct conn *nat_conn; +}; + +struct conn_ext_alg { + char *alg; + /* TCP sequence skew direction due to NATTing of FTP control messages; + * true if reply direction. */ + int seq_skew; + struct conn_key master_key; + /* True if alg data connection. */ + bool alg_related; + bool seq_skew_dir; +}; + struct conn { struct conn_key key; struct conn_key rev_key; - /* Only used for orig_tuple support. */ - struct conn_key master_key; long long expiration; struct ovs_list exp_node; struct cmap_node cm_node; ovs_u128 label; - struct nat_action_info_t *nat_info; - char *alg; - struct conn *nat_conn; - int seq_skew; + struct conn_ext_nat *ext_nat; + struct conn_ext_alg *ext_alg; uint32_t mark; /* See ct_conn_type. */ uint8_t conn_type; @@ -101,11 +113,6 @@ struct conn { * This field is used to signal an update to the specified list. The * value 'NO_UPD_EXP_LIST' is used to indicate no update to any list. */ uint8_t exp_list_id; - /* TCP sequence skew direction due to NATTing of FTP control messages; - * true if reply direction. */ - bool seq_skew_dir; - /* True if alg data connection. */ - bool alg_related; /* Inserted into the cmap; handle theoretical expiry list race; although * such a race would probably mean a system meltdown. */ bool inserted; diff --git a/lib/conntrack.c b/lib/conntrack.c index c47a0b0..9d10b14 100644 --- a/lib/conntrack.c +++ b/lib/conntrack.c @@ -399,16 +399,16 @@ conn_clean(struct conn *conn) { ovs_assert(conn->conn_type == CT_CONN_TYPE_DEFAULT); - if (conn->alg) { + if (conn->ext_alg && conn->ext_alg->alg) { expectation_clean(&conn->key); } uint32_t hash = conn_key_hash(&conn->key, hash_basis); cmap_remove(&cm_conns, &conn->cm_node, hash); ovs_list_remove(&conn->exp_node); - if (conn->nat_conn) { - hash = conn_key_hash(&conn->nat_conn->key, hash_basis); - cmap_remove(&cm_conns, &conn->nat_conn->cm_node, hash); + if (conn->ext_nat && conn->ext_nat->nat_conn) { + hash = conn_key_hash(&conn->ext_nat->nat_conn->key, hash_basis); + cmap_remove(&cm_conns, &conn->ext_nat->nat_conn->cm_node, hash); } ovsrcu_postpone(delete_conn, conn); atomic_count_dec(&n_conn); @@ -418,7 +418,7 @@ static void conn_clean_one(struct conn *conn) OVS_NO_THREAD_SAFETY_ANALYSIS { - if (conn->alg) { + if (conn->ext_alg && conn->ext_alg->alg) { expectation_clean(&conn->key); } @@ -473,8 +473,8 @@ write_ct_md(struct dp_packet *pkt, uint16_t zone, const struct conn *conn, /* Use the original direction tuple if we have it. */ if (conn) { - if (conn->alg_related) { - key = &conn->master_key; + if (conn->ext_alg && conn->ext_alg->alg_related) { + key = &conn->ext_alg->master_key; } else { key = &conn->key; } @@ -615,7 +615,8 @@ handle_alg_ctl(const struct conn_lookup_ctx *ctx, struct dp_packet *pkt, long long now, bool nat) { /* ALG control packet handling with expectation creation. 
*/ - if (OVS_UNLIKELY(alg_helpers[ct_alg_ctl] && conn && conn->alg)) { + if (OVS_UNLIKELY(alg_helpers[ct_alg_ctl] && conn && conn->ext_alg && + conn->ext_alg->alg)) { conn_entry_lock(conn); alg_helpers[ct_alg_ctl](ctx, pkt, conn, now, CT_FTP_CTL_INTEREST, nat); @@ -626,7 +627,7 @@ handle_alg_ctl(const struct conn_lookup_ctx *ctx, struct dp_packet *pkt, static void pat_packet(struct dp_packet *pkt, const struct conn *conn) { - if (conn->nat_info->nat_action & NAT_ACTION_SRC) { + if (conn->ext_nat->nat_info->nat_action & NAT_ACTION_SRC) { if (conn->key.nw_proto == IPPROTO_TCP) { struct tcp_header *th = dp_packet_l4(pkt); packet_set_tcp_port(pkt, conn->rev_key.dst.port, th->tcp_dst); @@ -634,7 +635,7 @@ pat_packet(struct dp_packet *pkt, const struct conn *conn) struct udp_header *uh = dp_packet_l4(pkt); packet_set_udp_port(pkt, conn->rev_key.dst.port, uh->udp_dst); } - } else if (conn->nat_info->nat_action & NAT_ACTION_DST) { + } else if (conn->ext_nat->nat_info->nat_action & NAT_ACTION_DST) { if (conn->key.nw_proto == IPPROTO_TCP) { struct tcp_header *th = dp_packet_l4(pkt); packet_set_tcp_port(pkt, th->tcp_src, conn->rev_key.src.port); @@ -648,7 +649,7 @@ pat_packet(struct dp_packet *pkt, const struct conn *conn) static void nat_packet(struct dp_packet *pkt, const struct conn *conn, bool related) { - if (conn->nat_info->nat_action & NAT_ACTION_SRC) { + if (conn->ext_nat->nat_info->nat_action & NAT_ACTION_SRC) { pkt->md.ct_state |= CS_SRC_NAT; if (conn->key.dl_type == htons(ETH_TYPE_IP)) { struct ip_header *nh = dp_packet_l3(pkt); @@ -664,7 +665,7 @@ nat_packet(struct dp_packet *pkt, const struct conn *conn, bool related) if (!related) { pat_packet(pkt, conn); } - } else if (conn->nat_info->nat_action & NAT_ACTION_DST) { + } else if (conn->ext_nat->nat_info->nat_action & NAT_ACTION_DST) { pkt->md.ct_state |= CS_DST_NAT; if (conn->key.dl_type == htons(ETH_TYPE_IP)) { struct ip_header *nh = dp_packet_l3(pkt); @@ -686,7 +687,7 @@ nat_packet(struct dp_packet *pkt, const struct conn *conn, bool related) static void un_pat_packet(struct dp_packet *pkt, const struct conn *conn) { - if (conn->nat_info->nat_action & NAT_ACTION_SRC) { + if (conn->ext_nat->nat_info->nat_action & NAT_ACTION_SRC) { if (conn->key.nw_proto == IPPROTO_TCP) { struct tcp_header *th = dp_packet_l4(pkt); packet_set_tcp_port(pkt, th->tcp_src, conn->key.src.port); @@ -694,7 +695,7 @@ un_pat_packet(struct dp_packet *pkt, const struct conn *conn) struct udp_header *uh = dp_packet_l4(pkt); packet_set_udp_port(pkt, uh->udp_src, conn->key.src.port); } - } else if (conn->nat_info->nat_action & NAT_ACTION_DST) { + } else if (conn->ext_nat->nat_info->nat_action & NAT_ACTION_DST) { if (conn->key.nw_proto == IPPROTO_TCP) { struct tcp_header *th = dp_packet_l4(pkt); packet_set_tcp_port(pkt, conn->key.dst.port, th->tcp_dst); @@ -708,7 +709,7 @@ un_pat_packet(struct dp_packet *pkt, const struct conn *conn) static void reverse_pat_packet(struct dp_packet *pkt, const struct conn *conn) { - if (conn->nat_info->nat_action & NAT_ACTION_SRC) { + if (conn->ext_nat->nat_info->nat_action & NAT_ACTION_SRC) { if (conn->key.nw_proto == IPPROTO_TCP) { struct tcp_header *th_in = dp_packet_l4(pkt); packet_set_tcp_port(pkt, conn->key.src.port, @@ -718,7 +719,7 @@ reverse_pat_packet(struct dp_packet *pkt, const struct conn *conn) packet_set_udp_port(pkt, conn->key.src.port, uh_in->udp_dst); } - } else if (conn->nat_info->nat_action & NAT_ACTION_DST) { + } else if (conn->ext_nat->nat_info->nat_action & NAT_ACTION_DST) { if (conn->key.nw_proto == IPPROTO_TCP) { 
struct tcp_header *th_in = dp_packet_l4(pkt); packet_set_tcp_port(pkt, th_in->tcp_src, @@ -750,10 +751,10 @@ reverse_nat_packet(struct dp_packet *pkt, const struct conn *conn) pkt->l3_ofs += (char *) inner_l3 - (char *) nh; pkt->l4_ofs += inner_l4 - (char *) icmp; - if (conn->nat_info->nat_action & NAT_ACTION_SRC) { + if (conn->ext_nat->nat_info->nat_action & NAT_ACTION_SRC) { packet_set_ipv4_addr(pkt, &inner_l3->ip_src, conn->key.src.addr.ipv4_aligned); - } else if (conn->nat_info->nat_action & NAT_ACTION_DST) { + } else if (conn->ext_nat->nat_info->nat_action & NAT_ACTION_DST) { packet_set_ipv4_addr(pkt, &inner_l3->ip_dst, conn->key.dst.addr.ipv4_aligned); } @@ -772,12 +773,12 @@ reverse_nat_packet(struct dp_packet *pkt, const struct conn *conn) pkt->l3_ofs += (char *) inner_l3_6 - (char *) nh6; pkt->l4_ofs += inner_l4 - (char *) icmp6; - if (conn->nat_info->nat_action & NAT_ACTION_SRC) { + if (conn->ext_nat->nat_info->nat_action & NAT_ACTION_SRC) { packet_set_ipv6_addr(pkt, conn->key.nw_proto, inner_l3_6->ip6_src.be32, &conn->key.src.addr.ipv6_aligned, true); - } else if (conn->nat_info->nat_action & NAT_ACTION_DST) { + } else if (conn->ext_nat->nat_info->nat_action & NAT_ACTION_DST) { packet_set_ipv6_addr(pkt, conn->key.nw_proto, inner_l3_6->ip6_dst.be32, &conn->key.dst.addr.ipv6_aligned, @@ -797,7 +798,7 @@ static void un_nat_packet(struct dp_packet *pkt, const struct conn *conn, bool related) { - if (conn->nat_info->nat_action & NAT_ACTION_SRC) { + if (conn->ext_nat->nat_info->nat_action & NAT_ACTION_SRC) { pkt->md.ct_state |= CS_DST_NAT; if (conn->key.dl_type == htons(ETH_TYPE_IP)) { struct ip_header *nh = dp_packet_l3(pkt); @@ -815,7 +816,7 @@ un_nat_packet(struct dp_packet *pkt, const struct conn *conn, } else { un_pat_packet(pkt, conn); } - } else if (conn->nat_info->nat_action & NAT_ACTION_DST) { + } else if (conn->ext_nat->nat_info->nat_action & NAT_ACTION_DST) { pkt->md.ct_state |= CS_SRC_NAT; if (conn->key.dl_type == htons(ETH_TYPE_IP)) { struct ip_header *nh = dp_packet_l3(pkt); @@ -846,8 +847,8 @@ conn_seq_skew_set(const struct conn_key *key, long long now, int seq_skew, conn_key_lookup(key, hash, now, &conn, &reply); if (conn && seq_skew) { - conn->seq_skew = seq_skew; - conn->seq_skew_dir = seq_skew_dir; + conn->ext_alg->seq_skew = seq_skew; + conn->ext_alg->seq_skew_dir = seq_skew_dir; } } @@ -906,29 +907,38 @@ conn_not_found(struct dp_packet *pkt, struct conn_lookup_ctx *ctx, nc->rev_key = nc->key; conn_key_reverse(&nc->rev_key); - if (ct_verify_helper(helper, ct_alg_ctl)) { - nc->alg = nullable_xstrdup(helper); + if (alg_exp || helper || ct_alg_ctl != CT_ALG_CTL_NONE) { + nc->ext_alg = xzalloc(sizeof *nc->ext_alg); + } + + if (nat_action_info) { + nc->ext_nat = xzalloc(sizeof *nc->ext_nat); + } + + if (nc->ext_alg && ct_verify_helper(helper, ct_alg_ctl)) { + nc->ext_alg->alg = nullable_xstrdup(helper); } if (alg_exp) { - nc->alg_related = true; + nc->ext_alg->alg_related = true; nc->mark = alg_exp->master_mark; nc->label = alg_exp->master_label; - nc->master_key = alg_exp->master_key; + nc->ext_alg->master_key = alg_exp->master_key; } if (nat_action_info) { - nc->nat_info = xmemdup(nat_action_info, sizeof *nc->nat_info); + nc->ext_nat->nat_info = xmemdup(nat_action_info, + sizeof *nc->ext_nat->nat_info); nat_conn = xzalloc(sizeof *nat_conn); if (alg_exp) { if (alg_exp->nat_rpl_dst) { nc->rev_key.dst.addr = alg_exp->alg_nat_repl_addr; - nc->nat_info->nat_action = NAT_ACTION_SRC; + nc->ext_nat->nat_info->nat_action = NAT_ACTION_SRC; } else { nc->rev_key.src.addr = 
alg_exp->alg_nat_repl_addr; - nc->nat_info->nat_action = NAT_ACTION_DST; + nc->ext_nat->nat_info->nat_action = NAT_ACTION_DST; } *nat_conn = *nc; } else { @@ -942,20 +952,19 @@ conn_not_found(struct dp_packet *pkt, struct conn_lookup_ctx *ctx, /* Update nc with nat adjustments. */ *nc = *nat_conn; } + nc->ext_nat->nat_conn = nat_conn; nat_packet(pkt, nc, ctx->icmp_related); nat_conn->key = nc->rev_key; nat_conn->rev_key = nc->key; nat_conn->conn_type = CT_CONN_TYPE_UN_NAT; - nat_conn->nat_info = NULL; - nat_conn->alg = NULL; - nat_conn->nat_conn = NULL; + nat_conn->ext_nat = NULL; + nat_conn->ext_alg = NULL; uint32_t nat_hash = conn_key_hash(&nat_conn->key, hash_basis); cmap_insert(&cm_conns, &nat_conn->cm_node, nat_hash); } - nc->nat_conn = nat_conn; nc->conn_type = CT_CONN_TYPE_DEFAULT; cmap_insert(&cm_conns, &nc->cm_node, ctx->hash); nc->inserted = true; @@ -993,7 +1002,7 @@ conn_update_state(struct dp_packet *pkt, struct conn_lookup_ctx *ctx, pkt->md.ct_state |= CS_REPLY_DIR; } } else { - if (conn->alg_related) { + if (conn->ext_alg && conn->ext_alg->alg_related) { pkt->md.ct_state |= CS_RELATED; } @@ -1027,7 +1036,7 @@ static void handle_nat(struct dp_packet *pkt, struct conn *conn, uint16_t zone, bool reply, bool related) { - if (conn->nat_info && + if (conn->ext_nat && conn->ext_nat->nat_info && (!(pkt->md.ct_state & (CS_SRC_NAT | CS_DST_NAT)) || (pkt->md.ct_state & (CS_SRC_NAT | CS_DST_NAT) && zone != pkt->md.ct_zone))) { @@ -1112,7 +1121,7 @@ conn_update_state_alg(struct dp_packet *pkt, struct conn_lookup_ctx *ctx, /* Keep sequence tracking in sync with the source of the * sequence skew. */ conn_entry_lock(conn); - if (ctx->reply != conn->seq_skew_dir) { + if (ctx->reply != conn->ext_alg->seq_skew_dir) { handle_ftp_ctl(ctx, pkt, conn, now, CT_FTP_CTL_OTHER, !!nat_action_info); /* conn_update_state locks for unrelated fields, so unlock. 
*/ @@ -1275,7 +1284,7 @@ conntrack_clear(struct dp_packet *packet) static void set_mark(struct dp_packet *pkt, struct conn *conn, uint32_t val, uint32_t mask) { - if (conn->alg_related) { + if (conn->ext_alg && conn->ext_alg->alg_related) { pkt->md.ct_mark = conn->mark; } else { pkt->md.ct_mark = val | (pkt->md.ct_mark & ~(mask)); @@ -1288,7 +1297,7 @@ set_label(struct dp_packet *pkt, struct conn *conn, const struct ovs_key_ct_labels *val, const struct ovs_key_ct_labels *mask) { - if (conn->alg_related) { + if (conn->ext_alg && conn->ext_alg->alg_related) { pkt->md.ct_label = conn->label; } else { ovs_u128 v, m; @@ -2012,11 +2021,11 @@ nat_range_hash(const struct conn *conn, uint32_t basis) { uint32_t hash = basis; - hash = ct_addr_hash_add(hash, &conn->nat_info->min_addr); - hash = ct_addr_hash_add(hash, &conn->nat_info->max_addr); + hash = ct_addr_hash_add(hash, &conn->ext_nat->nat_info->min_addr); + hash = ct_addr_hash_add(hash, &conn->ext_nat->nat_info->max_addr); hash = hash_add(hash, - (conn->nat_info->max_port << 16) - | conn->nat_info->min_port); + (conn->ext_nat->nat_info->max_port << 16) + | conn->ext_nat->nat_info->min_port); hash = ct_endpoint_hash_add(hash, &conn->key.src); hash = ct_endpoint_hash_add(hash, &conn->key.dst); hash = hash_add(hash, (OVS_FORCE uint32_t) conn->key.dl_type); @@ -2040,22 +2049,24 @@ nat_select_range_tuple(const struct conn *conn, struct conn *nat_conn) uint16_t first_port; uint32_t hash = nat_range_hash(conn, hash_basis); - if ((conn->nat_info->nat_action & NAT_ACTION_SRC) && - (!(conn->nat_info->nat_action & NAT_ACTION_SRC_PORT))) { + if ((conn->ext_nat->nat_info->nat_action & NAT_ACTION_SRC) && + (!(conn->ext_nat->nat_info->nat_action & NAT_ACTION_SRC_PORT))) { min_port = ntohs(conn->key.src.port); max_port = ntohs(conn->key.src.port); first_port = min_port; - } else if ((conn->nat_info->nat_action & NAT_ACTION_DST) && - (!(conn->nat_info->nat_action & NAT_ACTION_DST_PORT))) { + } else if ((conn->ext_nat->nat_info->nat_action & NAT_ACTION_DST) && + (!(conn->ext_nat->nat_info->nat_action + & NAT_ACTION_DST_PORT))) { min_port = ntohs(conn->key.dst.port); max_port = ntohs(conn->key.dst.port); first_port = min_port; } else { - uint16_t deltap = conn->nat_info->max_port - conn->nat_info->min_port; + uint16_t deltap = conn->ext_nat->nat_info->max_port - + conn->ext_nat->nat_info->min_port; uint32_t port_index = hash % (deltap + 1); - first_port = conn->nat_info->min_port + port_index; - min_port = conn->nat_info->min_port; - max_port = conn->nat_info->max_port; + first_port = conn->ext_nat->nat_info->min_port + port_index; + min_port = conn->ext_nat->nat_info->min_port; + max_port = conn->ext_nat->nat_info->max_port; } uint32_t deltaa = 0; @@ -2064,37 +2075,39 @@ nat_select_range_tuple(const struct conn *conn, struct conn *nat_conn) memset(&ct_addr, 0, sizeof ct_addr); struct ct_addr max_ct_addr; memset(&max_ct_addr, 0, sizeof max_ct_addr); - max_ct_addr = conn->nat_info->max_addr; + max_ct_addr = conn->ext_nat->nat_info->max_addr; if (conn->key.dl_type == htons(ETH_TYPE_IP)) { - deltaa = ntohl(conn->nat_info->max_addr.ipv4_aligned) - - ntohl(conn->nat_info->min_addr.ipv4_aligned); + deltaa = ntohl(conn->ext_nat->nat_info->max_addr.ipv4_aligned) - + ntohl(conn->ext_nat->nat_info->min_addr.ipv4_aligned); address_index = hash % (deltaa + 1); ct_addr.ipv4_aligned = htonl( - ntohl(conn->nat_info->min_addr.ipv4_aligned) + address_index); + ntohl(conn->ext_nat->nat_info->min_addr.ipv4_aligned) + + address_index); } else { - deltaa = 
nat_ipv6_addrs_delta(&conn->nat_info->min_addr.ipv6_aligned, - &conn->nat_info->max_addr.ipv6_aligned); + deltaa = nat_ipv6_addrs_delta( + &conn->ext_nat->nat_info->min_addr.ipv6_aligned, + &conn->ext_nat->nat_info->max_addr.ipv6_aligned); /* deltaa must be within 32 bits for full hash coverage. A 64 or * 128 bit hash is unnecessary and hence not used here. Most code * is kept common with V4; nat_ipv6_addrs_delta() will do the * enforcement via max_ct_addr. */ - max_ct_addr = conn->nat_info->min_addr; + max_ct_addr = conn->ext_nat->nat_info->min_addr; nat_ipv6_addr_increment(&max_ct_addr.ipv6_aligned, deltaa); address_index = hash % (deltaa + 1); - ct_addr.ipv6_aligned = conn->nat_info->min_addr.ipv6_aligned; + ct_addr.ipv6_aligned = conn->ext_nat->nat_info->min_addr.ipv6_aligned; nat_ipv6_addr_increment(&ct_addr.ipv6_aligned, address_index); } uint16_t port = first_port; bool all_ports_tried = false; /* For DNAT, we don't use ephemeral ports. */ - bool ephemeral_ports_tried = conn->nat_info->nat_action & NAT_ACTION_DST - ? true : false; + bool ephemeral_ports_tried = + conn->ext_nat->nat_info->nat_action & NAT_ACTION_DST ? true : false; struct ct_addr first_addr = ct_addr; while (true) { - if (conn->nat_info->nat_action & NAT_ACTION_SRC) { + if (conn->ext_nat->nat_info->nat_action & NAT_ACTION_SRC) { nat_conn->rev_key.dst.addr = ct_addr; } else { nat_conn->rev_key.src.addr = ct_addr; @@ -2103,7 +2116,7 @@ nat_select_range_tuple(const struct conn *conn, struct conn *nat_conn) if ((conn->key.nw_proto == IPPROTO_ICMP) || (conn->key.nw_proto == IPPROTO_ICMPV6)) { all_ports_tried = true; - } else if (conn->nat_info->nat_action & NAT_ACTION_SRC) { + } else if (conn->ext_nat->nat_info->nat_action & NAT_ACTION_SRC) { nat_conn->rev_key.dst.port = htons(port); } else { nat_conn->rev_key.src.port = htons(port); @@ -2135,14 +2148,14 @@ nat_select_range_tuple(const struct conn *conn, struct conn *nat_conn) nat_ipv6_addr_increment(&ct_addr.ipv6_aligned, 1); } } else { - ct_addr = conn->nat_info->min_addr; + ct_addr = conn->ext_nat->nat_info->min_addr; } if (!memcmp(&ct_addr, &first_addr, sizeof ct_addr)) { if (ephemeral_ports_tried) { break; } else { ephemeral_ports_tried = true; - ct_addr = conn->nat_info->min_addr; + ct_addr = conn->ext_nat->nat_info->min_addr; first_addr = ct_addr; min_port = MIN_NAT_EPHEMERAL_PORT; max_port = MAX_NAT_EPHEMERAL_PORT; @@ -2193,16 +2206,26 @@ conn_update(struct dp_packet *pkt, struct conn *conn, } static void +delete_conn_cmn(struct conn *conn) +{ + free(conn->ext_alg); + free(conn->ext_nat); + free(conn); +} + +static void delete_conn(struct conn *conn) { if (conn->conn_type == CT_CONN_TYPE_DEFAULT) { - free(conn->nat_info); - free(conn->alg); - conn_entry_destroy(conn); - if (conn->nat_conn) { - free(conn->nat_conn); + if (conn->ext_nat) { + free(conn->ext_nat->nat_info); + free(conn->ext_nat->nat_conn); } - free(conn); + if (conn->ext_alg) { + free(conn->ext_alg->alg); + } + conn_entry_destroy(conn); + delete_conn_cmn(conn); } } @@ -2210,11 +2233,15 @@ static void delete_conn_one(struct conn *conn) { if (conn->conn_type == CT_CONN_TYPE_DEFAULT) { - free(conn->nat_info); - free(conn->alg); + if (conn->ext_nat) { + free(conn->ext_nat->nat_info); + } + if (conn->ext_alg) { + free(conn->ext_alg->alg); + } conn_entry_destroy(conn); } - free(conn); + delete_conn_cmn(conn); } /* Convert a conntrack address 'a' into an IP address 'b' based on 'dl_type'. 
@@ -2328,9 +2355,11 @@ conn_to_ct_dpif_entry(const struct conn *conn, struct ct_dpif_entry *entry, entry->bkt = bkt; - if (conn->alg) { + if (conn->ext_alg && conn->ext_alg->alg) { /* Caller is responsible for freeing. */ - entry->helper.name = xstrdup(conn->alg); + entry->helper.name = xstrdup(conn->ext_alg->alg); + } else { + entry->helper.name = NULL; } } @@ -3016,7 +3045,7 @@ handle_ftp_ctl(const struct conn_lookup_ctx *ctx, struct dp_packet *pkt, return; } - if (!nat || !conn_for_expectation->seq_skew) { + if (!nat || !conn_for_expectation->ext_alg->seq_skew) { do_seq_skew_adj = false; } @@ -3024,7 +3053,7 @@ handle_ftp_ctl(const struct conn_lookup_ctx *ctx, struct dp_packet *pkt, int64_t seq_skew = 0; if (ftp_ctl == CT_FTP_CTL_OTHER) { - seq_skew = conn_for_expectation->seq_skew; + seq_skew = conn_for_expectation->ext_alg->seq_skew; } else if (ftp_ctl == CT_FTP_CTL_INTEREST) { enum ftp_ctl_pkt rc; if (ctx->key.dl_type == htons(ETH_TYPE_IPV6)) { @@ -3079,7 +3108,7 @@ handle_ftp_ctl(const struct conn_lookup_ctx *ctx, struct dp_packet *pkt, struct tcp_header *th = dp_packet_l4(pkt); if (do_seq_skew_adj && seq_skew != 0) { - if (ctx->reply != conn_for_expectation->seq_skew_dir) { + if (ctx->reply != conn_for_expectation->ext_alg->seq_skew_dir) { uint32_t tcp_ack = ntohl(get_16aligned_be32(&th->tcp_ack)); From patchwork Wed Nov 28 16:31:54 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Darrell Ball X-Patchwork-Id: 1004629 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (mailfrom) smtp.mailfrom=openvswitch.org (client-ip=140.211.169.12; helo=mail.linuxfoundation.org; envelope-from=ovs-dev-bounces@openvswitch.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=fail reason="signature verification failed" (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.b="ft6wi6WK"; dkim-atps=neutral Received: from mail.linuxfoundation.org (mail.linuxfoundation.org [140.211.169.12]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 434mWG5n3Wz9s2P for ; Thu, 29 Nov 2018 03:36:26 +1100 (AEDT) Received: from mail.linux-foundation.org (localhost [127.0.0.1]) by mail.linuxfoundation.org (Postfix) with ESMTP id DF50DC64; Wed, 28 Nov 2018 16:32:16 +0000 (UTC) X-Original-To: dev@openvswitch.org Delivered-To: ovs-dev@mail.linuxfoundation.org Received: from smtp1.linuxfoundation.org (smtp1.linux-foundation.org [172.17.192.35]) by mail.linuxfoundation.org (Postfix) with ESMTPS id 141A4B5F for ; Wed, 28 Nov 2018 16:32:15 +0000 (UTC) X-Greylist: whitelisted by SQLgrey-1.7.6 Received: from mail-pl1-f195.google.com (mail-pl1-f195.google.com [209.85.214.195]) by smtp1.linuxfoundation.org (Postfix) with ESMTPS id AA095712 for ; Wed, 28 Nov 2018 16:32:14 +0000 (UTC) Received: by mail-pl1-f195.google.com with SMTP id x21-v6so17616583pln.9 for ; Wed, 28 Nov 2018 08:32:14 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:subject:date:message-id:in-reply-to:references; bh=HF2I0tjvm6XIhTThhn8Uz5qXp0JtAXaWDgV+9GDu9pA=; b=ft6wi6WK6qZ5uqCGRVRtRU4rqTHggIPGjO67+tBqS8c5bxjS7sXXD2mkXJQhOjrfjA NlOv806Sz7iQL8BtGBFJiAhJWqELpLsoDyI2KvIYryOCeF1H8kP3NlM6If5u/8N7TSkh 
ceYaJM/Iu6FJc7NcWKmHoyV2LHNdd40dKvxqsBhnyoIffJQPgGLfHFT44M+NF534FPQl roLaPsm5TnoycblWMb8TdBZeo2R+7a0pzD3ZofWo7yQxGs4Z04gn01My2Ou2VGQWVB4q 7kaINaz6y+C3KA8hsDHSK1qi1U3a9cgR+Nw/OA686Dm2UpfDL7vytuwbTaY0Evklbt1T BVSQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to :references; bh=HF2I0tjvm6XIhTThhn8Uz5qXp0JtAXaWDgV+9GDu9pA=; b=bE+hcefVJLzOrz4Od2VKjhoKpbnfF7vgQD0mUWdv/duin6ICYpxSACa7JFN44lKYWZ x0WLQe/zf2SskTDELjnaIcBJLJQOq9K3L3CIw1a/SP44b4W0GSPUAxBFdxMDZ8ijnxfq 57H9cdrYBMgdUkG/PWYGpCuUbSA0Xs6G98kvFir+Jz900EntlsDFq47iy3a16fZ3rtrv T7b8EbSJahxtyELoTIirwwGfC44781mBRINaxANLYPUb3cdK3dKlQq+UyEUa6OThe22Y gMWiDZWMOUgrTrYEm2IYk0j62LzH53X7mPr0uEmxPRetJ1SjJvte8xYilpWzpO6lh3R/ 8r0A== X-Gm-Message-State: AA+aEWZqRfpB2maOLWZyJlEARzy45DXLZ7Gqde4/v3U70/gi/tJYUb8x gIpROj5DdWzkkmHeU6lvZ30= X-Google-Smtp-Source: AFSGD/Wo5Vj46toivHVcC5yc2O4NNU46kwWWYPWU8XpB5IL9dmlBEMFE8Igy973jx2VYP6NCK7F4CA== X-Received: by 2002:a17:902:6b0c:: with SMTP id o12mr38100866plk.291.1543422734276; Wed, 28 Nov 2018 08:32:14 -0800 (PST) Received: from ubuntu.localdomain (c-76-102-76-212.hsd1.ca.comcast.net. [76.102.76.212]) by smtp.gmail.com with ESMTPSA id v9sm1201512pfg.144.2018.11.28.08.32.13 (version=TLS1_2 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Wed, 28 Nov 2018 08:32:13 -0800 (PST) From: Darrell Ball To: dlu998@gmail.com, dev@openvswitch.org Date: Wed, 28 Nov 2018 08:31:54 -0800 Message-Id: <1543422714-100901-6-git-send-email-dlu998@gmail.com> X-Mailer: git-send-email 1.9.1 In-Reply-To: <1543422714-100901-1-git-send-email-dlu998@gmail.com> References: <1543422714-100901-1-git-send-email-dlu998@gmail.com> X-Spam-Status: No, score=-1.7 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_ENVFROM_END_DIGIT,FREEMAIL_FROM, RCVD_IN_DNSWL_NONE autolearn=no version=3.3.1 X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on smtp1.linux-foundation.org Subject: [ovs-dev] [patch v2 5/5] conntrack: Optimize recirculations. X-BeenThere: ovs-dev@openvswitch.org X-Mailman-Version: 2.1.12 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , MIME-Version: 1.0 Sender: ovs-dev-bounces@openvswitch.org Errors-To: ovs-dev-bounces@openvswitch.org In most cases, recirculations through conntrack can be much less costly. 
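The mechanism, roughly (a minimal sketch with stand-in types; the real fields are added to 'struct pkt_metadata' in lib/packets.h and the fast path lives in conntrack.c): when a packet takes the full lookup path and no NAT rewrite or ALG handling was involved, the connection pointer and direction flags are cached in the packet metadata; when the packet recirculates back into conntrack with the same zone and without force or commit, a fast path reuses the cached connection and skips key extraction and the cmap lookup entirely.

#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

struct conn { uint16_t zone; uint32_t mark; };   /* Stand-in. */

struct pkt_metadata {
    struct conn *conn;     /* Cached lookup result, or NULL. */
    bool reply;            /* Direction of the cached lookup. */
    bool icmp_related;
    uint16_t ct_zone;
    uint32_t ct_mark;
};

struct dp_packet { struct pkt_metadata md; };

/* Full path: key extraction, hash, cmap lookup, state update (elided). */
static void
process_one(struct dp_packet *pkt, uint16_t zone)
{
    (void) pkt;
    (void) zone;
}

/* Fast path: reuse the connection cached by a previous pass and just
 * refresh the connection-tracking metadata. */
static void
process_one_fast(struct dp_packet *pkt, uint16_t zone)
{
    struct conn *conn = pkt->md.conn;

    pkt->md.ct_zone = zone;
    pkt->md.ct_mark = conn->mark;
}

static void
conntrack_execute_sketch(struct dp_packet *pkt, uint16_t zone,
                         bool force, bool commit)
{
    struct conn *conn = pkt->md.conn;

    /* The cache is only trusted when nothing can change the connection:
     * no force, no commit, and the zone matches the cached lookup. */
    if (conn && !force && !commit && conn->zone == zone) {
        process_one_fast(pkt, zone);
    } else {
        process_one(pkt, zone);
    }
}

The cached pointer is cleared by conntrack_clear() and pkt_metadata_init(), and is not set (or is reset to NULL) whenever NAT or ALG processing touched the packet, so the fast path never acts on a connection whose packets still need per-pass rewriting.
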
Signed-off-by: Darrell Ball
---
 lib/conntrack.c | 50 +++++++++++++++++++++++++++++++++++++++++++++++---
 lib/packets.h   |  4 ++++
 2 files changed, 51 insertions(+), 3 deletions(-)

diff --git a/lib/conntrack.c b/lib/conntrack.c
index 9d10b14..f9c4d90 100644
--- a/lib/conntrack.c
+++ b/lib/conntrack.c
@@ -51,6 +51,7 @@ struct conn_lookup_ctx {
     uint32_t hash;
     bool reply;
     bool icmp_related;
+    bool nat;
 };
 
 enum ftp_ctl_pkt {
@@ -954,7 +955,7 @@ conn_not_found(struct dp_packet *pkt, struct conn_lookup_ctx *ctx,
         }
         nc->ext_nat->nat_conn = nat_conn;
         nat_packet(pkt, nc, ctx->icmp_related);
-
+        ctx->nat = true;
         nat_conn->key = nc->rev_key;
         nat_conn->rev_key = nc->key;
         nat_conn->conn_type = CT_CONN_TYPE_UN_NAT;
@@ -1193,6 +1194,7 @@ process_one(struct dp_packet *pkt, struct conn_lookup_ctx *ctx, uint16_t zone,
         }
         if (nat_action_info && !create_new_conn) {
             handle_nat(pkt, conn, zone, ctx->reply, ctx->icmp_related);
+            ctx->nat = true;
         }
     } else if (check_orig_tuple(pkt, ctx, now, &conn, nat_action_info)) {
         create_new_conn = conn_update_state(pkt, ctx, conn, now);
@@ -1237,6 +1239,39 @@ process_one(struct dp_packet *pkt, struct conn_lookup_ctx *ctx, uint16_t zone,
     }
 
     handle_alg_ctl(ctx, pkt, ct_alg_ctl, conn, now, !!nat_action_info);
+
+    if (!ctx->nat && ct_alg_ctl == CT_ALG_CTL_NONE) {
+        pkt->md.conn = conn;
+        pkt->md.reply = ctx->reply;
+        pkt->md.icmp_related = ctx->icmp_related;
+    } else {
+        pkt->md.conn = NULL;
+    }
+}
+
+static inline void
+process_one_fast(struct dp_packet *pkt, uint16_t zone,
+                 const uint32_t *setmark,
+                 const struct ovs_key_ct_labels *setlabel,
+                 const struct nat_action_info_t *nat_action_info,
+                 struct conn *conn)
+{
+    if (nat_action_info) {
+        handle_nat(pkt, conn, zone, pkt->md.reply, pkt->md.icmp_related);
+        pkt->md.conn = NULL;
+    }
+
+    pkt->md.ct_zone = zone;
+    pkt->md.ct_mark = conn->mark;
+    pkt->md.ct_label = conn->label;
+
+    if (setmark) {
+        set_mark(pkt, conn, setmark[0], setmark[1]);
+    }
+
+    if (setlabel) {
+        set_label(pkt, conn, &setlabel[0], &setlabel[1]);
+    }
 }
 
 /* Sends the packets in '*pkt_batch' through the connection tracker 'ct'. All
@@ -1260,8 +1295,16 @@ conntrack_execute(struct dp_packet_batch *pkt_batch, ovs_be16 dl_type,
     struct conn_lookup_ctx ctx;
 
     DP_PACKET_BATCH_FOR_EACH (i, packet, pkt_batch) {
-        if (packet->md.ct_state == CS_INVALID
-            || !conn_key_extract(packet, dl_type, &ctx, zone)) {
+        struct conn *conn = packet->md.conn;
+        if (OVS_UNLIKELY(packet->md.ct_state == CS_INVALID)) {
+            write_ct_md(packet, zone, NULL, NULL, NULL);
+            continue;
+        } else if (conn && !force && !commit && conn->key.zone == zone) {
+            process_one_fast(packet, zone, setmark, setlabel, nat_action_info,
+                             packet->md.conn);
+            continue;
+        } else if (OVS_UNLIKELY(!conn_key_extract(packet, dl_type, &ctx,
+                                                  zone))) {
             packet->md.ct_state = CS_INVALID;
             write_ct_md(packet, zone, NULL, NULL, NULL);
             continue;
@@ -1279,6 +1322,7 @@ conntrack_clear(struct dp_packet *packet)
     /* According to pkt_metadata_init(), ct_state == 0 is enough to make all of
      * the conntrack fields invalid. */
     packet->md.ct_state = 0;
+    packet->md.conn = NULL;
 }
 
 static void
diff --git a/lib/packets.h b/lib/packets.h
index 09a0ac3..a88d1ad 100644
--- a/lib/packets.h
+++ b/lib/packets.h
@@ -108,6 +108,9 @@ PADDED_MEMBERS_CACHELINE_MARKER(CACHE_LINE_SIZE, cacheline0,
         uint32_t ct_mark;           /* Connection mark. */
         ovs_u128 ct_label;          /* Connection label. */
         union flow_in_port in_port; /* Input port. */
+        void *conn;
+        bool reply;
+        bool icmp_related;
     );
 
     PADDED_MEMBERS_CACHELINE_MARKER(CACHE_LINE_SIZE, cacheline1,
@@ -157,6 +160,7 @@ pkt_metadata_init(struct pkt_metadata *md, odp_port_t port)
     md->tunnel.ip_dst = 0;
     md->tunnel.ipv6_dst = in6addr_any;
     md->in_port.odp_port = port;
+    md->conn = NULL;
 }
 
 /* This function prefetches the cachelines touched by pkt_metadata_init()
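Stripped of the OVS types, the pattern the two files implement together is
simple: a per-packet metadata slot caches the result of an expensive lookup,
the slot is cleared whenever the metadata is (re)initialized, and a later pass
over the same packet reuses the cached pointer instead of repeating the
lookup. A tiny stand-alone sketch of that pattern, using invented names rather
than the actual OVS structures:

    #include <stddef.h>
    #include <stdio.h>

    struct demo_conn { unsigned zone; };
    struct demo_md { struct demo_conn *conn; };     /* per-packet metadata */

    static struct demo_conn the_conn = { .zone = 1 };

    /* Slow path: do the full lookup and cache the result in the metadata. */
    static struct demo_conn *
    lookup_slow(struct demo_md *md, unsigned zone)
    {
        printf("slow lookup in zone %u\n", zone);
        md->conn = &the_conn;
        return md->conn;
    }

    /* Fast path: reuse the cached pointer when it is valid for this zone. */
    static struct demo_conn *
    lookup(struct demo_md *md, unsigned zone)
    {
        if (md->conn && md->conn->zone == zone) {
            return md->conn;            /* recirculation: no lookup needed */
        }
        return lookup_slow(md, zone);
    }

    int
    main(void)
    {
        struct demo_md md = { .conn = NULL };   /* pkt_metadata_init() analogue */

        lookup(&md, 1);     /* first pass: slow lookup */
        lookup(&md, 1);     /* second pass (recirculation): cached pointer */
        return 0;
    }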