| Message ID | 1547804639-19033-1-git-send-email-lirongqing@baidu.com |
|---|---|
| State | Rejected |
| Series | [ovs-dev] conntrack: compute hash value once when calling conn_key_lookup |
Bleep bloop. Greetings Li RongQing, I am a robot and I have tried out your patch. Thanks for your contribution.

I encountered some error that I wasn't expecting. See the details below.

checkpatch:
WARNING: Unexpected sign-offs from developers who are not authors or co-authors or committers: Zhang Yu <zhangyu31@baidu.com>
Lines checked: 78, Warnings: 1, Errors: 0

Please check this out. If you feel there has been an error, please email aconole@bytheb.org

Thanks,
0-day Robot
Thanks for looking

On Fri, Jan 18, 2019 at 1:47 AM Li RongQing <lirongqing@baidu.com> wrote:
> It is expensive to compute the hash value, and when conn_lookup is
> called the hash value is computed twice; in fact, we can cache it and
> pass it to conn_lookup.
>
> Signed-off-by: Zhang Yu <zhangyu31@baidu.com>
> Signed-off-by: Li RongQing <lirongqing@baidu.com>
> ---
>  lib/conntrack.c | 19 +++++++++++--------
>  1 file changed, 11 insertions(+), 8 deletions(-)
>
> diff --git a/lib/conntrack.c b/lib/conntrack.c
> index 11a1e05bd..f01c2ceac 100644
> --- a/lib/conntrack.c
> +++ b/lib/conntrack.c
> @@ -743,12 +743,13 @@ un_nat_packet(struct dp_packet *pkt, const struct conn *conn,
>   * and a hash would have already been needed. Hence, this function
>   * is just intended for code clarity. */
>  static struct conn *
> -conn_lookup(struct conntrack *ct, const struct conn_key *key, long long now)
> +conn_lookup(struct conntrack *ct, const struct conn_key *key, long long now,

The function comment says it all:

/* Typical usage of this helper is in non per-packet code;
 * this is because the bucket lock needs to be held for lookup
 * and a hash would have already been needed. Hence, this function
 * is just intended for code clarity. */

> +            uint32_t hash)
>  {
>      struct conn_lookup_ctx ctx;
>      ctx.conn = NULL;
>      ctx.key = *key;
> -    ctx.hash = conn_key_hash(key, ct->hash_basis);
> +    ctx.hash = hash;
>      unsigned bucket = hash_to_bucket(ctx.hash);
>      conn_key_lookup(&ct->buckets[bucket], &ctx, now);
>      return ctx.conn;
> @@ -758,9 +759,10 @@ static void
>  conn_seq_skew_set(struct conntrack *ct, const struct conn_key *key,
>                    long long now, int seq_skew, bool seq_skew_dir)
>  {
> -    unsigned bucket = hash_to_bucket(conn_key_hash(key, ct->hash_basis));
> +    uint32_t hash = conn_key_hash(key, ct->hash_basis);
> +    unsigned bucket = hash_to_bucket(hash);
>      ct_lock_lock(&ct->buckets[bucket].lock);
> -    struct conn *conn = conn_lookup(ct, key, now);
> +    struct conn *conn = conn_lookup(ct, key, now, hash);
>      if (conn && seq_skew) {
>          conn->seq_skew = seq_skew;
>          conn->seq_skew_dir = seq_skew_dir;
> @@ -777,12 +779,13 @@ nat_clean(struct conntrack *ct, struct conn *conn,
>      nat_conn_keys_remove(&ct->nat_conn_keys, &conn->rev_key, ct->hash_basis);
>      ct_rwlock_unlock(&ct->resources_lock);
>      ct_lock_unlock(&ctb->lock);
> -    unsigned bucket_rev_conn =
> -        hash_to_bucket(conn_key_hash(&conn->rev_key, ct->hash_basis));
> +
> +    uint32_t hash = conn_key_hash(&conn->rev_key, ct->hash_basis);
> +    unsigned bucket_rev_conn = hash_to_bucket(hash);
>      ct_lock_lock(&ct->buckets[bucket_rev_conn].lock);
>      ct_rwlock_wrlock(&ct->resources_lock);
>      long long now = time_msec();
> -    struct conn *rev_conn = conn_lookup(ct, &conn->rev_key, now);
> +    struct conn *rev_conn = conn_lookup(ct, &conn->rev_key, now, hash);
>      struct nat_conn_key_node *nat_conn_key_node =
>          nat_conn_keys_lookup(&ct->nat_conn_keys, &conn->rev_key,
>                               ct->hash_basis);
> @@ -1016,7 +1019,7 @@ create_un_nat_conn(struct conntrack *ct, struct conn *conn_for_un_nat_copy,
>      uint32_t un_nat_hash = conn_key_hash(&nc->key, ct->hash_basis);
>      unsigned un_nat_conn_bucket = hash_to_bucket(un_nat_hash);
>      ct_lock_lock(&ct->buckets[un_nat_conn_bucket].lock);
> -    struct conn *rev_conn = conn_lookup(ct, &nc->key, now);
> +    struct conn *rev_conn = conn_lookup(ct, &nc->key, now, un_nat_hash);
>
>      if (alg_un_nat) {
>          if (!rev_conn) {
> --
> 2.16.2
>
> _______________________________________________
> dev mailing list
> dev@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
From: Darrell Ball [mailto:dlu998@gmail.com]
Sent: February 1, 2019 16:25
To: Li,Rongqing <lirongqing@baidu.com>
Cc: ovs dev <dev@openvswitch.org>
Subject: Re: [ovs-dev] [PATCH] conntrack: compute hash value once when calling conn_key_lookup

> The function comment says it all:
>
> /* Typical usage of this helper is in non per-packet code;
>  * this is because the bucket lock needs to be held for lookup
>  * and a hash would have already been needed. Hence, this function
>  * is just intended for code clarity. */

Does it mean that the hash value will be changed? If so, should we release the original lock and take the lock for the new hash value?

Thanks

-RongQing
On Fri, Feb 1, 2019 at 12:49 AM Li,Rongqing <lirongqing@baidu.com> wrote:
>
> From: Darrell Ball [mailto:dlu998@gmail.com]
> Sent: February 1, 2019 16:25
> To: Li,Rongqing <lirongqing@baidu.com>
> Cc: ovs dev <dev@openvswitch.org>
> Subject: Re: [ovs-dev] [PATCH] conntrack: compute hash value once when
> calling conn_key_lookup
>
> > The function comment says it all:
> >
> > /* Typical usage of this helper is in non per-packet code;
> >  * this is because the bucket lock needs to be held for lookup
> >  * and a hash would have already been needed. Hence, this function
> >  * is just intended for code clarity. */
>
> Does it mean that the hash value will be changed?

No

> If so, should we release the original lock and take the lock for the
> new hash value?

Also, the full context is this: the function is also written with the intention to remove all the 'bucket' stuff in a future patchset (check the ML). It turns out the function will get deprecated in future.

> Thanks
>
> -RongQing