
[3/6] rhashtable: Move seed init into bucket_table_alloc

Message ID E1YWMLA-0000Bg-H9@gondolin.me.apana.org.au
State Changes Requested, archived
Delegated to: David Miller
Headers show

Commit Message

Herbert Xu March 13, 2015, 9:57 a.m. UTC
It seems that I have already made every rehash redo the random
seed even though my commit message indicated otherwise :)

Since we have already taken that step, this patch goes one step
further and moves the seed initialisation into bucket_table_alloc.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 lib/rhashtable.c |   16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Comments

Daniel Borkmann March 13, 2015, 10:03 a.m. UTC | #1
On 03/13/2015 10:57 AM, Herbert Xu wrote:
> It seems that I have already made every rehash redo the random
> seed even though my commit message indicated otherwise :)
>
> Since we have already taken that step, this patch goes one step
> further and moves the seed initialisation into bucket_table_alloc.
>
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

Acked-by: Daniel Borkmann <daniel@iogearbox.net>

I wanted to suggest the same. :) Thanks!
David Laight March 13, 2015, 11:33 a.m. UTC | #2
From: Herbert Xu
> It seems that I have already made every rehash redo the random
> seed even though my commit message indicated otherwise :)
> 
> Since we have already taken that step, this patch goes one step
> further and moves the seed initialisation into bucket_table_alloc.

If the decision to resize a hash table is based on the length
of the hash chains, then changing the hash function might either
make that unnecessary or make an immediate resize be needed.

Not that I'm convinced you can get a good enough hash function
to avoid one or two long chains when the hash table is large.
I've not played with the hashes used for network 'stuff', but
I remember playing with the elf symbol table hash. Whatever I
did, one or two chains would end up with more elements than
you might have hoped.

	David

Herbert Xu March 13, 2015, 11:40 a.m. UTC | #3
On Fri, Mar 13, 2015 at 11:33:39AM +0000, David Laight wrote:
>
> If the decision to resize a hash table is based on the length
> of the hash chains, then changing the hash function might either
> make that unnecessary or make an immediate resize be needed.

My plan is to trigger a delayed resize (as we do now) if we're
over 75%.

If we get a chain length above 4, we do an immediate resize if
we're already over 75% and the delayed resize hasn't started
yet; otherwise we do an immediate rehash with no change in the
hash table size.

Note that in my new scheme the rehash will consist of two distinct
parts.  The first part, executed immediately, will only involve
allocating the new hash table.

The hard work of moving entries across is always performed in the
kernel thread.

Cheers,
Thomas Graf March 13, 2015, 3:40 p.m. UTC | #4
On 03/13/15 at 08:57pm, Herbert Xu wrote:
> It seems that I have already made every rehash redo the random
> seed even though my commit message indicated otherwise :)
> 
> Since we have already taken that step, this patch goes one step
> further and moves the seed initialisation into bucket_table_alloc.
> 
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

Acked-by: Thomas Graf <tgraf@suug.ch>

Patch

diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 5d06cc2..e55bbc8 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -142,7 +142,7 @@  static void bucket_table_free(const struct bucket_table *tbl)
 }
 
 static struct bucket_table *bucket_table_alloc(struct rhashtable *ht,
-					       size_t nbuckets, u32 hash_rnd)
+					       size_t nbuckets)
 {
 	struct bucket_table *tbl = NULL;
 	size_t size;
@@ -158,7 +158,6 @@  static struct bucket_table *bucket_table_alloc(struct rhashtable *ht,
 
 	tbl->size = nbuckets;
 	tbl->shift = ilog2(nbuckets);
-	tbl->hash_rnd = hash_rnd;
 
 	if (alloc_bucket_locks(ht, tbl) < 0) {
 		bucket_table_free(tbl);
@@ -167,6 +166,8 @@  static struct bucket_table *bucket_table_alloc(struct rhashtable *ht,
 
 	INIT_LIST_HEAD(&tbl->walkers);
 
+	get_random_bytes(&tbl->hash_rnd, sizeof(tbl->hash_rnd));
+
 	for (i = 0; i < nbuckets; i++)
 		INIT_RHT_NULLS_HEAD(tbl->buckets[i], ht, i);
 
@@ -264,8 +265,6 @@  static void rhashtable_rehash(struct rhashtable *ht,
 	struct rhashtable_walker *walker;
 	unsigned old_hash;
 
-	get_random_bytes(&new_tbl->hash_rnd, sizeof(new_tbl->hash_rnd));
-
 	/* Make insertions go into the new, empty table right away. Deletions
 	 * and lookups will be attempted in both tables until we synchronize.
 	 * The synchronize_rcu() guarantees for the new table to be picked up
@@ -315,7 +314,7 @@  int rhashtable_expand(struct rhashtable *ht)
 
 	ASSERT_RHT_MUTEX(ht);
 
-	new_tbl = bucket_table_alloc(ht, old_tbl->size * 2, old_tbl->hash_rnd);
+	new_tbl = bucket_table_alloc(ht, old_tbl->size * 2);
 	if (new_tbl == NULL)
 		return -ENOMEM;
 
@@ -346,7 +345,7 @@  int rhashtable_shrink(struct rhashtable *ht)
 
 	ASSERT_RHT_MUTEX(ht);
 
-	new_tbl = bucket_table_alloc(ht, old_tbl->size / 2, old_tbl->hash_rnd);
+	new_tbl = bucket_table_alloc(ht, old_tbl->size / 2);
 	if (new_tbl == NULL)
 		return -ENOMEM;
 
@@ -926,7 +925,6 @@  int rhashtable_init(struct rhashtable *ht, struct rhashtable_params *params)
 {
 	struct bucket_table *tbl;
 	size_t size;
-	u32 hash_rnd;
 
 	size = HASH_DEFAULT_SIZE;
 
@@ -952,9 +950,7 @@  int rhashtable_init(struct rhashtable *ht, struct rhashtable_params *params)
 	else
 		ht->p.locks_mul = BUCKET_LOCKS_PER_CPU;
 
-	get_random_bytes(&hash_rnd, sizeof(hash_rnd));
-
-	tbl = bucket_table_alloc(ht, size, hash_rnd);
+	tbl = bucket_table_alloc(ht, size);
 	if (tbl == NULL)
 		return -ENOMEM;