difference in the benchmarks.
> Also, I suggest you change bsockets to something more appropriate, eg a
> percpu counter.
I thought about that first, but found that looping over every CPU and
summing the total number of allocated/freed sockets would have noticeably
bigger overhead than keeping a loosely maintained socket count.
For reference: this patch has nothing to do with the bug we are
discussing here. The proper fix (which does not need to move bsockets
around) was sent earlier; it forces the port selection codepath to
return an error when the new selection heuristic is not used.
@@ -134,7 +134,6 @@
struct inet_bind_hashbucket *bhash;
unsigned int bhash_size;
- int bsockets;
struct kmem_cache *bind_bucket_cachep;
@@ -150,6 +149,8 @@
struct inet_listen_hashbucket listening_hash[INET_LHTABLE_SIZE]
+ int bsockets ____cacheline_aligned_in_smp;