Patchwork 32 core net-next stack/netfilter "scaling"

Submitter Eric Dumazet
Date Jan. 27, 2009, 11:29 a.m.
Message ID <497EF030.10504@cosmosbay.com>
Permalink /patch/20446/
State RFC
Delegated to: David Miller

Comments

Eric Dumazet - Jan. 27, 2009, 11:29 a.m.
Patrick McHardy a écrit :
> Eric Dumazet wrote:
>> [PATCH] netfilter: Get rid of central rwlock in tcp conntracking
>>
>> TCP connection tracking suffers from huge contention on a global rwlock
>> used to protect TCP conntrack state.
>> Since TCP conntrack states are independent of one another, we can
>> switch to fine-grained locking, using one spinlock per "struct ip_ct_tcp".
>>
>> tcp_print_conntrack() doesn't need to lock anything to read
>> ct->proto.tcp.state,
>> so this speeds up /proc/net/ip_conntrack as well.
> 
> That's an interesting test-case, but one lock per conntrack just for
> TCP tracking seems like overkill. We're trying to keep the conntrack
> structures as small as possible, so I'd prefer an array of spinlocks
> or something like that.

Yes, this is wise. Current sizeof(struct nf_conn) is 220 (0xdc) on 32 bits,
probably rounded up to 0xE0 by SLAB/SLUB. I will provide a new patch using
an array of, say, 512 spinlocks. (512 spinlocks take 2048 bytes with
non-debug spinlocks, spread across 32 x 64-byte cache lines.)

However, I wonder whether for very large numbers of CPUs we should at least ask
conntrack to use a hardware-aligned "struct nf_conn" to avoid false sharing.

We might also add a generic SLAB_HWCACHE_ALIGN_IFMANYCPUS flag if the same
tactic could help other kmem_cache_create() users.



--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Patrick McHardy - Jan. 27, 2009, 11:37 a.m.
Eric Dumazet wrote:
> Patrick McHardy a écrit :
>   
>> That's an interesting test-case, but one lock per conntrack just for
>> TCP tracking seems like overkill. We're trying to keep the conntrack
>> structures as small as possible, so I'd prefer an array of spinlocks
>> or something like that.
>>     
>
> Yes, this is wise. Current sizeof(struct nf_conn) is 220 (0xdc) on 32 bits,
> probably rounded up to 0xE0 by SLAB/SLUB. I will provide a new patch using
> an array of, say, 512 spinlocks. (512 spinlocks take 2048 bytes with
> non-debug spinlocks, spread across 32 x 64-byte cache lines.)
>   

Sounds good, but it should be limited by NR_CPUS, I guess.
> However, I wonder whether for very large numbers of CPUs we should at least ask
> conntrack to use a hardware-aligned "struct nf_conn" to avoid false sharing.
>   

I'm not sure that is really a problem in practice; you usually have quite a
few inactive conntrack entries, and false sharing would only happen when two
consecutive entries are active.



Patch

diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
index 90ce9dd..82332ce 100644
--- a/net/netfilter/nf_conntrack_core.c
+++ b/net/netfilter/nf_conntrack_core.c
@@ -1167,8 +1167,10 @@  static int nf_conntrack_init_init_net(void)
 	       nf_conntrack_max);
 
 	nf_conntrack_cachep = kmem_cache_create("nf_conntrack",
-						sizeof(struct nf_conn),
-						0, 0, NULL);
+						sizeof(struct nf_conn), 0,
+						num_possible_cpus() >= 32 ?
+							SLAB_HWCACHE_ALIGN : 0,
+						NULL);
 	if (!nf_conntrack_cachep) {
 		printk(KERN_ERR "Unable to create nf_conn slab cache\n");
 		ret = -ENOMEM;