
speed regression in udp_lib_lport_inuse()

Message ID 4978EE03.9040207@cosmosbay.com
State RFC, archived
Delegated to: David Miller

Commit Message

Eric Dumazet Jan. 22, 2009, 10:06 p.m. UTC
Vitaly Mayatskikh wrote:
> Hello!
> 
> I found your latest patches w.r.t. udp port randomization really solve
> the "finding shortest chain kills randomness" problem, but
> significantly slow down system in the case when almost every port is
> in use. Kernel spends too much time trying to find free port number.
> 
> Try to compile and run this reproducer (after increasing open files
> limit).
> 
> #include <stdio.h>
> #include <stdlib.h>
> #include <errno.h>
> #include <string.h>
> #include <sys/types.h>
> #include <sys/socket.h>
> #include <netinet/in.h>
> #include <pthread.h>
> #include <assert.h>
> #include <unistd.h>
> 
> #define PORTS 65536
> #define NP 64
> #define THREADS
> 
> void* foo(void* arg)
> {
> 	int s, err, i;
> 	struct sockaddr_in sa;
> 	int optval = 1, port;
> 	socklen_t j = sizeof(sa);
> 	unsigned int p[PORTS] = { 0 };
> 
> 	for (i = 0; i < PORTS * 100; ++i) {
> 		s = socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP);
> 		assert(s > 0);
> 		memset(&sa, 0, sizeof(sa));
> 		sa.sin_addr.s_addr = htonl(INADDR_ANY);
> 		sa.sin_family = AF_INET;
> 		sa.sin_port = 0;
> 		err = bind(s, (const struct sockaddr*)&sa, sizeof(sa));

Bug here, if bind() returns -1 (all ports are in use)

> 
> 		getsockname(s, (struct sockaddr*)&sa, &j);
> 		port = ntohs(sa.sin_port);
> 		p[port] = s;
> // free some ports
> 		if (p[port + 1]) {
> 			close(p[port + 1]);
> 			p[port + 1] = 0;
> 		}
> 		if (p[port - 1]) {
> 			close(p[port - 1]);
> 			p[port - 1] = 0;
> 		}
> 	}
> 	return NULL;
> }
> 
> int main()
> {
> 	int i, err;
> #ifdef THREADS
> 	pthread_t t[NP];
> 
> 	for (i = 0; i < NP; ++i)
> 	{
> 		err = pthread_create(&t[i], NULL, foo, NULL);
> 		assert(err == 0);
> 	}
> 	for (i = 0; i < NP; ++i)
> 	{
> 		err = pthread_join(t[i], NULL);
> 		assert(err == 0);
> 	}
> #else
> 	for (i = 0; i < NP; ++i) {
> 		err = fork();
> 		if (err == 0)
> 			foo(NULL);
> 	}
> #endif
> }
> 
> I ran glxgears and had these numbers:
> 
> $ glxgears 
> 3297 frames in 5.0 seconds = 659.283 FPS
> 3680 frames in 5.0 seconds = 735.847 FPS
> 3840 frames in 5.0 seconds = 767.891 FPS
> 3574 frames in 5.0 seconds = 714.704 FPS
> -> here I ran reproducer
> 2507 frames in 5.1 seconds = 493.173 FPS
> 56 frames in 7.7 seconds =  7.316 FPS
> 14 frames in 5.1 seconds =  2.752 FPS
> 1 frames in 6.8 seconds =  0.146 FPS
> 9 frames in 7.6 seconds =  1.188 FPS
> 1 frames in 9.3 seconds =  0.108 FPS
> 12 frames in 5.5 seconds =  2.187 FPS
> 30 frames in 9.0 seconds =  3.338 FPS
> 25 frames in 5.1 seconds =  4.888 FPS
> <- here I killed reproducer
> 1034 frames in 5.0 seconds = 206.764 FPS
> 3728 frames in 5.0 seconds = 745.541 FPS
> 3668 frames in 5.0 seconds = 733.496 FPS
> 
> The last stable kernel survives it more or less smoothly.
> 
> Thanks!

Hello Vitaly, thanks for this excellent report.

Yes, the current code is really not good when all ports are in use:

We now have to scan, 28232 [1] times, long chains of 220 sockets.
That's very long (but at least the thread is preemptible).

In the past (before the patches), only one thread was allowed to run in the kernel while
scanning the udp port table (we had only one global lock, udp_hash_lock, protecting the
whole udp table). That thread was faster because it was not slowed down by other threads.
(But the rwlock we used was responsible for starvation of writers if many UDP frames
were received.)



One way to solve the problem could be the following:

1) Raise UDP_HTABLE_SIZE from 128 to 1024 to reduce average chain lengths.

2) In the bind(0) algorithm, use rcu locking to find a possible usable port. All cpus can
run in parallel, without dirtying locks. Then lock the found chain and recheck the port is
still available before using it.

[1] replace 28232 by your actual /proc/sys/net/ipv4/ip_local_port_range values:
61000 - 32768 = 28232

I will try to code a patch before this weekend.

Thanks

Note: I tried to use a mutex to force only one thread into the bind(0) code but got no real
speedup. It should still help on an SMP machine, since only one cpu will be busy in bind(0).



--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Comments

Evgeniy Polyakov Jan. 22, 2009, 10:14 p.m. UTC | #1
Hi Eric.

On Thu, Jan 22, 2009 at 11:06:59PM +0100, Eric Dumazet (dada1@cosmosbay.com) wrote:
> Hello Vitaly, thanks for this excellent report.
> 
> Yes, the current code is really not good when all ports are in use:
> 
> We now have to scan, 28232 [1] times, long chains of 220 sockets.
> That's very long (but at least the thread is preemptible).
> 
> In the past (before the patches), only one thread was allowed to run in the kernel while
> scanning the udp port table (we had only one global lock, udp_hash_lock, protecting the
> whole udp table). That thread was faster because it was not slowed down by other threads.
> (But the rwlock we used was responsible for starvation of writers if many UDP frames
> were received.)
 
I believe the problem is in the port-searching algorithm: the number of
ports we have to check grows exponentially after the random selection of
the first one. This keeps the chains small, but setup time becomes very
long. Not sure bind chains should actually be that small. In the 64k
patch, which allows more than 64k bound sockets per system, I store a
rough count of bound sockets, and when it becomes larger than the sysctl
limit I just randomly select a bundle. This works for bind(0) for
sockets with the reuse option, though.
I posted a picture of the bind(0) time for the .28 kernel, iirc.

Or is this a different issue?
Vitaly Mayatskih Jan. 22, 2009, 10:40 p.m. UTC | #2
At Thu, 22 Jan 2009 23:06:59 +0100, Eric Dumazet wrote:

> > 		err = bind(s, (const struct sockaddr*)&sa, sizeof(sa));
> 
> Bug here, if bind() returns -1 (all ports are in use)

Yeah, there was an assert() there, but the program runs into problems very soon;
I was too lazy to handle this situation correctly and just removed it ;)

> > Thanks!
> 
> Hello Vitaly, thanks for this excellent report.
> 
> Yes, the current code is really not good when all ports are in use:
> 
> We now have to scan, 28232 [1] times, long chains of 220 sockets.
> That's very long (but at least the thread is preemptible).
> 
> In the past (before the patches), only one thread was allowed to run in the kernel while
> scanning the udp port table (we had only one global lock, udp_hash_lock, protecting the
> whole udp table).

Very true: my (older) kernel with udp_hash_lock just becomes totally
unresponsive after running this test. .29-rc2 only becomes jerky, but
still works.

> That thread was faster because it was not slowed down by other threads.
> (But the rwlock we used was responsible for starvation of writers if many UDP frames
> were received.)
>
> 
> 
> One way to solve the problem could be the following:
> 
> 1) Raise UDP_HTABLE_SIZE from 128 to 1024 to reduce average chain lengths.
>
> 2) In the bind(0) algorithm, use rcu locking to find a possible usable port. All cpus can
> run in parallel, without dirtying locks. Then lock the found chain and recheck the port is
> still available before using it.

I think 2 is definitely better than 1, because 1 doesn't actually fix
anything, it only postpones the problem slightly.

> [1] replace 28232 by your actual /proc/sys/net/ipv4/ip_local_port_range values
> 61000 - 32768 = 28232
> 
> I will try to code a patch before this weekend.

Cool!

> Thanks
> 
> Note: I tried to use a mutex to force only one thread into the bind(0) code but got no real
> speedup. It should still help on an SMP machine, since only one cpu will be busy in bind(0).
> 

You saved me some time, I was thinking about trying mutexes too. Thanks :)

--
wbr, Vitaly
Eric Dumazet Jan. 23, 2009, 12:20 a.m. UTC | #3
Evgeniy Polyakov wrote:
> Hi Eric.
> 
> On Thu, Jan 22, 2009 at 11:06:59PM +0100, Eric Dumazet (dada1@cosmosbay.com) wrote:
>> Hello Vitaly, thanks for this excellent report.
>>
>> Yes, the current code is really not good when all ports are in use:
>>
>> We now have to scan, 28232 [1] times, long chains of 220 sockets.
>> That's very long (but at least the thread is preemptible).
>>
>> In the past (before the patches), only one thread was allowed to run in the kernel while
>> scanning the udp port table (we had only one global lock, udp_hash_lock, protecting the
>> whole udp table). That thread was faster because it was not slowed down by other threads.
>> (But the rwlock we used was responsible for starvation of writers if many UDP frames
>> were received.)
>  
> I believe the problem is in the port-searching algorithm: the number of
> ports we have to check grows exponentially after the random selection of
> the first one. This keeps the chains small, but setup time becomes very
> long. Not sure bind chains should actually be that small. In the 64k
> patch, which allows more than 64k bound sockets per system, I store a
> rough count of bound sockets, and when it becomes larger than the sysctl
> limit I just randomly select a bundle. This works for bind(0) for
> sockets with the reuse option, though.
> I posted a picture of the bind(0) time for the .28 kernel, iirc.
> 
> Or is this a different issue?
> 

Well, this is not exactly the same issue; the udp bind() code is slightly different
from tcp's. (Probably not many machines use a lot of udp sockets.)

Since the UDP hash table is really small (128 slots), we can try to allocate UDP ports
chain by chain, instead of port by port, to reduce the number of chain lookups.
In tcp, most machines have 64k slots for the bind table, so this won't help.


Patch

diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index cf5ab05..a572407 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -155,6 +155,8 @@  int udp_lib_get_port(struct sock *sk, unsigned short snum,
 	struct udp_hslot *hslot;
 	struct udp_table *udptable = sk->sk_prot->h.udp_table;
 	int    error = 1;
+	static DEFINE_MUTEX(bind0_mutex);
+	int mutex_acquired = 0;
 	struct net *net = sock_net(sk);
 
 	if (!snum) {
@@ -162,6 +164,8 @@  int udp_lib_get_port(struct sock *sk, unsigned short snum,
 		unsigned rand;
 		unsigned short first;
 
+		mutex_lock(&bind0_mutex);
+		mutex_acquired = 1;
 		inet_get_local_port_range(&low, &high);
 		remaining = (high - low) + 1;
 
@@ -196,6 +200,8 @@  int udp_lib_get_port(struct sock *sk, unsigned short snum,
 fail_unlock:
 	spin_unlock_bh(&hslot->lock);
 fail:
+	if (mutex_acquired)
+		mutex_unlock(&bind0_mutex);
 	return error;
 }