From patchwork Thu Jan 22 22:06:59 2009
X-Patchwork-Submitter: Eric Dumazet
X-Patchwork-Id: 19905
X-Patchwork-Delegate: davem@davemloft.net
Message-ID: <4978EE03.9040207@cosmosbay.com>
Date: Thu, 22 Jan 2009 23:06:59 +0100
From: Eric Dumazet
To: Vitaly Mayatskikh
CC: David Miller, netdev@vger.kernel.org
Subject: Re: speed regression in udp_lib_lport_inuse()
X-Mailing-List: netdev@vger.kernel.org

Vitaly Mayatskikh wrote:
> Hello!
>
> I found your latest patches w.r.t. udp port randomization really solve
> the "finding shortest chain kills randomness" problem, but they
> significantly slow down the system when almost every port is in use:
> the kernel spends too much time trying to find a free port number.
>
> Try to compile and run this reproducer (after increasing the open
> files limit).
>
> #include <stdio.h>
> #include <stdlib.h>
> #include <string.h>
> #include <assert.h>
> #include <unistd.h>
> #include <pthread.h>
> #include <sys/socket.h>
> #include <netinet/in.h>
> #include <arpa/inet.h>
>
> #define PORTS 65536
> #define NP 64
> #define THREADS
>
> void* foo(void* arg)
> {
> 	int s, err, i, j;
> 	struct sockaddr_in sa;
> 	int optval = 1, port;
> 	unsigned int p[PORTS] = { 0 };
>
> 	for (i = 0; i < PORTS * 100; ++i) {
> 		s = socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP);
> 		assert(s > 0);
> 		memset(&sa, 0, sizeof(sa));
> 		sa.sin_addr.s_addr = htonl(INADDR_ANY);
> 		sa.sin_family = AF_INET;
> 		sa.sin_port = 0;
> 		err = bind(s, (const struct sockaddr*)&sa, sizeof(sa));

Bug here, if bind() returns -1 (all ports are in use).

>
> 		getsockname(s, (struct sockaddr*)&sa, &j);
> 		port = ntohs(sa.sin_port);
> 		p[port] = s;
> 		// free some ports
> 		if (p[port + 1]) {
> 			close(p[port + 1]);
> 			p[port + 1] = 0;
> 		}
> 		if (p[port - 1]) {
> 			close(p[port - 1]);
> 			p[port - 1] = 0;
> 		}
> 	}
> }
>
> int main()
> {
> 	int i, err;
> #ifdef THREADS
> 	pthread_t t[NP];
>
> 	for (i = 0; i < NP; ++i)
> 	{
> 		err = pthread_create(&t[i], NULL, foo, NULL);
> 		assert(err == 0);
> 	}
> 	for (i = 0; i < NP; ++i)
> 	{
> 		err = pthread_join(t[i], NULL);
> 		assert(err == 0);
> 	}
> #else
> 	for (i = 0; i < NP; ++i) {
> 		err = fork();
> 		if (err == 0)
> 			foo(NULL);
> 	}
> #endif
> }
>
> I ran glxgears and got these numbers:
>
> $ glxgears
> 3297 frames in 5.0 seconds = 659.283 FPS
> 3680 frames in 5.0 seconds = 735.847 FPS
> 3840 frames in 5.0 seconds = 767.891 FPS
> 3574 frames in 5.0 seconds = 714.704 FPS
> -> here I ran reproducer
> 2507 frames in 5.1 seconds = 493.173 FPS
> 56 frames in 7.7 seconds = 7.316 FPS
> 14 frames in 5.1 seconds = 2.752 FPS
> 1 frames in 6.8 seconds = 0.146 FPS
> 9 frames in 7.6 seconds = 1.188 FPS
> 1 frames in 9.3 seconds = 0.108 FPS
> 12 frames in 5.5 seconds = 2.187 FPS
> 30 frames in 9.0 seconds = 3.338 FPS
> 25 frames in 5.1 seconds = 4.888 FPS
> <- here I killed reproducer
> 1034 frames in 5.0 seconds = 206.764 FPS
> 3728 frames in 5.0 seconds = 745.541 FPS
> 3668 frames in 5.0 seconds = 733.496 FPS
>
> The last stable kernel survives it more or less smoothly.
>
> Thanks!

Hello Vitaly, thanks for this excellent report.

Yes, the current code is really not good when all ports are in use: we now
have to scan long chains of about 220 sockets up to 28232 [1] times. That is
very slow (but at least the thread is preemptible).

In the past (before these patches), only one thread was allowed to run in the
kernel while scanning the udp port table (we had a single global lock,
udp_hash_lock, protecting the whole table). That thread was faster because it
was not slowed down by other threads. (But the rwlock we used was responsible
for writer starvation when many UDP frames were received.)

One way to solve the problem could be the following:

1) Raise UDP_HTABLE_SIZE from 128 to 1024 to reduce the average chain length.

2) In the bind(0) algorithm, use RCU locking to find a possibly usable port,
   so all cpus can run in parallel without dirtying locks. Then lock the found
   chain and recheck that the port is still available before using it.
   (A rough sketch of this pattern follows below.)

[1] replace 28232 with your actual /proc/sys/net/ipv4/ip_local_port_range
    values: 61000 - 32768 = 28232
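To make (2) a bit more concrete, here is a minimal sketch of the
lockless-scan-then-locked-recheck idea. It is illustrative only, not the
actual patch: udp_port_in_use() is a hypothetical helper standing in for the
real hash-chain walk, the slot lookup ignores the per-netns hash mixing, and
the port randomization logic is omitted.

/*
 * Sketch only: scan candidate ports under rcu_read_lock() so that
 * concurrent bind(0) callers do not dirty the per-chain spinlocks,
 * then take the chain lock and recheck before committing.
 * udp_port_in_use() is a hypothetical helper standing in for the
 * real chain walk.
 */
static int udp_find_free_port_sketch(struct udp_table *udptable,
				     struct net *net,
				     unsigned short first,
				     unsigned short last)
{
	unsigned short port;

	for (port = first; port <= last; port++) {
		/* simplified slot lookup, ignoring per-netns hash mixing */
		struct udp_hslot *hslot =
			&udptable->hash[port & (UDP_HTABLE_SIZE - 1)];
		int busy;

		/* Lockless peek: cheaply skip ports that look busy. */
		rcu_read_lock();
		busy = udp_port_in_use(net, hslot, port);
		rcu_read_unlock();
		if (busy)
			continue;

		/*
		 * Candidate found: lock the chain and recheck, since
		 * another cpu may have grabbed the port meanwhile.
		 */
		spin_lock_bh(&hslot->lock);
		if (!udp_port_in_use(net, hslot, port)) {
			/* insert the socket in the chain here, under the lock */
			spin_unlock_bh(&hslot->lock);
			return port;
		}
		spin_unlock_bh(&hslot->lock);
	}
	return 0;	/* no free port found */
}

The point is that the expensive scanning happens without taking any chain
lock; a lock is only taken once, for the final recheck.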
I will try to code a patch before this weekend.

Thanks

Note: I tried using a mutex to force only one thread at a time into the
bind(0) code, but got no real speedup. It should still help on an SMP machine,
though, since only one cpu will be busy in bind(0):

diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index cf5ab05..a572407 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -155,6 +155,8 @@ int udp_lib_get_port(struct sock *sk, unsigned short snum,
 	struct udp_hslot *hslot;
 	struct udp_table *udptable = sk->sk_prot->h.udp_table;
 	int error = 1;
+	static DEFINE_MUTEX(bind0_mutex);
+	int mutex_acquired = 0;
 	struct net *net = sock_net(sk);
 
 	if (!snum) {
@@ -162,6 +164,8 @@ int udp_lib_get_port(struct sock *sk, unsigned short snum,
 		unsigned rand;
 		unsigned short first;
 
+		mutex_lock(&bind0_mutex);
+		mutex_acquired = 1;
 		inet_get_local_port_range(&low, &high);
 		remaining = (high - low) + 1;
 
@@ -196,6 +200,8 @@ int udp_lib_get_port(struct sock *sk, unsigned short snum,
 fail_unlock:
 	spin_unlock_bh(&hslot->lock);
 fail:
+	if (mutex_acquired)
+		mutex_unlock(&bind0_mutex);
 	return error;
 }
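Going back to (1), a quick back-of-the-envelope check, not part of the
original mail, of how the table size changes the average chain length,
assuming roughly the whole default local port range ends up bound, as the
reproducer provokes:

#include <stdio.h>

/*
 * Average hash-chain length with ~28232 bound ports (the default
 * ip_local_port_range, 32768-61000), for the current and the
 * proposed UDP_HTABLE_SIZE.
 */
int main(void)
{
	const int bound_ports = 61000 - 32768;		/* 28232 */
	const int table_sizes[] = { 128, 1024 };	/* current vs proposed */
	int i;

	for (i = 0; i < 2; i++)
		printf("UDP_HTABLE_SIZE = %4d -> ~%d sockets per chain\n",
		       table_sizes[i], bound_ports / table_sizes[i]);
	return 0;
}

That is roughly 220 sockets per chain today (the figure mentioned above)
versus roughly 27 with a 1024-entry table.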