
SO_REUSEPORT - can it be done in kernel?

Message ID 20110301123250.GA7368@gondor.apana.org.au
State RFC, archived
Delegated to: David Miller

Commit Message

Herbert Xu March 1, 2011, 12:32 p.m. UTC
On Tue, Mar 01, 2011 at 07:53:05PM +0800, Herbert Xu wrote:
> On Tue, Mar 01, 2011 at 12:45:09PM +0100, Eric Dumazet wrote:
> >
> > CPU 11 handles all TX completions: it's a potential bottleneck.
> > 
> > I might resurrect the XPS patch ;)
> 
> Actually this has been my gripe all along with our TX multiqueue
> support.  We should not decide the queue based on the socket, but
> on the current CPU.
> 
> We already do the right thing for forwarded packets because there
> is no socket to latch onto, we just need to fix it for locally
> generated traffic.
> 
> The odd packet reordering each time your scheduler decides to
> migrate the process isn't a big deal IMHO.  If your scheduler
> is constantly moving things you've got bigger problems to worry
> about.

If anybody wants to play, here is a patch to do exactly that:

net: Determine TX queue purely by current CPU

Distributing packets generated on one CPU to multiple queues
makes no sense.  Nor does putting packets from multiple CPUs
into a single queue.

While this may introduce packet reordering should the scheduler
decide to migrate a thread, it isn't a big deal because migration
is meant to be a rare event, and nothing will die as long as the
reordering doesn't occur all the time.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

Cheers,

Comments

Eric Dumazet March 1, 2011, 1:04 p.m. UTC | #1
On Tuesday, March 1, 2011 at 20:32 +0800, Herbert Xu wrote:
> On Tue, Mar 01, 2011 at 07:53:05PM +0800, Herbert Xu wrote:
> > On Tue, Mar 01, 2011 at 12:45:09PM +0100, Eric Dumazet wrote:
> > >
> > > CPU 11 handles all TX completions: it's a potential bottleneck.
> > > 
> > > I might resurrect the XPS patch ;)
> > 
> > Actually this has been my gripe all along with our TX multiqueue
> > support.  We should not decide the queue based on the socket, but
> > on the current CPU.
> > 
> > We already do the right thing for forwarded packets because there
> > is no socket to latch onto, we just need to fix it for locally
> > generated traffic.
> > 
> > The odd packet reordering each time your scheduler decides to
> > migrate the process isn't a big deal IMHO.  If your scheduler
> > is constantly moving things you've got bigger problems to worry
> > about.
> 
> If anybody wants to play, here is a patch to do exactly that:
> 
> net: Determine TX queue purely by current CPU
> 
> Distributing packets generated on one CPU to multiple queues
> makes no sense.  Nor does putting packets from multiple CPUs
> into a single queue.
> 
> While this may introduce packet reordering should the scheduler
> decide to migrate a thread, it isn't a big deal because migration
> is meant to be a rare event, and nothing will die as long as the
> reordering doesn't occur all the time.
> 
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
> 
> diff --git a/net/core/dev.c b/net/core/dev.c
> index 8ae6631..87bd20a 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -2164,22 +2164,12 @@ static u32 hashrnd __read_mostly;
>  u16 __skb_tx_hash(const struct net_device *dev, const struct sk_buff *skb,
>  		  unsigned int num_tx_queues)
>  {
> -	u32 hash;
> +	u32 hash = raw_smp_processor_id();
>  
> -	if (skb_rx_queue_recorded(skb)) {
> -		hash = skb_get_rx_queue(skb);
> -		while (unlikely(hash >= num_tx_queues))
> -			hash -= num_tx_queues;
> -		return hash;
> -	}
> +	while (unlikely(hash >= num_tx_queues))
> +		hash -= num_tx_queues;
>  
> -	if (skb->sk && skb->sk->sk_hash)
> -		hash = skb->sk->sk_hash;
> -	else
> -		hash = (__force u16) skb->protocol ^ skb->rxhash;
> -	hash = jhash_1word(hash, hashrnd);
> -
> -	return (u16) (((u64) hash * num_tx_queues) >> 32);
> +	return hash;
>  }
>  EXPORT_SYMBOL(__skb_tx_hash);
>  
> Cheers,

Well, some machines have 4096 cpus ;)
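
Eric's point is that raw_smp_processor_id() can be far larger than num_tx_queues on a big machine, so the repeated-subtraction loop in the patch degenerates into a long-running poor man's modulo on the transmit fast path. A minimal illustration of the concern (not part of the patch, numbers are only an example):

	/*
	 * Illustration only: with NR_CPUS = 4096 and an 8-queue NIC, a
	 * thread running on CPU 4095 makes the subtraction loop iterate
	 * roughly 4096 / 8 = 512 times per transmitted packet.  A plain
	 * modulo bounds the cost to a single (albeit dividing) operation;
	 * the multiply-and-shift Herbert mentions below avoids even that.
	 */
	u32 hash = raw_smp_processor_id() % num_tx_queues;
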



Herbert Xu March 1, 2011, 1:11 p.m. UTC | #2
On Tue, Mar 01, 2011 at 02:04:29PM +0100, Eric Dumazet wrote:
> Well, some machines have 4096 cpus ;)

Well, just change it to use the multiplication then :)
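
One possible reading of "use the multiplication", sketched here as a guess rather than as whatever follow-up Herbert had in mind: spread the CPU id over the full 32-bit range with jhash_1word(), as the removed code already did for sk_hash, then scale it into the queue range with the multiply-and-shift the old code used, so the subtraction loop disappears entirely.

	u16 __skb_tx_hash(const struct net_device *dev, const struct sk_buff *skb,
			  unsigned int num_tx_queues)
	{
		/* Hash the CPU id so it covers the full 32-bit range ... */
		u32 hash = jhash_1word(raw_smp_processor_id(), hashrnd);

		/* ... then scale it into [0, num_tx_queues) without a loop. */
		return (u16) (((u64) hash * num_tx_queues) >> 32);
	}
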

Patch

diff --git a/net/core/dev.c b/net/core/dev.c
index 8ae6631..87bd20a 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -2164,22 +2164,12 @@  static u32 hashrnd __read_mostly;
 u16 __skb_tx_hash(const struct net_device *dev, const struct sk_buff *skb,
 		  unsigned int num_tx_queues)
 {
-	u32 hash;
+	u32 hash = raw_smp_processor_id();
 
-	if (skb_rx_queue_recorded(skb)) {
-		hash = skb_get_rx_queue(skb);
-		while (unlikely(hash >= num_tx_queues))
-			hash -= num_tx_queues;
-		return hash;
-	}
+	while (unlikely(hash >= num_tx_queues))
+		hash -= num_tx_queues;
 
-	if (skb->sk && skb->sk->sk_hash)
-		hash = skb->sk->sk_hash;
-	else
-		hash = (__force u16) skb->protocol ^ skb->rxhash;
-	hash = jhash_1word(hash, hashrnd);
-
-	return (u16) (((u64) hash * num_tx_queues) >> 32);
+	return hash;
 }
 EXPORT_SYMBOL(__skb_tx_hash);