
[net-next-2.6,take2] ipv6: udp: Optimise multicast reception

Message ID 4AF45796.8010409@gmail.com
State Superseded, archived
Delegated to: David Miller
Headers show

Commit Message

Eric Dumazet Nov. 6, 2009, 5:06 p.m. UTC
Take 2 of the patch, adding a missing kfree_skb() in the case where no socket could receive a copy

[PATCH net-next-2.6 take2] ipv6: udp: Optimise multicast reception

IPV6 UDP multicast rx path is a bit complex and can hold a spinlock
for a long time.

Using a small (32- or 64-entry) stack of socket pointers can help
to perform expensive operations (skb_clone(), udp_queue_rcv_skb())
outside of the lock, in most cases.



Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
---
 net/ipv6/udp.c |   71 +++++++++++++++++++++++++++++++----------------
 1 files changed, 47 insertions(+), 24 deletions(-)

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Comments

Lucian Adrian Grijincu Nov. 6, 2009, 5:19 p.m. UTC | #1
On Fri, 06 Nov 2009 19:06:30, Eric Dumazet wrote:
> +       for (i = 0; i < count; i++) {
> +               skb1 = (i == final) ? skb : skb_clone(skb, GFP_ATOMIC);
> +
> +               if (skb1) {
> +                       sk = stack[i];
> +                       bh_lock_sock(sk);
> +                       if (!sock_owned_by_user(sk))
> +                               udpv6_queue_rcv_skb(sk, skb1);
> +                       else
> +                               sk_add_backlog(sk, skb1);
> +                       bh_unlock_sock(sk);
> +               }
> +       }

Is there any reason to continue this loop if we ever get a NULL skb1?

Once we can't get a GFP_ATOMIC clone, are there merits to calling 
skb_clone() like crazy (in a very tight loop) (whilst holding a spin lock in 
some cases)?
Eric Dumazet Nov. 6, 2009, 5:24 p.m. UTC | #2
Lucian Adrian Grijincu wrote:
> On Fri, 06 Nov 2009 19:06:30, Eric Dumazet wrote:
>> +       for (i = 0; i < count; i++) {
>> +               skb1 = (i == final) ? skb : skb_clone(skb, GFP_ATOMIC);
>> +
>> +               if (skb1) {
>> +                       sk = stack[i];
>> +                       bh_lock_sock(sk);
>> +                       if (!sock_owned_by_user(sk))
>> +                               udpv6_queue_rcv_skb(sk, skb1);
>> +                       else
>> +                               sk_add_backlog(sk, skb1);
>> +                       bh_unlock_sock(sk);
>> +               }
>> +       }
> 
> Is there any reason to continue this loop if we ever get a NULL skb1?
> 
> Once we can't get a GFP_ATOMIC clone, are there merits to calling 
> skb_clone() like crazy (in a very tight loop) (whilst holding a spin lock in 
> some cases)?
> 

Well, because of future RCU patch, I'll need to continue the loop anyway,
because sock_put() will be part of the loop.

Also, the 'final' case will at least provide the skb we already have.


Eric Dumazet Nov. 6, 2009, 5:54 p.m. UTC | #3
Eric Dumazet wrote:
>
> 
> Well, because of future RCU patch, I'll need to continue the loop anyway,
> because sock_put() will be part of the loop.
> 
> Also, the 'final' case will at least provide the skb we already have.
> 

Last point:

We should atomic_inc(&sk->sk_drops) in the case skb_clone()
could not provide a copy, and possibly increment the UDP_MIB_RCVBUFERRORS & 
UDP_MIB_INERRORS SNMP counters


Lucian Adrian Grijincu Nov. 6, 2009, 5:59 p.m. UTC | #4
On Fri, 06 Nov 2009 19:54:55, Eric Dumazet wrote:
> We should atomic_inc(&sk->sk_drops) in the case skb_clone()
> could not provide a copy, and possibly increment the UDP_MIB_RCVBUFERRORS &
> UDP_MIB_INERRORS SNMP counters

Shouldn't this also be done if udp_queue_rcv_skb() returned a failure code (a 
positive number) instead of silently dropping?
Eric Dumazet Nov. 6, 2009, 6:03 p.m. UTC | #5
Lucian Adrian Grijincu wrote:
> On Fri, 06 Nov 2009 19:54:55, Eric Dumazet wrote:
>> We should atomic_inc(&sk->sk_drops) in the case skb_clone()
>> could not provide a copy, and possibly increment the UDP_MIB_RCVBUFERRORS &
>> UDP_MIB_INERRORS SNMP counters
> 
> Shouldn't this also be done if udp_queue_rcv_skb() returned a failure code (a 
> positive number) instead of silently dropping?
> 

Nope, this is correctly done in __udp_queue_rcv_skb() & sock_queue_rcv_skb().

Only multicast is lazy in this area.

Patch

diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
index 5bc7cdb..ecb6a6f 100644
--- a/net/ipv6/udp.c
+++ b/net/ipv6/udp.c
@@ -440,6 +440,27 @@  static struct sock *udp_v6_mcast_next(struct net *net, struct sock *sk,
 	return NULL;
 }
 
+static void flush_stack(struct sock **stack, unsigned int count,
+			struct sk_buff *skb, unsigned int final)
+{
+	unsigned int i;
+	struct sock *sk;
+	struct sk_buff *skb1;
+
+	for (i = 0; i < count; i++) {
+		skb1 = (i == final) ? skb : skb_clone(skb, GFP_ATOMIC);
+
+		if (skb1) {
+			sk = stack[i];
+			bh_lock_sock(sk);
+			if (!sock_owned_by_user(sk))
+				udpv6_queue_rcv_skb(sk, skb1);
+			else
+				sk_add_backlog(sk, skb1);
+			bh_unlock_sock(sk);
+		}
+	}
+}
 /*
  * Note: called only from the BH handler context,
  * so we don't need to lock the hashes.
@@ -448,41 +469,43 @@  static int __udp6_lib_mcast_deliver(struct net *net, struct sk_buff *skb,
 		struct in6_addr *saddr, struct in6_addr *daddr,
 		struct udp_table *udptable)
 {
-	struct sock *sk, *sk2;
+	struct sock *sk, *stack[256 / sizeof(struct sock *)];
 	const struct udphdr *uh = udp_hdr(skb);
 	struct udp_hslot *hslot = udp_hashslot(udptable, net, ntohs(uh->dest));
 	int dif;
+	unsigned int i, count = 0;
 
 	spin_lock(&hslot->lock);
 	sk = sk_nulls_head(&hslot->head);
 	dif = inet6_iif(skb);
 	sk = udp_v6_mcast_next(net, sk, uh->dest, daddr, uh->source, saddr, dif);
-	if (!sk) {
-		kfree_skb(skb);
-		goto out;
-	}
-
-	sk2 = sk;
-	while ((sk2 = udp_v6_mcast_next(net, sk_nulls_next(sk2), uh->dest, daddr,
-					uh->source, saddr, dif))) {
-		struct sk_buff *buff = skb_clone(skb, GFP_ATOMIC);
-		if (buff) {
-			bh_lock_sock(sk2);
-			if (!sock_owned_by_user(sk2))
-				udpv6_queue_rcv_skb(sk2, buff);
-			else
-				sk_add_backlog(sk2, buff);
-			bh_unlock_sock(sk2);
+	while (sk) {
+		stack[count++] = sk;
+		sk = udp_v6_mcast_next(net, sk_nulls_next(sk), uh->dest, daddr,
+				       uh->source, saddr, dif);
+		if (unlikely(count == ARRAY_SIZE(stack))) {
+			if (!sk)
+				break;
+			flush_stack(stack, count, skb, ~0);
+			count = 0;
 		}
 	}
-	bh_lock_sock(sk);
-	if (!sock_owned_by_user(sk))
-		udpv6_queue_rcv_skb(sk, skb);
-	else
-		sk_add_backlog(sk, skb);
-	bh_unlock_sock(sk);
-out:
+	/*
+	 * before releasing the lock, we must take reference on sockets
+	 */
+	for (i = 0; i < count; i++)
+		sock_hold(stack[i]);
+
 	spin_unlock(&hslot->lock);
+
+	if (count) {
+		flush_stack(stack, count, skb, count - 1);
+
+		for (i = 0; i < count; i++)
+			sock_put(stack[i]);
+	} else {
+		kfree_skb(skb);
+	}
 	return 0;
 }