
ICMP packets - ll_temac with Microblaze

Message ID 1324487504.2301.36.camel@edumazet-HP-Compaq-6005-Pro-SFF-PC
State Accepted, archived
Delegated to: David Miller

Commit Message

Eric Dumazet Dec. 21, 2011, 5:11 p.m. UTC
On Wednesday, December 21, 2011 at 16:59 +0100, Eric Dumazet wrote:

> I wonder if you applied/tested my patch correctly, since it really
> should have helped in your case (allowing the first received packet to
> be queued, even if very big)

Hmm, I missed another spot in sock_queue_rcv_skb().
(RAW sockets are not PACKET :) )

Here is the patch again. I tested it with 'busybox ping' and MTU=9000,
and it solved the problem for me.

Thanks

[PATCH net-next] net: relax rcvbuf limits

skb->truesize might be big even for a small packet.

It's even bigger after commit 87fb4b7b533 (net: more accurate skb
truesize) and with a big MTU.

We should allow queueing at least one packet per receiver, even with a
low RCVBUF setting.

Reported-by: Michal Simek <monstr@monstr.eu>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
---
 include/net/sock.h     |    4 +++-
 net/core/sock.c        |    6 +-----
 net/packet/af_packet.c |    6 ++----
 3 files changed, 6 insertions(+), 10 deletions(-)



--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Comments

David Miller Dec. 21, 2011, 8:55 p.m. UTC | #1
From: Eric Dumazet <eric.dumazet@gmail.com>
Date: Wed, 21 Dec 2011 18:11:44 +0100

> [PATCH net-next] net: relax rcvbuf limits
> 
> skb->truesize might be big even for a small packet.
> 
> It's even bigger after commit 87fb4b7b533 (net: more accurate skb
> truesize) and with a big MTU.
> 
> We should allow queueing at least one packet per receiver, even with a
> low RCVBUF setting.
> 
> Reported-by: Michal Simek <monstr@monstr.eu>
> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>

Applied to net-next, although I was tempted to put it into net.

We may end up backporting this into -stable at some point, we'll
see.
Michal Simek Dec. 22, 2011, 7:49 a.m. UTC | #2
David Miller wrote:
> From: Eric Dumazet <eric.dumazet@gmail.com>
> Date: Wed, 21 Dec 2011 18:11:44 +0100
> 
>> [PATCH net-next] net: relax rcvbuf limits
>>
>> skb->truesize might be big even for a small packet.
>>
>> It's even bigger after commit 87fb4b7b533 (net: more accurate skb
>> truesize) and with a big MTU.
>>
>> We should allow queueing at least one packet per receiver, even with a
>> low RCVBUF setting.
>>
>> Reported-by: Michal Simek <monstr@monstr.eu>
>> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
> 
> Applied to net-next, although I was tempted to put it into net.
> 
> We may end up backporting this into -stable at some point, we'll
> see.

Yes, it works. Thanks Eric.

I hope that this patch will be in v3.2.

Thanks,
Michal
Eric Dumazet Dec. 22, 2011, 7:57 a.m. UTC | #3
On Thursday, December 22, 2011 at 08:49 +0100, Michal Simek wrote:
> David Miller wrote:
> > From: Eric Dumazet <eric.dumazet@gmail.com>
> > Date: Wed, 21 Dec 2011 18:11:44 +0100
> > 
> >> [PATCH net-next] net: relax rcvbuf limits
> >>
> >> skb->truesize might be big even for a small packet.
> >>
> >> It's even bigger after commit 87fb4b7b533 (net: more accurate skb
> >> truesize) and with a big MTU.
> >>
> >> We should allow queueing at least one packet per receiver, even with a
> >> low RCVBUF setting.
> >>
> >> Reported-by: Michal Simek <monstr@monstr.eu>
> >> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
> > 
> > Applied to net-next, although I was tempted to put it into net.
> > 
> > We may end up backporting this into -stable at some point, we'll
> > see.
> 
> Yes, it works. Thanks Eric.
> 
> I hope that this patch will be in v3.2.
> 

Thanks for testing!

I overlooked the fact that commit 87fb4b7b533 was already in 3.2, so yes, we
can probably push this to 3.2.

(By the way, busybox ping probably doesn't work on a 3.1 kernel with an
MTU=9000 non-copybreak driver, so it's not a clear 3.2 regression.)



Michal Simek Dec. 22, 2011, 8:05 a.m. UTC | #4
Eric Dumazet wrote:
> On Thursday, December 22, 2011 at 08:49 +0100, Michal Simek wrote:
>> David Miller wrote:
>>> From: Eric Dumazet <eric.dumazet@gmail.com>
>>> Date: Wed, 21 Dec 2011 18:11:44 +0100
>>>
>>>> [PATCH net-next] net: relax rcvbuf limits
>>>>
>>>> skb->truesize might be big even for a small packet.
>>>>
>>>> It's even bigger after commit 87fb4b7b533 (net: more accurate skb
>>>> truesize) and with a big MTU.
>>>>
>>>> We should allow queueing at least one packet per receiver, even with a
>>>> low RCVBUF setting.
>>>>
>>>> Reported-by: Michal Simek <monstr@monstr.eu>
>>>> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
>>> Applied to net-next, although I was tempted to put it into net.
>>>
>>> We may end up backporting this into -stable at some point, we'll
>>> see.
>> Yes, it works. Thanks Eric.
>>
>> I hope that this patch will be in v3.2.
>>
> 
> Thanks for testing!
> 
> I overlooked the fact that commit 87fb4b7b533 was already in 3.2, so yes, we
> can probably push this to 3.2.

Great.

> (By the way, busybox ping probably doesn't work on a 3.1 kernel with an
> MTU=9000 non-copybreak driver, so it's not a clear 3.2 regression.)

Good to know.

Thanks,
Michal

Patch

diff --git a/include/net/sock.h b/include/net/sock.h
index bf6b9fd..21bb3b5 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -662,12 +662,14 @@  static inline void __sk_add_backlog(struct sock *sk, struct sk_buff *skb)
 
 /*
  * Take into account size of receive queue and backlog queue
+ * Do not take into account this skb truesize,
+ * to allow even a single big packet to come.
  */
 static inline bool sk_rcvqueues_full(const struct sock *sk, const struct sk_buff *skb)
 {
 	unsigned int qsize = sk->sk_backlog.len + atomic_read(&sk->sk_rmem_alloc);
 
-	return qsize + skb->truesize > sk->sk_rcvbuf;
+	return qsize > sk->sk_rcvbuf;
 }
 
 /* The per-socket spinlock must be held here. */
diff --git a/net/core/sock.c b/net/core/sock.c
index a343286..347b6d9 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -339,11 +339,7 @@  int sock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
 	unsigned long flags;
 	struct sk_buff_head *list = &sk->sk_receive_queue;
 
-	/* Cast sk->rcvbuf to unsigned... It's pointless, but reduces
-	   number of warnings when compiling with -W --ANK
-	 */
-	if (atomic_read(&sk->sk_rmem_alloc) + skb->truesize >=
-	    (unsigned)sk->sk_rcvbuf) {
+	if (atomic_read(&sk->sk_rmem_alloc) >= sk->sk_rcvbuf) {
 		atomic_inc(&sk->sk_drops);
 		trace_sock_rcvqueue_full(sk, skb);
 		return -ENOMEM;
diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
index 0da505c..e56ca75 100644
--- a/net/packet/af_packet.c
+++ b/net/packet/af_packet.c
@@ -1631,8 +1631,7 @@  static int packet_rcv(struct sk_buff *skb, struct net_device *dev,
 	if (snaplen > res)
 		snaplen = res;
 
-	if (atomic_read(&sk->sk_rmem_alloc) + skb->truesize >=
-	    (unsigned)sk->sk_rcvbuf)
+	if (atomic_read(&sk->sk_rmem_alloc) >= sk->sk_rcvbuf)
 		goto drop_n_acct;
 
 	if (skb_shared(skb)) {
@@ -1763,8 +1762,7 @@  static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
 	if (po->tp_version <= TPACKET_V2) {
 		if (macoff + snaplen > po->rx_ring.frame_size) {
 			if (po->copy_thresh &&
-				atomic_read(&sk->sk_rmem_alloc) + skb->truesize
-				< (unsigned)sk->sk_rcvbuf) {
+			    atomic_read(&sk->sk_rmem_alloc) < sk->sk_rcvbuf) {
 				if (skb_shared(skb)) {
 					copy_skb = skb_clone(skb, GFP_ATOMIC);
 				} else {