[net-next,v3] net: sched: fix skb leak in dev_requeue_skb()

Message ID 1514365552-150962-1-git-send-email-weiyongjun1@huawei.com
State Accepted, archived
Delegated to: David Miller
Series [net-next,v3] net: sched: fix skb leak in dev_requeue_skb()

Commit Message

Wei Yongjun Dec. 27, 2017, 9:05 a.m. UTC
When dev_requeue_skb() is called with a bulked skb list, only the
first skb of the list is requeued to the qdisc layer; the others are
leaked without ever being freed.

TCP is broken by this skb leak: a leaked skb is never freed, so it is
still considered to be in the host queue and is never retransmitted.
This happens when dev_requeue_skb() is called from qdisc_restart():
  qdisc_restart
  |-- dequeue_skb
  |-- sch_direct_xmit()
      |-- dev_requeue_skb() <-- skb may be bulked

Fix dev_requeue_skb() to requeue the whole bulked list. Also change
__dev_requeue_skb() to use __skb_queue_tail() so that requeued skbs
are not reordered.

Fixes: a53851e2c321 ("net: sched: explicit locking in gso_cpu fallback")
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
---
v2 -> v3: move lock out of while loop
---
 net/sched/sch_generic.c | 29 +++++++++++++++++++++--------
 1 file changed, 21 insertions(+), 8 deletions(-)
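
The failure mode is easy to reproduce in a standalone toy model. The
sketch below is plain userspace C, not kernel code: struct pkt,
requeue_head_only() and requeue_all() are made-up stand-ins for
sk_buff and for the old and fixed __dev_requeue_skb() behaviour,
assuming the bulked list is chained through ->next as described above.

#include <stdio.h>
#include <stdlib.h>

/* Toy stand-in for struct sk_buff: packets chained through ->next. */
struct pkt {
	int id;
	struct pkt *next;
};

static struct pkt *make_pkt(int id, struct pkt *next)
{
	struct pkt *p = malloc(sizeof(*p));

	p->id = id;
	p->next = next;
	return p;
}

/* Old behaviour: only the head of the bulked list is put back on the
 * queue; overwriting ->next drops the only reference to the rest of
 * the chain, so those packets are neither requeued nor freed. */
static struct pkt *requeue_head_only(struct pkt *list, struct pkt *queue)
{
	list->next = queue;
	return list;
}

/* Fixed behaviour: walk the whole chain so that every packet ends up
 * back on the queue, as the while (skb) loop in the patch does. */
static struct pkt *requeue_all(struct pkt *list, struct pkt *queue)
{
	while (list) {
		struct pkt *next = list->next;

		list->next = queue;
		queue = list;
		list = next;
	}
	return queue;
}

static int count(const struct pkt *q)
{
	int n = 0;

	for (; q; q = q->next)
		n++;
	return n;
}

int main(void)
{
	/* Two identical "bulked" lists of three packets each. */
	struct pkt *a = make_pkt(1, make_pkt(2, make_pkt(3, NULL)));
	struct pkt *b = make_pkt(1, make_pkt(2, make_pkt(3, NULL)));

	printf("head-only requeue keeps %d of 3 packets\n",
	       count(requeue_head_only(a, NULL)));
	printf("full requeue keeps %d of 3 packets\n",
	       count(requeue_all(b, NULL)));
	return 0;
}

Running it prints 1 of 3 for the old path and 3 of 3 for the fixed
one; in the kernel the two missing packets are exactly the skbs that
are neither requeued nor freed.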

Comments

David Miller Jan. 2, 2018, 6:49 p.m. UTC | #1
From: Wei Yongjun <weiyongjun1@huawei.com>
Date: Wed, 27 Dec 2017 17:05:52 +0800

> When dev_requeue_skb() is called with bluked skb list, only the
                                        ^^^^^^

"bulked"

> first skb of the list will be requeued to qdisc layer, and leak
> the others without free them.
> 
> TCP is broken due to skb leak since no free skb will be considered
> as still in the host queue and never be retransmitted. This happend
> when dev_requeue_skb() called from qdisc_restart().
>   qdisc_restart
>   |-- dequeue_skb
>   |-- sch_direct_xmit()
>       |-- dev_requeue_skb() <-- skb may bluked
> 
> Fix dev_requeue_skb() to requeue the full bluked list. Also change
> to use __skb_queue_tail() in __dev_requeue_skb() to avoid skb out
> of order.
> 
> Fixes: a53851e2c321 ("net: sched: explicit locking in gso_cpu fallback")
> Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
> ---
> v2 -> v3: move lock out of while loop

Applied, thank you.
John Fastabend Jan. 2, 2018, 9:34 p.m. UTC | #2
On 01/02/2018 10:49 AM, David Miller wrote:
> From: Wei Yongjun <weiyongjun1@huawei.com>
> Date: Wed, 27 Dec 2017 17:05:52 +0800
> 
>> When dev_requeue_skb() is called with bluked skb list, only the
>                                         ^^^^^^
> 
> "bulked"
> 
>> first skb of the list will be requeued to qdisc layer, and leak
>> the others without free them.
>>
>> TCP is broken due to skb leak since no free skb will be considered
>> as still in the host queue and never be retransmitted. This happend
>> when dev_requeue_skb() called from qdisc_restart().
>>   qdisc_restart
>>   |-- dequeue_skb
>>   |-- sch_direct_xmit()
>>       |-- dev_requeue_skb() <-- skb may bluked
>>
>> Fix dev_requeue_skb() to requeue the full bluked list. Also change
>> to use __skb_queue_tail() in __dev_requeue_skb() to avoid skb out
>> of order.
>>
>> Fixes: a53851e2c321 ("net: sched: explicit locking in gso_cpu fallback")
>> Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
>> ---
>> v2 -> v3: move lock out of while loop
> 
> Applied, thank you.
> 

Bit late on my review but LGTM thanks!

Patch

diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 10aaa3b6..1c149ed 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -111,10 +111,16 @@  static inline void qdisc_enqueue_skb_bad_txq(struct Qdisc *q,
 
 static inline int __dev_requeue_skb(struct sk_buff *skb, struct Qdisc *q)
 {
-	__skb_queue_head(&q->gso_skb, skb);
-	q->qstats.requeues++;
-	qdisc_qstats_backlog_inc(q, skb);
-	q->q.qlen++;	/* it's still part of the queue */
+	while (skb) {
+		struct sk_buff *next = skb->next;
+
+		__skb_queue_tail(&q->gso_skb, skb);
+		q->qstats.requeues++;
+		qdisc_qstats_backlog_inc(q, skb);
+		q->q.qlen++;	/* it's still part of the queue */
+
+		skb = next;
+	}
 	__netif_schedule(q);
 
 	return 0;
@@ -125,12 +131,19 @@  static inline int dev_requeue_skb_locked(struct sk_buff *skb, struct Qdisc *q)
 	spinlock_t *lock = qdisc_lock(q);
 
 	spin_lock(lock);
-	__skb_queue_tail(&q->gso_skb, skb);
+	while (skb) {
+		struct sk_buff *next = skb->next;
+
+		__skb_queue_tail(&q->gso_skb, skb);
+
+		qdisc_qstats_cpu_requeues_inc(q);
+		qdisc_qstats_cpu_backlog_inc(q, skb);
+		qdisc_qstats_cpu_qlen_inc(q);
+
+		skb = next;
+	}
 	spin_unlock(lock);
 
-	qdisc_qstats_cpu_requeues_inc(q);
-	qdisc_qstats_cpu_backlog_inc(q, skb);
-	qdisc_qstats_cpu_qlen_inc(q);
 	__netif_schedule(q);
 
 	return 0;
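
The second half of the fix, switching __dev_requeue_skb() from
__skb_queue_head() to __skb_queue_tail(), is about ordering: the
gso_skb queue is drained from its head, so head-inserting the packets
as the new loop walks the chain would hand them back out in reverse.
A minimal sketch of the difference, again plain userspace C with
made-up names (queue_head() and queue_tail() stand in for the two skb
queue helpers, and the array is always consumed front-first):

#include <stdio.h>

#define NPKT 3

/* Insert at the front of the queue, like __skb_queue_head(). */
static void queue_head(int *q, int *len, int pkt)
{
	for (int i = *len; i > 0; i--)
		q[i] = q[i - 1];	/* shift existing packets back */
	q[0] = pkt;
	(*len)++;
}

/* Append at the back of the queue, like __skb_queue_tail(). */
static void queue_tail(int *q, int *len, int pkt)
{
	q[(*len)++] = pkt;
}

static void show(const char *what, const int *q, int len)
{
	printf("%s:", what);
	for (int i = 0; i < len; i++)
		printf(" %d", q[i]);
	printf("\n");
}

int main(void)
{
	int head_q[NPKT], tail_q[NPKT];
	int head_len = 0, tail_len = 0;

	/* Requeue packets 1, 2, 3 one by one, as the fixed loop does. */
	for (int pkt = 1; pkt <= NPKT; pkt++) {
		queue_head(head_q, &head_len, pkt);
		queue_tail(tail_q, &tail_len, pkt);
	}

	show("head insertion, drained front-first", head_q, head_len);
	show("tail insertion, drained front-first", tail_q, tail_len);
	return 0;
}

The first line comes out as 3 2 1 and the second as 1 2 3; only tail
insertion keeps the packets in the order they were originally dequeued.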