From patchwork Mon Dec 25 03:49:06 2017
X-Patchwork-Submitter: Wei Yongjun
X-Patchwork-Id: 852768
X-Patchwork-Delegate: davem@davemloft.net
From: Wei Yongjun <weiyongjun1@huawei.com>
To: John Fastabend, Jamal Hadi Salim, Cong Wang, Jiri Pirko
CC: Wei Yongjun, netdev@vger.kernel.org
Subject: [PATCH net-next v2] net: sched: fix skb leak in dev_requeue_skb()
Date: Mon, 25 Dec 2017 11:49:06 +0800
Message-ID: <1514173746-165282-1-git-send-email-weiyongjun1@huawei.com>
X-Mailing-List: netdev@vger.kernel.org

When dev_requeue_skb() is called with a bulked skb list, only the first
skb of the list is requeued to the qdisc layer; the rest of the list is
leaked without being freed. This breaks TCP: a leaked skb is never freed,
so TCP considers it still queued in the host and never retransmits it.

This happens when dev_requeue_skb() is called from qdisc_restart():

qdisc_restart
  |-- dequeue_skb
  |-- sch_direct_xmit()
        |-- dev_requeue_skb()   <-- skb may be a bulked list

Fix dev_requeue_skb() to requeue the full bulked list.
Fixes: a53851e2c321 ("net: sched: explicit locking in gso_cpu fallback")
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
---
v1 -> v2: add net-next prefix
---
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 981c08f..0df2dbf 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -111,10 +111,16 @@ static inline void qdisc_enqueue_skb_bad_txq(struct Qdisc *q,
 
 static inline int __dev_requeue_skb(struct sk_buff *skb, struct Qdisc *q)
 {
-	__skb_queue_head(&q->gso_skb, skb);
-	q->qstats.requeues++;
-	qdisc_qstats_backlog_inc(q, skb);
-	q->q.qlen++;	/* it's still part of the queue */
+	while (skb) {
+		struct sk_buff *next = skb->next;
+
+		__skb_queue_tail(&q->gso_skb, skb);
+		q->qstats.requeues++;
+		qdisc_qstats_backlog_inc(q, skb);
+		q->q.qlen++;	/* it's still part of the queue */
+
+		skb = next;
+	}
 	__netif_schedule(q);
 
 	return 0;
@@ -124,13 +130,20 @@ static inline int dev_requeue_skb_locked(struct sk_buff *skb, struct Qdisc *q)
 {
 	spinlock_t *lock = qdisc_lock(q);
 
-	spin_lock(lock);
-	__skb_queue_tail(&q->gso_skb, skb);
-	spin_unlock(lock);
+	while (skb) {
+		struct sk_buff *next = skb->next;
+
+		spin_lock(lock);
+		__skb_queue_tail(&q->gso_skb, skb);
+		spin_unlock(lock);
+
+		qdisc_qstats_cpu_requeues_inc(q);
+		qdisc_qstats_cpu_backlog_inc(q, skb);
+		qdisc_qstats_cpu_qlen_inc(q);
+
+		skb = next;
+	}
 
-	qdisc_qstats_cpu_requeues_inc(q);
-	qdisc_qstats_cpu_backlog_inc(q, skb);
-	qdisc_qstats_cpu_qlen_inc(q);
 	__netif_schedule(q);
 
 	return 0;
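
For illustration, here is a minimal, stand-alone user-space sketch of the pattern
the patch applies: requeueing only the head of a ->next-linked chain loses the
rest of the chain, while walking the chain keeps every buffer. All names here
(struct buf, struct queue, requeue_head_only, requeue_chain) are illustrative
only and are not kernel APIs; this is not the kernel code itself.

#include <stdio.h>

struct buf {
	int id;
	struct buf *next;	/* chain link, like skb->next in a bulked list */
};

struct queue {
	struct buf *head;
	struct buf *tail;
	int len;
};

/* Append a single buffer to the tail of the queue. */
static void enqueue_tail(struct queue *q, struct buf *b)
{
	b->next = NULL;
	if (q->tail)
		q->tail->next = b;
	else
		q->head = b;
	q->tail = b;
	q->len++;
}

/* Buggy variant: queues only the first buffer; the rest of the chain is lost. */
static void requeue_head_only(struct queue *q, struct buf *b)
{
	enqueue_tail(q, b);	/* b->next (the remainder of the chain) is dropped */
}

/* Fixed variant: walks the chain and queues every buffer, as the patch does. */
static void requeue_chain(struct queue *q, struct buf *b)
{
	while (b) {
		struct buf *next = b->next;	/* save before enqueue clears ->next */

		enqueue_tail(q, b);
		b = next;
	}
}

int main(void)
{
	struct buf a[3] = { { 1, &a[1] }, { 2, &a[2] }, { 3, NULL } };
	struct buf b[3] = { { 1, &b[1] }, { 2, &b[2] }, { 3, NULL } };
	struct queue broken = { NULL, NULL, 0 };
	struct queue fixed  = { NULL, NULL, 0 };

	requeue_head_only(&broken, &a[0]);	/* old behaviour: a[1] and a[2] are lost */
	requeue_chain(&fixed, &b[0]);		/* fixed behaviour: whole chain is kept */

	printf("head-only requeue kept %d of 3, chain requeue kept %d of 3\n",
	       broken.len, fixed.len);
	return 0;
}

Running this prints "head-only requeue kept 1 of 3, chain requeue kept 3 of 3",
which is the user-space analogue of the leak fixed above: in the kernel case the
lost buffers are additionally never freed, so TCP never retransmits them.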