Message ID | 20081106104103.GA29787@ubuntu
---|---
State | Rejected, archived |
Delegated to: | David Miller |
From: Jianjun Kong <jianjun@zeuux.org>
Date: Thu, 6 Nov 2008 18:41:03 +0800

> net/core/skbuff.c: void skb_queue_purge(struct sk_buff_head *list)
>
> This function should take the list lock, because operations on
> this list should be atomic. And __skb_queue_purge() (in
> include/linux/skbuff.h) really deletes the buffers in the list.
>
> Signed-off-by: Jianjun Kong <jianjun@zeuux.org>

No, this function is fine. skb_dequeue() takes the lock, so
there cannot be any list corruption.

And this function is called in contexts where the caller knows
that no new packets can be added to the list (closing a socket,
shutting down a device, etc.). And even if new packets could
appear, taking the lock over the entire function would not
help that problem.

In fact, I suspect that many if not all skb_queue_purge() callers
can be converted to use __skb_queue_purge().
On Mon, Nov 10, 2008 at 01:35:15PM -0800, David Miller wrote:
> From: Jianjun Kong <jianjun@zeuux.org>
> Date: Thu, 6 Nov 2008 18:41:03 +0800
>
> > net/core/skbuff.c: void skb_queue_purge(struct sk_buff_head *list)
> >
> > This function should take the list lock, because operations on
> > this list should be atomic. And __skb_queue_purge() (in
> > include/linux/skbuff.h) really deletes the buffers in the list.
> >
> > Signed-off-by: Jianjun Kong <jianjun@zeuux.org>
>
> No, this function is fine. skb_dequeue() takes the lock, so
> there cannot be any list corruption.
>
> And this function is called in contexts where the caller knows
> that no new packets can be added to the list (closing a socket,
> shutting down a device, etc.). And even if new packets could
> appear, taking the lock over the entire function would not
> help that problem.
>
> In fact, I suspect that many if not all skb_queue_purge() callers
> can be converted to use __skb_queue_purge().

Thanks, got it. :-)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index ebb6b94..3b89fb1 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -1834,9 +1834,11 @@ struct sk_buff *skb_dequeue_tail(struct sk_buff_head *list)
  */
 void skb_queue_purge(struct sk_buff_head *list)
 {
-	struct sk_buff *skb;
-	while ((skb = skb_dequeue(list)) != NULL)
-		kfree_skb(skb);
+	unsigned long flags;
+
+	spin_lock_irqsave(&list->lock, flags);
+	__skb_queue_purge(list);
+	spin_unlock_irqrestore(&list->lock, flags);
 }

 /**
net/core/skbuff.c: void skb_queue_purge(struct sk_buff_head *list)

This function should take the list lock, because operations on this
list should be atomic. And __skb_queue_purge() (in
include/linux/skbuff.h) really deletes the buffers in the list.

Signed-off-by: Jianjun Kong <jianjun@zeuux.org>
---
 net/core/skbuff.c | 8 +++++---
 1 files changed, 5 insertions(+), 3 deletions(-)