
[stable,4.4,01/11] net: speed up skb_rbtree_purge()

Message ID: 1548209986-83527-2-git-send-email-maowenan@huawei.com
State: Not Applicable
Delegated to: BPF Maintainers
Series: fix FragmentSmack in stable branch (CVE-2018-5391)

Commit Message

maowenan Jan. 23, 2019, 2:19 a.m. UTC
From: Eric Dumazet <edumazet@google.com>

[ Upstream commit 7c90584c66cc4b033a3b684b0e0950f79e7b7166 ]

As measured in my prior patch ("sch_netem: faster rb tree removal"),
rbtree_postorder_for_each_entry_safe() is nice looking but much slower
than using rb_next() directly, except when the tree is small enough
to fit in CPU caches (in that case the cost is the same).

Also note that there is not even an increase in text size:
$ size net/core/skbuff.o.before net/core/skbuff.o
   text	   data	    bss	    dec	    hex	filename
  40711	   1298	      0	  42009	   a419	net/core/skbuff.o.before
  40711	   1298	      0	  42009	   a419	net/core/skbuff.o
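
For context, skb_rbtree_purge() is the helper a queue owner calls to drop
every skb held on an rb_root. A minimal, hypothetical caller sketch (the
variable name below is illustrative and not part of this patch):

	struct rb_root queue = RB_ROOT;

	/* ... skbs are inserted into &queue elsewhere ... */

	skb_rbtree_purge(&queue);	/* frees every queued skb, leaving the tree empty */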


Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Mao Wenan <maowenan@huawei.com>
---
 net/core/skbuff.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

Patch

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 9703924..8a57bba 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -2388,12 +2388,15 @@  EXPORT_SYMBOL(skb_queue_purge);
  */
 void skb_rbtree_purge(struct rb_root *root)
 {
-	struct sk_buff *skb, *next;
+	struct rb_node *p = rb_first(root);
 
-	rbtree_postorder_for_each_entry_safe(skb, next, root, rbnode)
-		kfree_skb(skb);
+	while (p) {
+		struct sk_buff *skb = rb_entry(p, struct sk_buff, rbnode);
 
-	*root = RB_ROOT;
+		p = rb_next(p);
+		rb_erase(&skb->rbnode, root);
+		kfree_skb(skb);
+	}
 }
 
 /**
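
A note on the loop ordering in the new code: p = rb_next(p) is taken before
rb_erase(), because erasing a node rebalances the tree and the removed node
can no longer be used to find its successor. The same pattern applies to any
kernel rbtree; a minimal hypothetical sketch (struct foo and foo_tree_purge()
are illustrative only, not part of this patch):

	#include <linux/rbtree.h>
	#include <linux/slab.h>

	struct foo {
		struct rb_node node;	/* linkage inside the rb_root */
		int key;
	};

	static void foo_tree_purge(struct rb_root *root)
	{
		struct rb_node *p = rb_first(root);	/* leftmost node */

		while (p) {
			struct foo *f = rb_entry(p, struct foo, node);

			p = rb_next(p);			/* advance before the erase */
			rb_erase(&f->node, root);	/* unlink, rebalancing the tree */
			kfree(f);
		}
	}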