Message ID | 1506195552.29839.214.camel@edumazet-glaptop3.roam.corp.google.com |
---|---|
State | Accepted, archived |
Delegated to: | David Miller |
Series | [net-next] net: speed up skb_rbtree_purge() |
From: Eric Dumazet <eric.dumazet@gmail.com>
Date: Sat, 23 Sep 2017 12:39:12 -0700

> From: Eric Dumazet <edumazet@google.com>
>
> As measured in my prior patch ("sch_netem: faster rb tree removal"),
> rbtree_postorder_for_each_entry_safe() is nice looking but much slower
> than using rb_next() directly, except when the tree is small enough
> to fit in CPU caches (then the cost is the same).
>
> Also note that there is not even an increase in text size:
>
> $ size net/core/skbuff.o.before net/core/skbuff.o
>    text    data     bss     dec     hex filename
>   40711    1298       0   42009    a419 net/core/skbuff.o.before
>   40711    1298       0   42009    a419 net/core/skbuff.o
>
> From: Eric Dumazet <edumazet@google.com>

Applied.
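For reference, the two implementations compare as follows when written out as complete functions rather than a diff (a sketch assembled from the patch below; the standalone names old_skb_rbtree_purge() and new_skb_rbtree_purge() are illustrative, not from the patch):

/* Before: postorder walk.  Children are visited before their parent,
 * so each skb can be freed without rb_erase(), and the tree is reset
 * wholesale at the end.  Per the measurements cited above, this walk
 * is slower once the tree no longer fits in CPU caches.
 */
static void old_skb_rbtree_purge(struct rb_root *root)
{
	struct sk_buff *skb, *next;

	rbtree_postorder_for_each_entry_safe(skb, next, root, rbnode)
		kfree_skb(skb);

	*root = RB_ROOT;
}

/* After: in-order walk with rb_first()/rb_next().  Each node is
 * erased from the tree before being freed, so no final reset is
 * needed: once the loop ends, the root is already empty.
 */
static void new_skb_rbtree_purge(struct rb_root *root)
{
	struct rb_node *p = rb_first(root);

	while (p) {
		struct sk_buff *skb = rb_entry(p, struct sk_buff, rbnode);

		p = rb_next(p);
		rb_erase(&skb->rbnode, root);
		kfree_skb(skb);
	}
}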
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 16982de649b97b92423a4f9f5eac1e98ca803370..000ce735fa8d649e7abeeef2ebab8501dea96efd 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -2848,12 +2848,15 @@ EXPORT_SYMBOL(skb_queue_purge);
  */
 void skb_rbtree_purge(struct rb_root *root)
 {
-	struct sk_buff *skb, *next;
+	struct rb_node *p = rb_first(root);
 
-	rbtree_postorder_for_each_entry_safe(skb, next, root, rbnode)
-		kfree_skb(skb);
+	while (p) {
+		struct sk_buff *skb = rb_entry(p, struct sk_buff, rbnode);
 
-	*root = RB_ROOT;
+		p = rb_next(p);
+		rb_erase(&skb->rbnode, root);
+		kfree_skb(skb);
+	}
 }
 
 /**
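One behavioral detail worth noting: the explicit "*root = RB_ROOT;" reset disappears because rb_erase() detaches every node as the loop runs, so the tree is already empty when the function returns; callers see the same contract as before. A minimal caller sketch, using a hypothetical wrapper struct (my_ooo_queue and its fields are invented for illustration, not part of the patch):

#include <linux/rbtree.h>
#include <linux/skbuff.h>

/* Hypothetical out-of-order queue keeping skbs in an rbtree. */
struct my_ooo_queue {
	struct rb_root	tree;		/* skbs indexed by sequence number */
	u32		next_seq;	/* next expected sequence number */
};

static void my_ooo_queue_flush(struct my_ooo_queue *q)
{
	/* Frees every queued skb and leaves q->tree empty (RB_ROOT). */
	skb_rbtree_purge(&q->tree);
	q->next_seq = 0;
}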