| Message ID | 20181228002450.18611-5-fw@strlen.de |
|---|---|
| State | Accepted |
| Delegated to | Pablo Neira |
| Series | netfilter: nf_conncount: rework locking and memory management |
On Fri, Dec 28, 2018 at 01:24:45AM +0100, Florian Westphal wrote:
> Also, unconditionally schedule the gc worker.
> The condition
>
>  gc_count > ARRAY_SIZE(gc_nodes))
>
> cannot be true unless tree grows very large, as the height of the tree
> will be low even with hundreds of nodes present.

But in patch 3/8 you make the worker only free nodes if gc_count >
ARRAY_SIZE(gc_nodes)).  So the worker is going to burn CPU time
scanning the trees but most of the time won't find enough to GC.

Also, does the height of the tree matter?  I don't see how it affects
how many empty nodes there are in the tree.

--
Shawn
On Fri, Dec 28, 2018 at 08:59:23AM -0600, Shawn Bohrer wrote:
> On Fri, Dec 28, 2018 at 01:24:45AM +0100, Florian Westphal wrote:
> > Also, unconditionally schedule the gc worker.
> > The condition
> >
> >  gc_count > ARRAY_SIZE(gc_nodes))
> >
> > cannot be true unless tree grows very large, as the height of the tree
> > will be low even with hundreds of nodes present.
>
> But in patch 3/8 you make the worker only free nodes if gc_count >
> ARRAY_SIZE(gc_nodes)).  So the worker is going to burn CPU time
> scanning the trees but most of the time won't find enough to GC.
>
> Also does the height of the tree matter?  I don't see how it affects
> how many empty nodes there are in the tree.

Ugh, nevermind, I get it.  insert_tree() only walks the path to insert
the node and thus will only find empty nodes limited by the height of
the tree.  The worker will scan the entire tree.

--
Shawn
diff --git a/net/netfilter/nf_conncount.c b/net/netfilter/nf_conncount.c
index 753132e4afa8..0a83c694a8f1 100644
--- a/net/netfilter/nf_conncount.c
+++ b/net/netfilter/nf_conncount.c
@@ -346,9 +346,10 @@ insert_tree(struct net *net,
 	struct nf_conncount_tuple *conn;
 	unsigned int count = 0, gc_count = 0;
 	bool node_found = false;
+	bool do_gc = true;
 
 	spin_lock_bh(&nf_conncount_locks[hash]);
-
+restart:
 	parent = NULL;
 	rbnode = &(root->rb_node);
 	while (*rbnode) {
@@ -381,21 +382,16 @@ insert_tree(struct net *net,
 		if (gc_count >= ARRAY_SIZE(gc_nodes))
 			continue;
 
-		if (nf_conncount_gc_list(net, &rbconn->list))
+		if (do_gc && nf_conncount_gc_list(net, &rbconn->list))
 			gc_nodes[gc_count++] = rbconn;
 	}
 
 	if (gc_count) {
 		tree_nodes_free(root, gc_nodes, gc_count);
-		/* tree_node_free before new allocation permits
-		 * allocator to re-use newly free'd object.
-		 *
-		 * This is a rare event; in most cases we will find
-		 * existing node to re-use. (or gc_count is 0).
-		 */
-
-		if (gc_count >= ARRAY_SIZE(gc_nodes))
-			schedule_gc_worker(data, hash);
+		schedule_gc_worker(data, hash);
+		gc_count = 0;
+		do_gc = false;
+		goto restart;
 	}
 
 	if (node_found)
Shawn Bohrer reported the following crash:

 |RIP: 0010:rb_erase+0xae/0x360
 [..]
 Call Trace:
  nf_conncount_destroy+0x59/0xc0 [nf_conncount]
  cleanup_match+0x45/0x70 [ip_tables]
  ...

Shawn tracked this down to a bogus 'parent' pointer: when we insert a
new node, there is a chance that the 'parent' we found was also passed
to tree_nodes_free() (because that node was empty) for erase+free.

Instead of trying to be clever and detect when this happens, restart
the search if we have evicted one or more nodes.  To prevent frequent
restarts, do not perform gc on the second round.

Also, unconditionally schedule the gc worker.
The condition

 gc_count > ARRAY_SIZE(gc_nodes))

cannot be true unless tree grows very large, as the height of the tree
will be low even with hundreds of nodes present.

Fixes: 5c789e131cbb9 ("netfilter: nf_conncount: Add list lock and gc worker, and RCU for init tree search")
Reported-by: Shawn Bohrer <sbohrer@cloudflare.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
---
 net/netfilter/nf_conncount.c | 18 +++++++-----------
 1 file changed, 7 insertions(+), 11 deletions(-)