Message ID    | e8a8cc8011d16169eaf604b78583836ab2246f0e.1478213579.git.daniel@iogearbox.net
State         | Accepted, archived
Delegated to: | David Miller
From: Daniel Borkmann <daniel@iogearbox.net>
Date: Fri, 4 Nov 2016 00:01:19 +0100

> Commit a6ed3ea65d98 ("bpf: restore behavior of bpf_map_update_elem")
> added an extra per-cpu reserve to the hash table map to restore old
> behaviour from pre-prealloc times. When non-prealloc is in use for a
> map, the problem is that once a hash table extra element has been
> linked into the hash table, and the hash table is destroyed due to
> its refcount dropping to zero, then htab_map_free() -> delete_all_elements()
> will walk the whole hash table and drop all elements via htab_elem_free().
> The element from the extra reserve is thus first fed to the wrong
> backend allocator and eventually freed twice.
>
> Fixes: a6ed3ea65d98 ("bpf: restore behavior of bpf_map_update_elem")
> Reported-by: Dmitry Vyukov <dvyukov@google.com>
> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
> Acked-by: Alexei Starovoitov <ast@kernel.org>

Applied and queued up for -stable, thanks!
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 570eeca..ad1bc67 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -687,7 +687,8 @@ static void delete_all_elements(struct bpf_htab *htab)
 		hlist_for_each_entry_safe(l, n, head, hash_node) {
 			hlist_del_rcu(&l->hash_node);
-			htab_elem_free(htab, l);
+			if (l->state != HTAB_EXTRA_ELEM_USED)
+				htab_elem_free(htab, l);
 		}
 	}
 }