Message ID | 20200208154209.1797988-15-jolsa@kernel.org
---|---
State | Superseded
Delegated to | BPF Maintainers
Series | bpf: Add trampoline and dispatcher to /proc/kallsyms
On Sat, Feb 8, 2020 at 7:43 AM Jiri Olsa <jolsa@kernel.org> wrote:
>
> Currently we don't sort bpf_kallsyms and display symbols
> in proc/kallsyms as they come in via __bpf_ksym_add.
>
> Use the latch tree to get the next bpf_ksym object
> and insert the new symbol ahead of it.
>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---

Acked-by: Andrii Nakryiko <andriin@fb.com>

>  kernel/bpf/core.c | 23 ++++++++++++++++++++++-
>  1 file changed, 22 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index 50af5dcf7ff9..c63ff34b2128 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -651,9 +651,30 @@ static struct latch_tree_root bpf_progs_tree __cacheline_aligned;
>
>  static void __bpf_ksym_add(struct bpf_ksym *ksym)
>  {
> +	struct list_head *head = &bpf_kallsyms;
> +
>  	WARN_ON_ONCE(!list_empty(&ksym->lnode));
> -	list_add_tail_rcu(&ksym->lnode, &bpf_kallsyms);
>  	latch_tree_insert(&ksym->tnode, &bpf_kallsyms_tree, &bpf_kallsyms_tree_ops);
> +
> +	/*
> +	 * Add ksym into bpf_kallsyms in ordered position,
> +	 * which is prepared for us by latch tree addition.
> +	 *
> +	 * Find out the next symbol and insert ksym right
> +	 * ahead of it. If ksym is the last one, just tail
> +	 * add to the bpf_kallsyms.
> +	 */
> +	if (!list_empty(&bpf_kallsyms)) {

nit: this is a bit redundant check (and unlikely to be false)

> +		struct rb_node *next = rb_next(&ksym->tnode.node[0]);
> +
> +		if (next) {
> +			struct bpf_ksym *ptr;
> +
> +			ptr = container_of(next, struct bpf_ksym, tnode.node[0]);
> +			head = &ptr->lnode;
> +		}
> +	}
> +	list_add_tail_rcu(&ksym->lnode, head);
>  }
>
>  void bpf_ksym_add(struct bpf_ksym *ksym)
> --
> 2.24.1
>