From patchwork Sat Feb 8 15:41:56 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 1235295 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: incoming-bpf@patchwork.ozlabs.org Delivered-To: patchwork-incoming-bpf@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=bpf-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=kernel.org Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 48FGdT0sSVz9sRQ for ; Sun, 9 Feb 2020 02:42:37 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727453AbgBHPmc convert rfc822-to-8bit (ORCPT ); Sat, 8 Feb 2020 10:42:32 -0500 Received: from us-smtp-1.mimecast.com ([207.211.31.81]:53347 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727390AbgBHPmc (ORCPT ); Sat, 8 Feb 2020 10:42:32 -0500 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-179-zNFRL5X6M8G2Fvw8FizYBw-1; Sat, 08 Feb 2020 10:42:27 -0500 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 1577D800D6C; Sat, 8 Feb 2020 15:42:25 +0000 (UTC) Received: from krava.redhat.com (ovpn-204-79.brq.redhat.com [10.40.204.79]) by smtp.corp.redhat.com (Postfix) with ESMTP id B134A5C28F; Sat, 8 Feb 2020 15:42:14 +0000 (UTC) From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann Cc: Dave Hansen , Andy Lutomirski , Peter Zijlstra , kbuild test robot , netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko , Yonghong Song , Song Liu , Martin KaFai Lau , Jakub Kicinski , David Miller , Björn Töpel , John Fastabend , Jesper Dangaard Brouer Subject: [PATCH 01/14] x86/mm: Rename is_kernel_text to __is_kernel_text Date: Sat, 8 Feb 2020 16:41:56 +0100 Message-Id: <20200208154209.1797988-2-jolsa@kernel.org> In-Reply-To: <20200208154209.1797988-1-jolsa@kernel.org> References: <20200208154209.1797988-1-jolsa@kernel.org> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 X-MC-Unique: zNFRL5X6M8G2Fvw8FizYBw-1 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: kernel.org Sender: bpf-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org The kbuild test robot reported a compile issue on x86 in one of the following patches, which adds a <linux/kallsyms.h> include into <linux/bpf.h> that is in turn picked up by the init_32.c object. The problem is that <linux/kallsyms.h> defines a global is_kernel_text function, which collides with the static function of the same name defined in init_32.c: $ make ARCH=i386 ...
>> arch/x86/mm/init_32.c:241:19: error: redefinition of 'is_kernel_text' static inline int is_kernel_text(unsigned long addr) ^~~~~~~~~~~~~~ In file included from include/linux/bpf.h:21:0, from include/linux/bpf-cgroup.h:5, from include/linux/cgroup-defs.h:22, from include/linux/cgroup.h:28, from include/linux/hugetlb.h:9, from arch/x86/mm/init_32.c:18: include/linux/kallsyms.h:31:19: note: previous definition of 'is_kernel_text' was here static inline int is_kernel_text(unsigned long addr) Renaming the init_32.c is_kernel_text function to __is_kernel_text. Cc: Dave Hansen Cc: Andy Lutomirski Cc: Peter Zijlstra Reported-by: kbuild test robot Signed-off-by: Jiri Olsa --- arch/x86/mm/init_32.c | 14 +++++++++----- 1 file changed, 9 insertions(+), 5 deletions(-) diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c index 23df4885bbed..eb6ede2c3d43 100644 --- a/arch/x86/mm/init_32.c +++ b/arch/x86/mm/init_32.c @@ -238,7 +238,11 @@ page_table_range_init(unsigned long start, unsigned long end, pgd_t *pgd_base) } } -static inline int is_kernel_text(unsigned long addr) +/* + * The <linux/kallsyms.h> already defines is_kernel_text, + * using '__' prefix not to get in conflict. + */ +static inline int __is_kernel_text(unsigned long addr) { if (addr >= (unsigned long)_text && addr <= (unsigned long)__init_end) return 1; @@ -328,8 +332,8 @@ kernel_physical_mapping_init(unsigned long start, addr2 = (pfn + PTRS_PER_PTE-1) * PAGE_SIZE + PAGE_OFFSET + PAGE_SIZE-1; - if (is_kernel_text(addr) || - is_kernel_text(addr2)) + if (__is_kernel_text(addr) || + __is_kernel_text(addr2)) prot = PAGE_KERNEL_LARGE_EXEC; pages_2m++; @@ -354,7 +358,7 @@ kernel_physical_mapping_init(unsigned long start, */ pgprot_t init_prot = __pgprot(PTE_IDENT_ATTR); - if (is_kernel_text(addr)) + if (__is_kernel_text(addr)) prot = PAGE_KERNEL_EXEC; pages_4k++; @@ -881,7 +885,7 @@ static void mark_nxdata_nx(void) */ unsigned long start = PFN_ALIGN(_etext); /* - * This comes from is_kernel_text upper limit. Also HPAGE where used: + * This comes from __is_kernel_text upper limit.
Also HPAGE where used: */ unsigned long size = (((unsigned long)__init_end + HPAGE_SIZE) & HPAGE_MASK) - start; From patchwork Sat Feb 8 15:41:57 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 1235296 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: incoming-bpf@patchwork.ozlabs.org Delivered-To: patchwork-incoming-bpf@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=bpf-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=kernel.org Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 48FGdT4JkDz9sRR for ; Sun, 9 Feb 2020 02:42:37 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727473AbgBHPmg convert rfc822-to-8bit (ORCPT ); Sat, 8 Feb 2020 10:42:36 -0500 Received: from us-smtp-delivery-1.mimecast.com ([207.211.31.120]:39771 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727303AbgBHPmf (ORCPT ); Sat, 8 Feb 2020 10:42:35 -0500 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-309-mtMt63AEOJaSxn926ECukw-1; Sat, 08 Feb 2020 10:42:30 -0500 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 5BD5F1800D42; Sat, 8 Feb 2020 15:42:28 +0000 (UTC) Received: from krava.redhat.com (ovpn-204-79.brq.redhat.com [10.40.204.79]) by smtp.corp.redhat.com (Postfix) with ESMTP id 6D6C95C21B; Sat, 8 Feb 2020 15:42:25 +0000 (UTC) From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko , Yonghong Song , Song Liu , Martin KaFai Lau , Jakub Kicinski , David Miller , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , John Fastabend , Jesper Dangaard Brouer Subject: [PATCH 02/14] bpf: Add bpf_trampoline_ name prefix for DECLARE_BPF_DISPATCHER Date: Sat, 8 Feb 2020 16:41:57 +0100 Message-Id: <20200208154209.1797988-3-jolsa@kernel.org> In-Reply-To: <20200208154209.1797988-1-jolsa@kernel.org> References: <20200208154209.1797988-1-jolsa@kernel.org> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 X-MC-Unique: mtMt63AEOJaSxn926ECukw-1 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: kernel.org Sender: bpf-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org From: Björn Töpel Adding bpf_trampoline_ name prefix for DECLARE_BPF_DISPATCHER, so all the dispatchers have the common name prefix. And also a small '_' cleanup for bpf_dispatcher_nopfunc function name. 
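To make the renaming concrete, DECLARE_BPF_DISPATCHER(xdp) now hand-expands roughly to the following declarations (an illustrative sketch derived from the macros in the diff below, not code carried by the patch):

unsigned int bpf_dispatcher_xdp_func(const void *ctx,
				     const struct bpf_insn *insnsi,
				     unsigned int (*bpf_func)(const void *,
							      const struct bpf_insn *));
extern struct bpf_dispatcher bpf_dispatcher_xdp;

Callers keep using BPF_DISPATCHER_FUNC(xdp) and BPF_DISPATCHER_PTR(xdp), which now resolve to bpf_dispatcher_xdp_func and &bpf_dispatcher_xdp.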
Signed-off-by: Björn Töpel Signed-off-by: Jiri Olsa --- include/linux/bpf.h | 21 +++++++++++---------- include/linux/filter.h | 7 +++---- net/core/filter.c | 5 ++--- 3 files changed, 16 insertions(+), 17 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 8e9ad3943cd9..15c5f351f837 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -512,7 +512,7 @@ struct bpf_dispatcher { u32 image_off; }; -static __always_inline unsigned int bpf_dispatcher_nopfunc( +static __always_inline unsigned int bpf_dispatcher_nop_func( const void *ctx, const struct bpf_insn *insnsi, unsigned int (*bpf_func)(const void *, @@ -527,7 +527,7 @@ int bpf_trampoline_unlink_prog(struct bpf_prog *prog); void bpf_trampoline_put(struct bpf_trampoline *tr); #define BPF_DISPATCHER_INIT(name) { \ .mutex = __MUTEX_INITIALIZER(name.mutex), \ - .func = &name##func, \ + .func = &name##_func, \ .progs = {}, \ .num_progs = 0, \ .image = NULL, \ @@ -535,7 +535,7 @@ void bpf_trampoline_put(struct bpf_trampoline *tr); } #define DEFINE_BPF_DISPATCHER(name) \ - noinline unsigned int name##func( \ + noinline unsigned int bpf_dispatcher_##name##_func( \ const void *ctx, \ const struct bpf_insn *insnsi, \ unsigned int (*bpf_func)(const void *, \ @@ -543,17 +543,18 @@ void bpf_trampoline_put(struct bpf_trampoline *tr); { \ return bpf_func(ctx, insnsi); \ } \ - EXPORT_SYMBOL(name##func); \ - struct bpf_dispatcher name = BPF_DISPATCHER_INIT(name); + EXPORT_SYMBOL(bpf_dispatcher_##name##_func); \ + struct bpf_dispatcher bpf_dispatcher_##name = \ + BPF_DISPATCHER_INIT(bpf_dispatcher_##name); #define DECLARE_BPF_DISPATCHER(name) \ - unsigned int name##func( \ + unsigned int bpf_dispatcher_##name##_func( \ const void *ctx, \ const struct bpf_insn *insnsi, \ unsigned int (*bpf_func)(const void *, \ const struct bpf_insn *)); \ - extern struct bpf_dispatcher name; -#define BPF_DISPATCHER_FUNC(name) name##func -#define BPF_DISPATCHER_PTR(name) (&name) + extern struct bpf_dispatcher bpf_dispatcher_##name; +#define BPF_DISPATCHER_FUNC(name) bpf_dispatcher_##name##_func +#define BPF_DISPATCHER_PTR(name) (&bpf_dispatcher_##name) void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from, struct bpf_prog *to); struct bpf_image { @@ -579,7 +580,7 @@ static inline int bpf_trampoline_unlink_prog(struct bpf_prog *prog) static inline void bpf_trampoline_put(struct bpf_trampoline *tr) {} #define DEFINE_BPF_DISPATCHER(name) #define DECLARE_BPF_DISPATCHER(name) -#define BPF_DISPATCHER_FUNC(name) bpf_dispatcher_nopfunc +#define BPF_DISPATCHER_FUNC(name) bpf_dispatcher_nop_func #define BPF_DISPATCHER_PTR(name) NULL static inline void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from, diff --git a/include/linux/filter.h b/include/linux/filter.h index f349e2c0884c..eafe72644282 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -577,7 +577,7 @@ DECLARE_STATIC_KEY_FALSE(bpf_stats_enabled_key); ret; }) #define BPF_PROG_RUN(prog, ctx) __BPF_PROG_RUN(prog, ctx, \ - bpf_dispatcher_nopfunc) + bpf_dispatcher_nop_func) #define BPF_SKB_CB_LEN QDISC_CB_PRIV_LEN @@ -701,7 +701,7 @@ static inline u32 bpf_prog_run_clear_cb(const struct bpf_prog *prog, return res; } -DECLARE_BPF_DISPATCHER(bpf_dispatcher_xdp) +DECLARE_BPF_DISPATCHER(xdp) static __always_inline u32 bpf_prog_run_xdp(const struct bpf_prog *prog, struct xdp_buff *xdp) @@ -712,8 +712,7 @@ static __always_inline u32 bpf_prog_run_xdp(const struct bpf_prog *prog, * already takes rcu_read_lock() when fetching the program, so * 
it's not necessary here anymore. */ - return __BPF_PROG_RUN(prog, xdp, - BPF_DISPATCHER_FUNC(bpf_dispatcher_xdp)); + return __BPF_PROG_RUN(prog, xdp, BPF_DISPATCHER_FUNC(xdp)); } void bpf_prog_change_xdp(struct bpf_prog *prev_prog, struct bpf_prog *prog); diff --git a/net/core/filter.c b/net/core/filter.c index 792e3744b915..5db435141e16 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -8835,10 +8835,9 @@ const struct bpf_prog_ops sk_reuseport_prog_ops = { }; #endif /* CONFIG_INET */ -DEFINE_BPF_DISPATCHER(bpf_dispatcher_xdp) +DEFINE_BPF_DISPATCHER(xdp) void bpf_prog_change_xdp(struct bpf_prog *prev_prog, struct bpf_prog *prog) { - bpf_dispatcher_change_prog(BPF_DISPATCHER_PTR(bpf_dispatcher_xdp), - prev_prog, prog); + bpf_dispatcher_change_prog(BPF_DISPATCHER_PTR(xdp), prev_prog, prog); } From patchwork Sat Feb 8 15:41:58 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 1235300 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: incoming-bpf@patchwork.ozlabs.org Delivered-To: patchwork-incoming-bpf@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=bpf-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=kernel.org Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 48FGdc3g1Lz9sRQ for ; Sun, 9 Feb 2020 02:42:44 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727491AbgBHPmj convert rfc822-to-8bit (ORCPT ); Sat, 8 Feb 2020 10:42:39 -0500 Received: from us-smtp-delivery-1.mimecast.com ([207.211.31.120]:33514 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727303AbgBHPmi (ORCPT ); Sat, 8 Feb 2020 10:42:38 -0500 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-147-DBvlFTMkOdOHEXKDqDkZAQ-1; Sat, 08 Feb 2020 10:42:33 -0500 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 99E45101FC60; Sat, 8 Feb 2020 15:42:31 +0000 (UTC) Received: from krava.redhat.com (ovpn-204-79.brq.redhat.com [10.40.204.79]) by smtp.corp.redhat.com (Postfix) with ESMTP id B49635C28F; Sat, 8 Feb 2020 15:42:28 +0000 (UTC) From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko , Yonghong Song , Song Liu , Martin KaFai Lau , Jakub Kicinski , David Miller , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , John Fastabend , Jesper Dangaard Brouer Subject: [PATCH 03/14] bpf: Add struct bpf_ksym Date: Sat, 8 Feb 2020 16:41:58 +0100 Message-Id: <20200208154209.1797988-4-jolsa@kernel.org> In-Reply-To: <20200208154209.1797988-1-jolsa@kernel.org> References: <20200208154209.1797988-1-jolsa@kernel.org> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 X-MC-Unique: DBvlFTMkOdOHEXKDqDkZAQ-1 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: kernel.org Sender: bpf-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org Adding 'struct bpf_ksym' object that will carry the kallsym information for bpf symbol. 
Adding the start and end address to begin with. It will be used by bpf_prog, bpf_trampoline, bpf_dispatcher. Signed-off-by: Jiri Olsa --- include/linux/bpf.h | 6 ++++++ kernel/bpf/core.c | 26 +++++++++++--------------- 2 files changed, 17 insertions(+), 15 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 15c5f351f837..e39ded33fb0c 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -462,6 +462,11 @@ int arch_prepare_bpf_trampoline(void *image, void *image_end, u64 notrace __bpf_prog_enter(void); void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start); +struct bpf_ksym { + unsigned long start; + unsigned long end; +}; + enum bpf_tramp_prog_type { BPF_TRAMP_FENTRY, BPF_TRAMP_FEXIT, @@ -643,6 +648,7 @@ struct bpf_prog_aux { u32 size_poke_tab; struct latch_tree_node ksym_tnode; struct list_head ksym_lnode; + struct bpf_ksym ksym; const struct bpf_prog_ops *ops; struct bpf_map **used_maps; struct bpf_prog *prog; diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 973a20d49749..09b5939dcad3 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -524,17 +524,15 @@ int bpf_jit_harden __read_mostly; long bpf_jit_limit __read_mostly; static __always_inline void -bpf_get_prog_addr_region(const struct bpf_prog *prog, - unsigned long *symbol_start, - unsigned long *symbol_end) +bpf_get_prog_addr_region(const struct bpf_prog *prog) { const struct bpf_binary_header *hdr = bpf_jit_binary_hdr(prog); unsigned long addr = (unsigned long)hdr; WARN_ON_ONCE(!bpf_prog_ebpf_jited(prog)); - *symbol_start = addr; - *symbol_end = addr + hdr->pages * PAGE_SIZE; + prog->aux->ksym.start = addr; + prog->aux->ksym.end = addr + hdr->pages * PAGE_SIZE; } void bpf_get_prog_name(const struct bpf_prog *prog, char *sym) @@ -575,13 +573,10 @@ void bpf_get_prog_name(const struct bpf_prog *prog, char *sym) static __always_inline unsigned long bpf_get_prog_addr_start(struct latch_tree_node *n) { - unsigned long symbol_start, symbol_end; const struct bpf_prog_aux *aux; aux = container_of(n, struct bpf_prog_aux, ksym_tnode); - bpf_get_prog_addr_region(aux->prog, &symbol_start, &symbol_end); - - return symbol_start; + return aux->ksym.start; } static __always_inline bool bpf_tree_less(struct latch_tree_node *a, @@ -593,15 +588,13 @@ static __always_inline bool bpf_tree_less(struct latch_tree_node *a, static __always_inline int bpf_tree_comp(void *key, struct latch_tree_node *n) { unsigned long val = (unsigned long)key; - unsigned long symbol_start, symbol_end; const struct bpf_prog_aux *aux; aux = container_of(n, struct bpf_prog_aux, ksym_tnode); - bpf_get_prog_addr_region(aux->prog, &symbol_start, &symbol_end); - if (val < symbol_start) + if (val < aux->ksym.start) return -1; - if (val >= symbol_end) + if (val >= aux->ksym.end) return 1; return 0; @@ -649,6 +642,8 @@ void bpf_prog_kallsyms_add(struct bpf_prog *fp) !capable(CAP_SYS_ADMIN)) return; + bpf_get_prog_addr_region(fp); + spin_lock_bh(&bpf_lock); bpf_prog_ksym_node_add(fp->aux); spin_unlock_bh(&bpf_lock); @@ -677,14 +672,15 @@ static struct bpf_prog *bpf_prog_kallsyms_find(unsigned long addr) const char *__bpf_address_lookup(unsigned long addr, unsigned long *size, unsigned long *off, char *sym) { - unsigned long symbol_start, symbol_end; struct bpf_prog *prog; char *ret = NULL; rcu_read_lock(); prog = bpf_prog_kallsyms_find(addr); if (prog) { - bpf_get_prog_addr_region(prog, &symbol_start, &symbol_end); + unsigned long symbol_start = prog->aux->ksym.start; + unsigned long symbol_end = prog->aux->ksym.end; + 
bpf_get_prog_name(prog, sym); ret = sym; From patchwork Sat Feb 8 15:41:59 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 1235301 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: incoming-bpf@patchwork.ozlabs.org Delivered-To: patchwork-incoming-bpf@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=bpf-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=kernel.org Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 48FGdc73SGz9sRR for ; Sun, 9 Feb 2020 02:42:44 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727502AbgBHPmn convert rfc822-to-8bit (ORCPT ); Sat, 8 Feb 2020 10:42:43 -0500 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:58927 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727499AbgBHPmn (ORCPT ); Sat, 8 Feb 2020 10:42:43 -0500 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-342-SRaQa9irPK-hd9qKnJ4D-A-1; Sat, 08 Feb 2020 10:42:38 -0500 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 349E38010CA; Sat, 8 Feb 2020 15:42:36 +0000 (UTC) Received: from krava.redhat.com (ovpn-204-79.brq.redhat.com [10.40.204.79]) by smtp.corp.redhat.com (Postfix) with ESMTP id F37345C21B; Sat, 8 Feb 2020 15:42:31 +0000 (UTC) From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko , Yonghong Song , Song Liu , Martin KaFai Lau , Jakub Kicinski , David Miller , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , John Fastabend , Jesper Dangaard Brouer Subject: [PATCH 04/14] bpf: Add name to struct bpf_ksym Date: Sat, 8 Feb 2020 16:41:59 +0100 Message-Id: <20200208154209.1797988-5-jolsa@kernel.org> In-Reply-To: <20200208154209.1797988-1-jolsa@kernel.org> References: <20200208154209.1797988-1-jolsa@kernel.org> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 X-MC-Unique: SRaQa9irPK-hd9qKnJ4D-A-1 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: kernel.org Sender: bpf-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org Adding name to 'struct bpf_ksym' object to carry the name of the symbol for bpf_prog, bpf_trampoline, bpf_dispatcher. The current benefit is that name is now generated only when the symbol is added to the list, so we don't need to generate it every time it's accessed. 
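For reference, together with the previous patch the symbol object now looks roughly like this (a cumulative sketch of the fields added so far in the series, mirroring the hunks in the diffs):

struct bpf_ksym {
	unsigned long start;		/* start of the symbol region (the JIT image for bpf_prog) */
	unsigned long end;		/* end of the symbol region */
	char name[KSYM_NAME_LEN];	/* generated once in bpf_prog_kallsyms_add() */
};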
Signed-off-by: Jiri Olsa --- include/linux/bpf.h | 2 ++ include/linux/filter.h | 6 ------ kernel/bpf/core.c | 8 +++++--- kernel/events/core.c | 4 ++-- 4 files changed, 9 insertions(+), 11 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index e39ded33fb0c..1327b07057a8 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -18,6 +18,7 @@ #include #include #include +#include struct bpf_verifier_env; struct bpf_verifier_log; @@ -465,6 +466,7 @@ void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start); struct bpf_ksym { unsigned long start; unsigned long end; + char name[KSYM_NAME_LEN]; }; enum bpf_tramp_prog_type { diff --git a/include/linux/filter.h b/include/linux/filter.h index eafe72644282..a945c250ad53 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -1062,7 +1062,6 @@ bpf_address_lookup(unsigned long addr, unsigned long *size, void bpf_prog_kallsyms_add(struct bpf_prog *fp); void bpf_prog_kallsyms_del(struct bpf_prog *fp); -void bpf_get_prog_name(const struct bpf_prog *prog, char *sym); #else /* CONFIG_BPF_JIT */ @@ -1131,11 +1130,6 @@ static inline void bpf_prog_kallsyms_del(struct bpf_prog *fp) { } -static inline void bpf_get_prog_name(const struct bpf_prog *prog, char *sym) -{ - sym[0] = '\0'; -} - #endif /* CONFIG_BPF_JIT */ void bpf_prog_kallsyms_del_all(struct bpf_prog *fp); diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 09b5939dcad3..f4f0b3ca95ae 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -535,8 +535,9 @@ bpf_get_prog_addr_region(const struct bpf_prog *prog) prog->aux->ksym.end = addr + hdr->pages * PAGE_SIZE; } -void bpf_get_prog_name(const struct bpf_prog *prog, char *sym) +static void bpf_get_prog_name(const struct bpf_prog *prog) { + char *sym = prog->aux->ksym.name; const char *end = sym + KSYM_NAME_LEN; const struct btf_type *type; const char *func_name; @@ -643,6 +644,7 @@ void bpf_prog_kallsyms_add(struct bpf_prog *fp) return; bpf_get_prog_addr_region(fp); + bpf_get_prog_name(fp); spin_lock_bh(&bpf_lock); bpf_prog_ksym_node_add(fp->aux); @@ -681,7 +683,7 @@ const char *__bpf_address_lookup(unsigned long addr, unsigned long *size, unsigned long symbol_start = prog->aux->ksym.start; unsigned long symbol_end = prog->aux->ksym.end; - bpf_get_prog_name(prog, sym); + strncpy(sym, prog->aux->ksym.name, KSYM_NAME_LEN); ret = sym; if (size) @@ -738,7 +740,7 @@ int bpf_get_kallsym(unsigned int symnum, unsigned long *value, char *type, if (it++ != symnum) continue; - bpf_get_prog_name(aux->prog, sym); + strncpy(sym, aux->ksym.name, KSYM_NAME_LEN); *value = (unsigned long)aux->prog->bpf_func; *type = BPF_SYM_ELF_TYPE; diff --git a/kernel/events/core.c b/kernel/events/core.c index 2173c23c25b4..c4b01ca30cd4 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -8250,7 +8250,7 @@ static void perf_event_bpf_emit_ksymbols(struct bpf_prog *prog, int i; if (prog->aux->func_cnt == 0) { - bpf_get_prog_name(prog, sym); + strncpy(sym, prog->aux->ksym.name, KSYM_NAME_LEN); perf_event_ksymbol(PERF_RECORD_KSYMBOL_TYPE_BPF, (u64)(unsigned long)prog->bpf_func, prog->jited_len, unregister, sym); @@ -8258,7 +8258,7 @@ static void perf_event_bpf_emit_ksymbols(struct bpf_prog *prog, for (i = 0; i < prog->aux->func_cnt; i++) { struct bpf_prog *subprog = prog->aux->func[i]; - bpf_get_prog_name(subprog, sym); + strncpy(sym, subprog->aux->ksym.name, KSYM_NAME_LEN); perf_event_ksymbol( PERF_RECORD_KSYMBOL_TYPE_BPF, (u64)(unsigned long)subprog->bpf_func, From patchwork Sat Feb 8 15:42:00 2020 Content-Type: text/plain; 
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 1235304 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=kernel.org Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 48FGdg09yDz9sRQ for ; Sun, 9 Feb 2020 02:42:47 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727522AbgBHPmp convert rfc822-to-8bit (ORCPT ); Sat, 8 Feb 2020 10:42:45 -0500 Received: from us-smtp-2.mimecast.com ([207.211.31.81]:28903 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727514AbgBHPmp (ORCPT ); Sat, 8 Feb 2020 10:42:45 -0500 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-163-_RoVVG3RPOK_370FP-FZng-1; Sat, 08 Feb 2020 10:42:41 -0500 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 85F65DB23; Sat, 8 Feb 2020 15:42:39 +0000 (UTC) Received: from krava.redhat.com (ovpn-204-79.brq.redhat.com [10.40.204.79]) by smtp.corp.redhat.com (Postfix) with ESMTP id 8CD9D5C21B; Sat, 8 Feb 2020 15:42:36 +0000 (UTC) From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko , Yonghong Song , Song Liu , Martin KaFai Lau , Jakub Kicinski , David Miller , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , John Fastabend , Jesper Dangaard Brouer Subject: [PATCH 05/14] bpf: Add lnode list node to struct bpf_ksym Date: Sat, 8 Feb 2020 16:42:00 +0100 Message-Id: <20200208154209.1797988-6-jolsa@kernel.org> In-Reply-To: <20200208154209.1797988-1-jolsa@kernel.org> References: <20200208154209.1797988-1-jolsa@kernel.org> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 X-MC-Unique: _RoVVG3RPOK_370FP-FZng-1 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: kernel.org Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Adding lnode list node to 'struct bpf_ksym' object, so the symbol itself can be chained and used in other objects like bpf_trampoline and bpf_dispatcher. Changing iterator to bpf_ksym in bpf_get_kallsym. This patch also changes the address used for bpf_prog displayed in /proc/kallsyms. Now it's the address of the whole bpf_prog region, not the address of the entry function. I think it make more sense for /proc/kallsyms to describe all the place used by bpf_prog. We can easily change it in future if needed. 
Signed-off-by: Jiri Olsa --- include/linux/bpf.h | 2 +- kernel/bpf/core.c | 22 +++++++++++----------- 2 files changed, 12 insertions(+), 12 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 1327b07057a8..da67ca3afa2f 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -467,6 +467,7 @@ struct bpf_ksym { unsigned long start; unsigned long end; char name[KSYM_NAME_LEN]; + struct list_head lnode; }; enum bpf_tramp_prog_type { @@ -649,7 +650,6 @@ struct bpf_prog_aux { struct bpf_jit_poke_descriptor *poke_tab; u32 size_poke_tab; struct latch_tree_node ksym_tnode; - struct list_head ksym_lnode; struct bpf_ksym ksym; const struct bpf_prog_ops *ops; struct bpf_map **used_maps; diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index f4f0b3ca95ae..b9b7077e60f3 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -97,7 +97,7 @@ struct bpf_prog *bpf_prog_alloc_no_stats(unsigned int size, gfp_t gfp_extra_flag fp->aux->prog = fp; fp->jit_requested = ebpf_jit_enabled(); - INIT_LIST_HEAD_RCU(&fp->aux->ksym_lnode); + INIT_LIST_HEAD_RCU(&fp->aux->ksym.lnode); return fp; } @@ -612,18 +612,18 @@ static struct latch_tree_root bpf_tree __cacheline_aligned; static void bpf_prog_ksym_node_add(struct bpf_prog_aux *aux) { - WARN_ON_ONCE(!list_empty(&aux->ksym_lnode)); - list_add_tail_rcu(&aux->ksym_lnode, &bpf_kallsyms); + WARN_ON_ONCE(!list_empty(&aux->ksym.lnode)); + list_add_tail_rcu(&aux->ksym.lnode, &bpf_kallsyms); latch_tree_insert(&aux->ksym_tnode, &bpf_tree, &bpf_tree_ops); } static void bpf_prog_ksym_node_del(struct bpf_prog_aux *aux) { - if (list_empty(&aux->ksym_lnode)) + if (list_empty(&aux->ksym.lnode)) return; latch_tree_erase(&aux->ksym_tnode, &bpf_tree, &bpf_tree_ops); - list_del_rcu(&aux->ksym_lnode); + list_del_rcu(&aux->ksym.lnode); } static bool bpf_prog_kallsyms_candidate(const struct bpf_prog *fp) @@ -633,8 +633,8 @@ static bool bpf_prog_kallsyms_candidate(const struct bpf_prog *fp) static bool bpf_prog_kallsyms_verify_off(const struct bpf_prog *fp) { - return list_empty(&fp->aux->ksym_lnode) || - fp->aux->ksym_lnode.prev == LIST_POISON2; + return list_empty(&fp->aux->ksym.lnode) || + fp->aux->ksym.lnode.prev == LIST_POISON2; } void bpf_prog_kallsyms_add(struct bpf_prog *fp) @@ -728,7 +728,7 @@ const struct exception_table_entry *search_bpf_extables(unsigned long addr) int bpf_get_kallsym(unsigned int symnum, unsigned long *value, char *type, char *sym) { - struct bpf_prog_aux *aux; + struct bpf_ksym *ksym; unsigned int it = 0; int ret = -ERANGE; @@ -736,13 +736,13 @@ int bpf_get_kallsym(unsigned int symnum, unsigned long *value, char *type, return ret; rcu_read_lock(); - list_for_each_entry_rcu(aux, &bpf_kallsyms, ksym_lnode) { + list_for_each_entry_rcu(ksym, &bpf_kallsyms, lnode) { if (it++ != symnum) continue; - strncpy(sym, aux->ksym.name, KSYM_NAME_LEN); + strncpy(sym, ksym->name, KSYM_NAME_LEN); - *value = (unsigned long)aux->prog->bpf_func; + *value = ksym->start; *type = BPF_SYM_ELF_TYPE; ret = 0; From patchwork Sat Feb 8 15:42:01 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 1235306 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: incoming-bpf@patchwork.ozlabs.org Delivered-To: patchwork-incoming-bpf@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=bpf-owner@vger.kernel.org; receiver=) Authentication-Results: 
ozlabs.org; dmarc=fail (p=none dis=none) header.from=kernel.org Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 48FGdk1rQFz9sRQ for ; Sun, 9 Feb 2020 02:42:50 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727529AbgBHPmt convert rfc822-to-8bit (ORCPT ); Sat, 8 Feb 2020 10:42:49 -0500 Received: from us-smtp-2.mimecast.com ([205.139.110.61]:24472 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727514AbgBHPmt (ORCPT ); Sat, 8 Feb 2020 10:42:49 -0500 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-212-nlASWSeAPqe1w0HtI1lVug-1; Sat, 08 Feb 2020 10:42:44 -0500 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id B83F4800D6C; Sat, 8 Feb 2020 15:42:42 +0000 (UTC) Received: from krava.redhat.com (ovpn-204-79.brq.redhat.com [10.40.204.79]) by smtp.corp.redhat.com (Postfix) with ESMTP id DE9655C21B; Sat, 8 Feb 2020 15:42:39 +0000 (UTC) From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko , Yonghong Song , Song Liu , Martin KaFai Lau , Jakub Kicinski , David Miller , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , John Fastabend , Jesper Dangaard Brouer Subject: [PATCH 06/14] bpf: Add bpf_kallsyms_tree tree Date: Sat, 8 Feb 2020 16:42:01 +0100 Message-Id: <20200208154209.1797988-7-jolsa@kernel.org> In-Reply-To: <20200208154209.1797988-1-jolsa@kernel.org> References: <20200208154209.1797988-1-jolsa@kernel.org> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 X-MC-Unique: nlASWSeAPqe1w0HtI1lVug-1 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: kernel.org Sender: bpf-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org The bpf_tree is used both for kallsyms iterations and searching for exception tables of bpf programs, which is needed only for bpf programs. Adding bpf_kallsyms_tree that will hold symbols for all bpf_prog, bpf_trampoline and bpf_dispatcher objects and keeping bpf_tree only for bpf_prog objects exception tables search to keep it fast. 
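The resulting split can be summarized as follows (an informal sketch, not part of the diff):

/*
 * bpf_kallsyms_tree - keyed by bpf_ksym start/end; backs the kallsyms
 *                     address-to-symbol lookups and will later also hold
 *                     bpf_trampoline and bpf_dispatcher symbols.
 *
 * bpf_tree          - keyed by the bpf_prog region; kept only for
 *                     bpf_prog_kallsyms_find() and the exception table
 *                     search of bpf programs.
 */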
Signed-off-by: Jiri Olsa --- include/linux/bpf.h | 1 + kernel/bpf/core.c | 60 ++++++++++++++++++++++++++++++++++++++++----- 2 files changed, 55 insertions(+), 6 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index da67ca3afa2f..151d7b1c8435 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -468,6 +468,7 @@ struct bpf_ksym { unsigned long end; char name[KSYM_NAME_LEN]; struct list_head lnode; + struct latch_tree_node tnode; }; enum bpf_tramp_prog_type { diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index b9b7077e60f3..1daa72341450 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -606,8 +606,46 @@ static const struct latch_tree_ops bpf_tree_ops = { .comp = bpf_tree_comp, }; +static __always_inline unsigned long +bpf_get_ksym_start(struct latch_tree_node *n) +{ + const struct bpf_ksym *ksym; + + ksym = container_of(n, struct bpf_ksym, tnode); + return ksym->start; +} + +static __always_inline bool +bpf_ksym_tree_less(struct latch_tree_node *a, + struct latch_tree_node *b) +{ + return bpf_get_ksym_start(a) < bpf_get_ksym_start(b); +} + +static __always_inline int +bpf_ksym_tree_comp(void *key, struct latch_tree_node *n) +{ + unsigned long val = (unsigned long)key; + const struct bpf_ksym *ksym; + + ksym = container_of(n, struct bpf_ksym, tnode); + + if (val < ksym->start) + return -1; + if (val >= ksym->end) + return 1; + + return 0; +} + +static const struct latch_tree_ops bpf_kallsyms_tree_ops = { + .less = bpf_ksym_tree_less, + .comp = bpf_ksym_tree_comp, +}; + static DEFINE_SPINLOCK(bpf_lock); static LIST_HEAD(bpf_kallsyms); +static struct latch_tree_root bpf_kallsyms_tree __cacheline_aligned; static struct latch_tree_root bpf_tree __cacheline_aligned; static void bpf_prog_ksym_node_add(struct bpf_prog_aux *aux) @@ -615,6 +653,7 @@ static void bpf_prog_ksym_node_add(struct bpf_prog_aux *aux) WARN_ON_ONCE(!list_empty(&aux->ksym.lnode)); list_add_tail_rcu(&aux->ksym.lnode, &bpf_kallsyms); latch_tree_insert(&aux->ksym_tnode, &bpf_tree, &bpf_tree_ops); + latch_tree_insert(&aux->ksym.tnode, &bpf_kallsyms_tree, &bpf_kallsyms_tree_ops); } static void bpf_prog_ksym_node_del(struct bpf_prog_aux *aux) @@ -623,6 +662,7 @@ static void bpf_prog_ksym_node_del(struct bpf_prog_aux *aux) return; latch_tree_erase(&aux->ksym_tnode, &bpf_tree, &bpf_tree_ops); + latch_tree_erase(&aux->ksym.tnode, &bpf_kallsyms_tree, &bpf_kallsyms_tree_ops); list_del_rcu(&aux->ksym.lnode); } @@ -671,19 +711,27 @@ static struct bpf_prog *bpf_prog_kallsyms_find(unsigned long addr) NULL; } +static struct bpf_ksym *bpf_ksym_find(unsigned long addr) +{ + struct latch_tree_node *n; + + n = latch_tree_find((void *)addr, &bpf_kallsyms_tree, &bpf_kallsyms_tree_ops); + return n ? 
container_of(n, struct bpf_ksym, tnode) : NULL; +} + const char *__bpf_address_lookup(unsigned long addr, unsigned long *size, unsigned long *off, char *sym) { - struct bpf_prog *prog; + struct bpf_ksym *ksym; char *ret = NULL; rcu_read_lock(); - prog = bpf_prog_kallsyms_find(addr); - if (prog) { - unsigned long symbol_start = prog->aux->ksym.start; - unsigned long symbol_end = prog->aux->ksym.end; + ksym = bpf_ksym_find(addr); + if (ksym) { + unsigned long symbol_start = ksym->start; + unsigned long symbol_end = ksym->end; - strncpy(sym, prog->aux->ksym.name, KSYM_NAME_LEN); + strncpy(sym, ksym->name, KSYM_NAME_LEN); ret = sym; if (size) From patchwork Sat Feb 8 15:42:02 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 1235308 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: incoming-bpf@patchwork.ozlabs.org Delivered-To: patchwork-incoming-bpf@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=bpf-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=kernel.org Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 48FGdn2H2Lz9sRQ for ; Sun, 9 Feb 2020 02:42:53 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727340AbgBHPmw convert rfc822-to-8bit (ORCPT ); Sat, 8 Feb 2020 10:42:52 -0500 Received: from us-smtp-1.mimecast.com ([207.211.31.81]:60814 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727527AbgBHPmw (ORCPT ); Sat, 8 Feb 2020 10:42:52 -0500 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-211-3GgoLTZiMqmv82Oi1nUMyA-1; Sat, 08 Feb 2020 10:42:48 -0500 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 127A8801E74; Sat, 8 Feb 2020 15:42:46 +0000 (UTC) Received: from krava.redhat.com (ovpn-204-79.brq.redhat.com [10.40.204.79]) by smtp.corp.redhat.com (Postfix) with ESMTP id 2F6885C28F; Sat, 8 Feb 2020 15:42:42 +0000 (UTC) From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko , Yonghong Song , Song Liu , Martin KaFai Lau , Jakub Kicinski , David Miller , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , John Fastabend , Jesper Dangaard Brouer Subject: [PATCH 07/14] bpf: Move bpf_tree add/del from bpf_prog_ksym_node_add/del Date: Sat, 8 Feb 2020 16:42:02 +0100 Message-Id: <20200208154209.1797988-8-jolsa@kernel.org> In-Reply-To: <20200208154209.1797988-1-jolsa@kernel.org> References: <20200208154209.1797988-1-jolsa@kernel.org> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 X-MC-Unique: 3GgoLTZiMqmv82Oi1nUMyA-1 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: kernel.org Sender: bpf-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org Moving bpf_tree add/del from bpf_prog_ksym_node_add/del, because it will be used (and renamed) in following patches for bpf_ksym objects. The bpf_tree is specific for bpf_prog objects. 
Signed-off-by: Jiri Olsa --- kernel/bpf/core.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 1daa72341450..f4c16b362858 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -652,7 +652,6 @@ static void bpf_prog_ksym_node_add(struct bpf_prog_aux *aux) { WARN_ON_ONCE(!list_empty(&aux->ksym.lnode)); list_add_tail_rcu(&aux->ksym.lnode, &bpf_kallsyms); - latch_tree_insert(&aux->ksym_tnode, &bpf_tree, &bpf_tree_ops); latch_tree_insert(&aux->ksym.tnode, &bpf_kallsyms_tree, &bpf_kallsyms_tree_ops); } @@ -661,7 +660,6 @@ static void bpf_prog_ksym_node_del(struct bpf_prog_aux *aux) if (list_empty(&aux->ksym.lnode)) return; - latch_tree_erase(&aux->ksym_tnode, &bpf_tree, &bpf_tree_ops); latch_tree_erase(&aux->ksym.tnode, &bpf_kallsyms_tree, &bpf_kallsyms_tree_ops); list_del_rcu(&aux->ksym.lnode); } @@ -687,6 +685,7 @@ void bpf_prog_kallsyms_add(struct bpf_prog *fp) bpf_get_prog_name(fp); spin_lock_bh(&bpf_lock); + latch_tree_insert(&fp->aux->ksym_tnode, &bpf_tree, &bpf_tree_ops); bpf_prog_ksym_node_add(fp->aux); spin_unlock_bh(&bpf_lock); } @@ -697,6 +696,7 @@ void bpf_prog_kallsyms_del(struct bpf_prog *fp) return; spin_lock_bh(&bpf_lock); + latch_tree_erase(&fp->aux->ksym_tnode, &bpf_tree, &bpf_tree_ops); bpf_prog_ksym_node_del(fp->aux); spin_unlock_bh(&bpf_lock); } From patchwork Sat Feb 8 15:42:03 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 1235310 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=kernel.org Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 48FGdw2srZz9sRQ for ; Sun, 9 Feb 2020 02:43:00 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727556AbgBHPm7 convert rfc822-to-8bit (ORCPT ); Sat, 8 Feb 2020 10:42:59 -0500 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:38277 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727546AbgBHPm7 (ORCPT ); Sat, 8 Feb 2020 10:42:59 -0500 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-216-9hSWATo-PHOSsWW2Or6T6g-1; Sat, 08 Feb 2020 10:42:51 -0500 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 4A4151800D42; Sat, 8 Feb 2020 15:42:49 +0000 (UTC) Received: from krava.redhat.com (ovpn-204-79.brq.redhat.com [10.40.204.79]) by smtp.corp.redhat.com (Postfix) with ESMTP id 6ADB95C21B; Sat, 8 Feb 2020 15:42:46 +0000 (UTC) From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko , Yonghong Song , Song Liu , Martin KaFai Lau , Jakub Kicinski , David Miller , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , John Fastabend , Jesper Dangaard Brouer Subject: [PATCH 08/14] bpf: Separate kallsyms add/del functions Date: Sat, 8 Feb 2020 16:42:03 
+0100 Message-Id: <20200208154209.1797988-9-jolsa@kernel.org> In-Reply-To: <20200208154209.1797988-1-jolsa@kernel.org> References: <20200208154209.1797988-1-jolsa@kernel.org> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 X-MC-Unique: 9hSWATo-PHOSsWW2Or6T6g-1 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: kernel.org Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Moving bpf_prog_ksym_node_add/del to __bpf_ksym_add/del and changing the argument to 'struct bpf_ksym' object. Signed-off-by: Jiri Olsa --- kernel/bpf/core.c | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index f4c16b362858..ee082c79ac99 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -648,20 +648,20 @@ static LIST_HEAD(bpf_kallsyms); static struct latch_tree_root bpf_kallsyms_tree __cacheline_aligned; static struct latch_tree_root bpf_tree __cacheline_aligned; -static void bpf_prog_ksym_node_add(struct bpf_prog_aux *aux) +static void __bpf_ksym_add(struct bpf_ksym *ksym) { - WARN_ON_ONCE(!list_empty(&aux->ksym.lnode)); - list_add_tail_rcu(&aux->ksym.lnode, &bpf_kallsyms); - latch_tree_insert(&aux->ksym.tnode, &bpf_kallsyms_tree, &bpf_kallsyms_tree_ops); + WARN_ON_ONCE(!list_empty(&ksym->lnode)); + list_add_tail_rcu(&ksym->lnode, &bpf_kallsyms); + latch_tree_insert(&ksym->tnode, &bpf_kallsyms_tree, &bpf_kallsyms_tree_ops); } -static void bpf_prog_ksym_node_del(struct bpf_prog_aux *aux) +static void __bpf_ksym_del(struct bpf_ksym *ksym) { - if (list_empty(&aux->ksym.lnode)) + if (list_empty(&ksym->lnode)) return; - latch_tree_erase(&aux->ksym.tnode, &bpf_kallsyms_tree, &bpf_kallsyms_tree_ops); - list_del_rcu(&aux->ksym.lnode); + latch_tree_erase(&ksym->tnode, &bpf_kallsyms_tree, &bpf_kallsyms_tree_ops); + list_del_rcu(&ksym->lnode); } static bool bpf_prog_kallsyms_candidate(const struct bpf_prog *fp) @@ -686,7 +686,7 @@ void bpf_prog_kallsyms_add(struct bpf_prog *fp) spin_lock_bh(&bpf_lock); latch_tree_insert(&fp->aux->ksym_tnode, &bpf_tree, &bpf_tree_ops); - bpf_prog_ksym_node_add(fp->aux); + __bpf_ksym_add(&fp->aux->ksym); spin_unlock_bh(&bpf_lock); } @@ -697,7 +697,7 @@ void bpf_prog_kallsyms_del(struct bpf_prog *fp) spin_lock_bh(&bpf_lock); latch_tree_erase(&fp->aux->ksym_tnode, &bpf_tree, &bpf_tree_ops); - bpf_prog_ksym_node_del(fp->aux); + __bpf_ksym_del(&fp->aux->ksym); spin_unlock_bh(&bpf_lock); } From patchwork Sat Feb 8 15:42:04 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 1235313 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: incoming-bpf@patchwork.ozlabs.org Delivered-To: patchwork-incoming-bpf@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=bpf-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=kernel.org Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 48FGdy3z1vz9sRQ for ; Sun, 9 Feb 2020 02:43:02 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727561AbgBHPnC convert rfc822-to-8bit (ORCPT ); Sat, 8 Feb 2020 10:43:02 -0500 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:29714 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by 
vger.kernel.org with ESMTP id S1727559AbgBHPnB (ORCPT ); Sat, 8 Feb 2020 10:43:01 -0500 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-162-HyJJIwAUNOCmXiVbmAIorA-1; Sat, 08 Feb 2020 10:42:54 -0500 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 778598014CE; Sat, 8 Feb 2020 15:42:52 +0000 (UTC) Received: from krava.redhat.com (ovpn-204-79.brq.redhat.com [10.40.204.79]) by smtp.corp.redhat.com (Postfix) with ESMTP id A44685C28F; Sat, 8 Feb 2020 15:42:49 +0000 (UTC) From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko , Yonghong Song , Song Liu , Martin KaFai Lau , Jakub Kicinski , David Miller , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , John Fastabend , Jesper Dangaard Brouer Subject: [PATCH 09/14] bpf: Add bpf_ksym_add/del functions Date: Sat, 8 Feb 2020 16:42:04 +0100 Message-Id: <20200208154209.1797988-10-jolsa@kernel.org> In-Reply-To: <20200208154209.1797988-1-jolsa@kernel.org> References: <20200208154209.1797988-1-jolsa@kernel.org> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 X-MC-Unique: HyJJIwAUNOCmXiVbmAIorA-1 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: kernel.org Sender: bpf-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org Adding bpf_ksym_add/del functions as locked version for __bpf_ksym_add/del. It will be used in following patches for bpf_trampoline and bpf_dispatcher. Signed-off-by: Jiri Olsa --- include/linux/bpf.h | 3 +++ kernel/bpf/core.c | 14 ++++++++++++++ 2 files changed, 17 insertions(+) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 151d7b1c8435..7a4626c8e747 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -573,6 +573,9 @@ struct bpf_image { #define BPF_IMAGE_SIZE (PAGE_SIZE - sizeof(struct bpf_image)) bool is_bpf_image_address(unsigned long address); void *bpf_image_alloc(void); +/* Called only from code, so there's no need for stubs. 
*/ +void bpf_ksym_add(struct bpf_ksym *ksym); +void bpf_ksym_del(struct bpf_ksym *ksym); #else static inline struct bpf_trampoline *bpf_trampoline_lookup(u64 key) { diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index ee082c79ac99..73242fd07893 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -655,6 +655,13 @@ static void __bpf_ksym_add(struct bpf_ksym *ksym) latch_tree_insert(&ksym->tnode, &bpf_kallsyms_tree, &bpf_kallsyms_tree_ops); } +void bpf_ksym_add(struct bpf_ksym *ksym) +{ + spin_lock_bh(&bpf_lock); + __bpf_ksym_add(ksym); + spin_unlock_bh(&bpf_lock); +} + static void __bpf_ksym_del(struct bpf_ksym *ksym) { if (list_empty(&ksym->lnode)) @@ -664,6 +671,13 @@ static void __bpf_ksym_del(struct bpf_ksym *ksym) list_del_rcu(&ksym->lnode); } +void bpf_ksym_del(struct bpf_ksym *ksym) +{ + spin_lock_bh(&bpf_lock); + __bpf_ksym_del(ksym); + spin_unlock_bh(&bpf_lock); +} + static bool bpf_prog_kallsyms_candidate(const struct bpf_prog *fp) { return fp->jited && !bpf_prog_was_classic(fp); From patchwork Sat Feb 8 15:42:05 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 1235312 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: incoming-bpf@patchwork.ozlabs.org Delivered-To: patchwork-incoming-bpf@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=bpf-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=kernel.org Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 48FGdx4Mfyz9sRQ for ; Sun, 9 Feb 2020 02:43:01 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727558AbgBHPnB convert rfc822-to-8bit (ORCPT ); Sat, 8 Feb 2020 10:43:01 -0500 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:51943 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727546AbgBHPnB (ORCPT ); Sat, 8 Feb 2020 10:43:01 -0500 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-350-45redfCwPh-WpmZuHuWcIg-1; Sat, 08 Feb 2020 10:42:57 -0500 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id D037C8014D1; Sat, 8 Feb 2020 15:42:55 +0000 (UTC) Received: from krava.redhat.com (ovpn-204-79.brq.redhat.com [10.40.204.79]) by smtp.corp.redhat.com (Postfix) with ESMTP id DD4225C28F; Sat, 8 Feb 2020 15:42:52 +0000 (UTC) From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko , Yonghong Song , Song Liu , Martin KaFai Lau , Jakub Kicinski , David Miller , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , John Fastabend , Jesper Dangaard Brouer Subject: [PATCH 10/14] bpf: Re-initialize lnode in bpf_ksym_del Date: Sat, 8 Feb 2020 16:42:05 +0100 Message-Id: <20200208154209.1797988-11-jolsa@kernel.org> In-Reply-To: <20200208154209.1797988-1-jolsa@kernel.org> References: <20200208154209.1797988-1-jolsa@kernel.org> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 X-MC-Unique: 45redfCwPh-WpmZuHuWcIg-1 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: kernel.org Sender: 
bpf-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org When bpf_prog is removed from kallsyms it's on the way out to be removed, so we don't care about lnode state. However the bpf_ksym_del will be used also by bpf_trampoline and bpf_dispatcher objects, which stay allocated even when they are not in kallsyms list, hence the lnode re-init. The list_del_rcu commentary states that we need to call synchronize_rcu, before we can change/re-init the list_head pointers. Signed-off-by: Jiri Olsa --- kernel/bpf/core.c | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 73242fd07893..66b17bea286e 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -676,6 +676,13 @@ void bpf_ksym_del(struct bpf_ksym *ksym) spin_lock_bh(&bpf_lock); __bpf_ksym_del(ksym); spin_unlock_bh(&bpf_lock); + + /* + * As explained in list_del_rcu, We must call synchronize_rcu + * before changing list_head pointers. + */ + synchronize_rcu(); + INIT_LIST_HEAD_RCU(&ksym->lnode); } static bool bpf_prog_kallsyms_candidate(const struct bpf_prog *fp) From patchwork Sat Feb 8 15:42:06 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 1235316 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: incoming-bpf@patchwork.ozlabs.org Delivered-To: patchwork-incoming-bpf@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=bpf-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=kernel.org Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 48FGf23qLlz9sRQ for ; Sun, 9 Feb 2020 02:43:06 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727575AbgBHPnG convert rfc822-to-8bit (ORCPT ); Sat, 8 Feb 2020 10:43:06 -0500 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:46403 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727574AbgBHPnF (ORCPT ); Sat, 8 Feb 2020 10:43:05 -0500 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-306-nBi2I-omMnqZKroaQ9IxGQ-1; Sat, 08 Feb 2020 10:43:01 -0500 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 07BFF100726F; Sat, 8 Feb 2020 15:42:59 +0000 (UTC) Received: from krava.redhat.com (ovpn-204-79.brq.redhat.com [10.40.204.79]) by smtp.corp.redhat.com (Postfix) with ESMTP id 35AFB5C21B; Sat, 8 Feb 2020 15:42:56 +0000 (UTC) From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko , Yonghong Song , Song Liu , Martin KaFai Lau , Jakub Kicinski , David Miller , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , John Fastabend , Jesper Dangaard Brouer Subject: [PATCH 11/14] bpf: Rename bpf_tree to bpf_progs_tree Date: Sat, 8 Feb 2020 16:42:06 +0100 Message-Id: <20200208154209.1797988-12-jolsa@kernel.org> In-Reply-To: <20200208154209.1797988-1-jolsa@kernel.org> References: <20200208154209.1797988-1-jolsa@kernel.org> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 
X-MC-Unique: nBi2I-omMnqZKroaQ9IxGQ-1 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: kernel.org Sender: bpf-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org Renaming bpf_tree to bpf_progs_tree and bpf_tree_ops to bpf_progs_tree_ops to better capture the usage of the tree, which is used for the bpf_prog objects only for exception tables search. Signed-off-by: Jiri Olsa --- kernel/bpf/core.c | 24 ++++++++++++++---------- 1 file changed, 14 insertions(+), 10 deletions(-) diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 66b17bea286e..50af5dcf7ff9 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -580,13 +580,14 @@ bpf_get_prog_addr_start(struct latch_tree_node *n) return aux->ksym.start; } -static __always_inline bool bpf_tree_less(struct latch_tree_node *a, - struct latch_tree_node *b) +static __always_inline bool +bpf_progs_tree_less(struct latch_tree_node *a, + struct latch_tree_node *b) { return bpf_get_prog_addr_start(a) < bpf_get_prog_addr_start(b); } -static __always_inline int bpf_tree_comp(void *key, struct latch_tree_node *n) +static __always_inline int bpf_progs_tree_comp(void *key, struct latch_tree_node *n) { unsigned long val = (unsigned long)key; const struct bpf_prog_aux *aux; @@ -601,9 +602,9 @@ static __always_inline int bpf_tree_comp(void *key, struct latch_tree_node *n) return 0; } -static const struct latch_tree_ops bpf_tree_ops = { - .less = bpf_tree_less, - .comp = bpf_tree_comp, +static const struct latch_tree_ops bpf_progs_tree_ops = { + .less = bpf_progs_tree_less, + .comp = bpf_progs_tree_comp, }; static __always_inline unsigned long @@ -646,7 +647,7 @@ static const struct latch_tree_ops bpf_kallsyms_tree_ops = { static DEFINE_SPINLOCK(bpf_lock); static LIST_HEAD(bpf_kallsyms); static struct latch_tree_root bpf_kallsyms_tree __cacheline_aligned; -static struct latch_tree_root bpf_tree __cacheline_aligned; +static struct latch_tree_root bpf_progs_tree __cacheline_aligned; static void __bpf_ksym_add(struct bpf_ksym *ksym) { @@ -706,7 +707,8 @@ void bpf_prog_kallsyms_add(struct bpf_prog *fp) bpf_get_prog_name(fp); spin_lock_bh(&bpf_lock); - latch_tree_insert(&fp->aux->ksym_tnode, &bpf_tree, &bpf_tree_ops); + latch_tree_insert(&fp->aux->ksym_tnode, &bpf_progs_tree, + &bpf_progs_tree_ops); __bpf_ksym_add(&fp->aux->ksym); spin_unlock_bh(&bpf_lock); } @@ -717,7 +719,8 @@ void bpf_prog_kallsyms_del(struct bpf_prog *fp) return; spin_lock_bh(&bpf_lock); - latch_tree_erase(&fp->aux->ksym_tnode, &bpf_tree, &bpf_tree_ops); + latch_tree_erase(&fp->aux->ksym_tnode, &bpf_progs_tree, + &bpf_progs_tree_ops); __bpf_ksym_del(&fp->aux->ksym); spin_unlock_bh(&bpf_lock); } @@ -726,7 +729,8 @@ static struct bpf_prog *bpf_prog_kallsyms_find(unsigned long addr) { struct latch_tree_node *n; - n = latch_tree_find((void *)addr, &bpf_tree, &bpf_tree_ops); + n = latch_tree_find((void *)addr, &bpf_progs_tree, + &bpf_progs_tree_ops); return n ? 
From patchwork Sat Feb 8 15:42:07 2020
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 1235318
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiri Olsa
To: Alexei Starovoitov, Daniel Borkmann
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko, Yonghong Song, Song Liu, Martin KaFai Lau, Jakub Kicinski, David Miller, Björn Töpel, John Fastabend, Jesper Dangaard Brouer
Subject: [PATCH 12/14] bpf: Add trampolines to kallsyms
Date: Sat, 8 Feb 2020 16:42:07 +0100
Message-Id: <20200208154209.1797988-13-jolsa@kernel.org>
In-Reply-To: <20200208154209.1797988-1-jolsa@kernel.org>
References: <20200208154209.1797988-1-jolsa@kernel.org>

Adding trampolines to kallsyms. Each trampoline is displayed as bpf_trampoline_<ID> [bpf], where ID is the BTF id of the trampoline function.

Signed-off-by: Jiri Olsa
---
 include/linux/bpf.h     |  2 ++
 kernel/bpf/trampoline.c | 23 +++++++++++++++++++++++
 2 files changed, 25 insertions(+)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 7a4626c8e747..b91bac10d3ea 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -502,6 +502,7 @@ struct bpf_trampoline {
 	/* Executable image of trampoline */
 	void *image;
 	u64 selector;
+	struct bpf_ksym ksym;
 };
 
 #define BPF_DISPATCHER_MAX 48 /* Fits in 2048B */
@@ -573,6 +574,7 @@ struct bpf_image {
 #define BPF_IMAGE_SIZE		(PAGE_SIZE - sizeof(struct bpf_image))
 bool is_bpf_image_address(unsigned long address);
 void *bpf_image_alloc(void);
+void bpf_image_ksym_add(void *data, struct bpf_ksym *ksym);
 /* Called only from code, so there's no need for stubs. */
 void bpf_ksym_add(struct bpf_ksym *ksym);
 void bpf_ksym_del(struct bpf_ksym *ksym);
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 6b264a92064b..1ee29907cbe5 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -96,6 +96,15 @@ bool is_bpf_image_address(unsigned long addr)
 	return ret;
 }
 
+void bpf_image_ksym_add(void *data, struct bpf_ksym *ksym)
+{
+	struct bpf_image *image = container_of(data, struct bpf_image, data);
+
+	ksym->start = (unsigned long) image;
+	ksym->end = ksym->start + PAGE_SIZE;
+	bpf_ksym_add(ksym);
+}
+
 struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
 {
 	struct bpf_trampoline *tr;
@@ -131,6 +140,7 @@ struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
 	for (i = 0; i < BPF_TRAMP_MAX; i++)
 		INIT_HLIST_HEAD(&tr->progs_hlist[i]);
 	tr->image = image;
+	INIT_LIST_HEAD_RCU(&tr->ksym.lnode);
 out:
 	mutex_unlock(&trampoline_mutex);
 	return tr;
@@ -267,6 +277,15 @@ static enum bpf_tramp_prog_type bpf_attach_type_to_tramp(enum bpf_attach_type t)
 	}
 }
 
+static void bpf_trampoline_kallsyms_add(struct bpf_trampoline *tr)
+{
+	struct bpf_ksym *ksym = &tr->ksym;
+
+	snprintf(ksym->name, KSYM_NAME_LEN, "bpf_trampoline_%llu",
+		 tr->key & ((u64) (1LU << 32) - 1));
+	bpf_image_ksym_add(tr->image, &tr->ksym);
+}
+
 int bpf_trampoline_link_prog(struct bpf_prog *prog)
 {
 	enum bpf_tramp_prog_type kind;
@@ -311,6 +330,8 @@ int bpf_trampoline_link_prog(struct bpf_prog *prog)
 	if (err) {
 		hlist_del(&prog->aux->tramp_hlist);
 		tr->progs_cnt[kind]--;
+	} else if (cnt == 0) {
+		bpf_trampoline_kallsyms_add(tr);
 	}
 out:
 	mutex_unlock(&tr->mutex);
@@ -336,6 +357,8 @@ int bpf_trampoline_unlink_prog(struct bpf_prog *prog)
 	}
 	hlist_del(&prog->aux->tramp_hlist);
 	tr->progs_cnt[kind]--;
+	if (!(tr->progs_cnt[BPF_TRAMP_FENTRY] + tr->progs_cnt[BPF_TRAMP_FEXIT]))
+		bpf_ksym_del(&tr->ksym);
 	err = bpf_trampoline_update(prog->aux->trampoline);
 out:
 	mutex_unlock(&tr->mutex);
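To make the naming scheme concrete: bpf_trampoline_kallsyms_add masks the trampoline key down to its low 32 bits (the BTF id named in the changelog) and builds the symbol name from that. A standalone userspace sketch of the same computation, with made-up demo_* names:

	#include <stdio.h>
	#include <stdint.h>
	#include <inttypes.h>

	/* The low 32 bits of the trampoline key carry the BTF id;
	 * the kallsyms name is derived from that id only. */
	static void demo_trampoline_sym_name(uint64_t key, char *buf, size_t len)
	{
		uint64_t btf_id = key & (((uint64_t)1 << 32) - 1);

		snprintf(buf, len, "bpf_trampoline_%" PRIu64, btf_id);
	}

	int main(void)
	{
		char name[64];

		demo_trampoline_sym_name(4242, name, sizeof(name));
		printf("%s\n", name);	/* bpf_trampoline_4242 */
		return 0;
	}

Keeping the BTF id in the symbol name makes it possible to correlate the trampoline entry with its target function through BTF.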
From patchwork Sat Feb 8 15:42:08 2020
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 1235320
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiri Olsa
To: Alexei Starovoitov, Daniel Borkmann
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko, Yonghong Song, Song Liu, Martin KaFai Lau, Jakub Kicinski, David Miller, Björn Töpel, John Fastabend, Jesper Dangaard Brouer
Subject: [PATCH 13/14] bpf: Add dispatchers to kallsyms
Date: Sat, 8 Feb 2020 16:42:08 +0100
Message-Id: <20200208154209.1797988-14-jolsa@kernel.org>
In-Reply-To: <20200208154209.1797988-1-jolsa@kernel.org>
References: <20200208154209.1797988-1-jolsa@kernel.org>

Adding dispatchers to kallsyms. Each dispatcher is displayed as bpf_dispatcher_<NAME>, where NAME is the name of the dispatcher.

Signed-off-by: Jiri Olsa
---
 include/linux/bpf.h     | 19 ++++++++++++-------
 kernel/bpf/dispatcher.c |  6 ++++++
 2 files changed, 18 insertions(+), 7 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index b91bac10d3ea..837cdc093d2c 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -520,6 +520,7 @@ struct bpf_dispatcher {
 	int num_progs;
 	void *image;
 	u32 image_off;
+	struct bpf_ksym ksym;
 };
 
 static __always_inline unsigned int bpf_dispatcher_nop_func(
@@ -535,13 +536,17 @@ struct bpf_trampoline *bpf_trampoline_lookup(u64 key);
 int bpf_trampoline_link_prog(struct bpf_prog *prog);
 int bpf_trampoline_unlink_prog(struct bpf_prog *prog);
 void bpf_trampoline_put(struct bpf_trampoline *tr);
-#define BPF_DISPATCHER_INIT(name) {			\
-	.mutex = __MUTEX_INITIALIZER(name.mutex),	\
-	.func = &name##_func,				\
-	.progs = {},					\
-	.num_progs = 0,					\
-	.image = NULL,					\
-	.image_off = 0					\
+#define BPF_DISPATCHER_INIT(_name) {				\
+	.mutex = __MUTEX_INITIALIZER(_name.mutex),		\
+	.func = &_name##_func,					\
+	.progs = {},						\
+	.num_progs = 0,						\
+	.image = NULL,						\
+	.image_off = 0,						\
+	.ksym = {						\
+		.name  = #_name,				\
+		.lnode = LIST_HEAD_INIT(_name.ksym.lnode),	\
+	},							\
 }
 
 #define DEFINE_BPF_DISPATCHER(name)					\
diff --git a/kernel/bpf/dispatcher.c b/kernel/bpf/dispatcher.c
index b3e5b214fed8..8771d2cc5840 100644
--- a/kernel/bpf/dispatcher.c
+++ b/kernel/bpf/dispatcher.c
@@ -152,6 +152,12 @@ void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from,
 	if (!changed)
 		goto out;
 
+	if (!prev_num_progs)
+		bpf_image_ksym_add(d->image, &d->ksym);
+
+	if (!d->num_progs)
+		bpf_ksym_del(&d->ksym);
+
 	bpf_dispatcher_update(d, prev_num_progs);
 out:
 	mutex_unlock(&d->mutex);
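The interesting part of the BPF_DISPATCHER_INIT change is that the ksym name and its list node are filled in at compile time, by stringifying the variable name and statically initializing the list head. A minimal sketch of the same pattern, with made-up demo_* names rather than the real dispatcher macros:

	#include <linux/list.h>
	#include <linux/kallsyms.h>	/* KSYM_NAME_LEN */

	struct demo_ksym {
		char name[KSYM_NAME_LEN];
		struct list_head lnode;
	};

	struct demo_obj {
		struct demo_ksym ksym;
	};

	/* Same trick as BPF_DISPATCHER_INIT: stringify the variable name
	 * into .ksym.name and statically initialize the list node. */
	#define DEMO_OBJ_INIT(_name) {					\
		.ksym = {						\
			.name  = #_name,				\
			.lnode = LIST_HEAD_INIT(_name.ksym.lnode),	\
		},							\
	}

	#define DEFINE_DEMO_OBJ(name)					\
		struct demo_obj demo_obj_##name =			\
			DEMO_OBJ_INIT(demo_obj_##name)

	DEFINE_DEMO_OBJ(example);	/* .ksym.name == "demo_obj_example" */

The dispatcher symbol is then only published while at least one program is attached: bpf_dispatcher_change_prog adds the ksym when the program count goes from zero to one and deletes it again when it drops back to zero.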
From patchwork Sat Feb 8 15:42:09 2020
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 1235322
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiri Olsa
To: Alexei Starovoitov, Daniel Borkmann
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko, Yonghong Song, Song Liu, Martin KaFai Lau, Jakub Kicinski, David Miller, Björn Töpel, John Fastabend, Jesper Dangaard Brouer
Subject: [PATCH 14/14] bpf: Sort bpf kallsyms symbols
Date: Sat, 8 Feb 2020 16:42:09 +0100
Message-Id: <20200208154209.1797988-15-jolsa@kernel.org>
In-Reply-To: <20200208154209.1797988-1-jolsa@kernel.org>
References: <20200208154209.1797988-1-jolsa@kernel.org>

Currently we don't sort bpf_kallsyms, so symbols are displayed in /proc/kallsyms in the order they were added via __bpf_ksym_add. Using the latch tree to get the next bpf_ksym object and inserting the new symbol ahead of it keeps the list in the same order as the tree.

Signed-off-by: Jiri Olsa
Acked-by: Andrii Nakryiko
---
 kernel/bpf/core.c | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 50af5dcf7ff9..c63ff34b2128 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -651,9 +651,30 @@ static struct latch_tree_root bpf_progs_tree __cacheline_aligned;
 
 static void __bpf_ksym_add(struct bpf_ksym *ksym)
 {
+	struct list_head *head = &bpf_kallsyms;
+
 	WARN_ON_ONCE(!list_empty(&ksym->lnode));
-	list_add_tail_rcu(&ksym->lnode, &bpf_kallsyms);
 	latch_tree_insert(&ksym->tnode, &bpf_kallsyms_tree, &bpf_kallsyms_tree_ops);
+
+	/*
+	 * Add ksym into bpf_kallsyms in ordered position,
+	 * which is prepared for us by latch tree addition.
+	 *
+	 * Find out the next symbol and insert ksym right
+	 * ahead of it. If ksym is the last one, just tail
+	 * add to the bpf_kallsyms.
+	 */
+	if (!list_empty(&bpf_kallsyms)) {
+		struct rb_node *next = rb_next(&ksym->tnode.node[0]);
+
+		if (next) {
+			struct bpf_ksym *ptr;
+
+			ptr = container_of(next, struct bpf_ksym, tnode.node[0]);
+			head = &ptr->lnode;
+		}
+	}
+	list_add_tail_rcu(&ksym->lnode, head);
 }
 
 void bpf_ksym_add(struct bpf_ksym *ksym)