From patchwork Thu Mar 12 19:55:56 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 1253938 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: incoming-bpf@patchwork.ozlabs.org Delivered-To: patchwork-incoming-bpf@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=bpf-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=kernel.org Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 48dfj90Q2Lz9sNg for ; Fri, 13 Mar 2020 06:56:28 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726856AbgCLT42 convert rfc822-to-8bit (ORCPT ); Thu, 12 Mar 2020 15:56:28 -0400 Received: from us-smtp-delivery-1.mimecast.com ([207.211.31.120]:58348 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726558AbgCLT42 (ORCPT ); Thu, 12 Mar 2020 15:56:28 -0400 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-299-aYOUD-UoOs6cskvHX4Rsyg-1; Thu, 12 Mar 2020 15:56:23 -0400 X-MC-Unique: aYOUD-UoOs6cskvHX4Rsyg-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 2000219251A3; Thu, 12 Mar 2020 19:56:21 +0000 (UTC) Received: from krava.redhat.com (ovpn-204-40.brq.redhat.com [10.40.204.40]) by smtp.corp.redhat.com (Postfix) with ESMTP id A51105D9C5; Thu, 12 Mar 2020 19:56:15 +0000 (UTC) From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann Cc: Song Liu , kbuild test robot , netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko , Yonghong Song , Martin KaFai Lau , Jakub Kicinski , David Miller , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , John Fastabend , Jesper Dangaard Brouer , Arnaldo Carvalho de Melo , Song Liu Subject: [PATCH 01/15] x86/mm: Rename is_kernel_text to __is_kernel_text Date: Thu, 12 Mar 2020 20:55:56 +0100 Message-Id: <20200312195610.346362-2-jolsa@kernel.org> In-Reply-To: <20200312195610.346362-1-jolsa@kernel.org> References: <20200312195610.346362-1-jolsa@kernel.org> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: kernel.org Sender: bpf-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org The kbuild test robot reported a compile issue on x86 in one of the following patches, which adds a <linux/kallsyms.h> include into <linux/bpf.h> that is then picked up by the init_32.c object. The problem is that <linux/kallsyms.h> defines a global is_kernel_text function, which collides with the static function of the same name defined in init_32.c: $ make ARCH=i386 ...
>> arch/x86/mm/init_32.c:241:19: error: redefinition of 'is_kernel_text' static inline int is_kernel_text(unsigned long addr) ^~~~~~~~~~~~~~ In file included from include/linux/bpf.h:21:0, from include/linux/bpf-cgroup.h:5, from include/linux/cgroup-defs.h:22, from include/linux/cgroup.h:28, from include/linux/hugetlb.h:9, from arch/x86/mm/init_32.c:18: include/linux/kallsyms.h:31:19: note: previous definition of 'is_kernel_text' was here static inline int is_kernel_text(unsigned long addr) Renaming the init_32.c is_kernel_text function to __is_kernel_text. Acked-by: Song Liu Reported-by: kbuild test robot Signed-off-by: Jiri Olsa --- arch/x86/mm/init_32.c | 14 +++++++++----- 1 file changed, 9 insertions(+), 5 deletions(-) diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c index 23df4885bbed..eb6ede2c3d43 100644 --- a/arch/x86/mm/init_32.c +++ b/arch/x86/mm/init_32.c @@ -238,7 +238,11 @@ page_table_range_init(unsigned long start, unsigned long end, pgd_t *pgd_base) } } -static inline int is_kernel_text(unsigned long addr) +/* + * The already defines is_kernel_text, + * using '__' prefix not to get in conflict. + */ +static inline int __is_kernel_text(unsigned long addr) { if (addr >= (unsigned long)_text && addr <= (unsigned long)__init_end) return 1; @@ -328,8 +332,8 @@ kernel_physical_mapping_init(unsigned long start, addr2 = (pfn + PTRS_PER_PTE-1) * PAGE_SIZE + PAGE_OFFSET + PAGE_SIZE-1; - if (is_kernel_text(addr) || - is_kernel_text(addr2)) + if (__is_kernel_text(addr) || + __is_kernel_text(addr2)) prot = PAGE_KERNEL_LARGE_EXEC; pages_2m++; @@ -354,7 +358,7 @@ kernel_physical_mapping_init(unsigned long start, */ pgprot_t init_prot = __pgprot(PTE_IDENT_ATTR); - if (is_kernel_text(addr)) + if (__is_kernel_text(addr)) prot = PAGE_KERNEL_EXEC; pages_4k++; @@ -881,7 +885,7 @@ static void mark_nxdata_nx(void) */ unsigned long start = PFN_ALIGN(_etext); /* - * This comes from is_kernel_text upper limit. Also HPAGE where used: + * This comes from __is_kernel_text upper limit. 
Also HPAGE where used: */ unsigned long size = (((unsigned long)__init_end + HPAGE_SIZE) & HPAGE_MASK) - start; From patchwork Thu Mar 12 19:55:57 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 1253941 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=kernel.org Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 48dfjH4p7Fz9sNg for ; Fri, 13 Mar 2020 06:56:35 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726882AbgCLT4e convert rfc822-to-8bit (ORCPT ); Thu, 12 Mar 2020 15:56:34 -0400 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:53733 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726729AbgCLT4e (ORCPT ); Thu, 12 Mar 2020 15:56:34 -0400 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-84-UoZYZcN4OESpUgb2FTkFDQ-1; Thu, 12 Mar 2020 15:56:28 -0400 X-MC-Unique: UoZYZcN4OESpUgb2FTkFDQ-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id F207919251A6; Thu, 12 Mar 2020 19:56:25 +0000 (UTC) Received: from krava.redhat.com (ovpn-204-40.brq.redhat.com [10.40.204.40]) by smtp.corp.redhat.com (Postfix) with ESMTP id 80AF55D9C5; Thu, 12 Mar 2020 19:56:21 +0000 (UTC) From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann Cc: Song Liu , netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko , Yonghong Song , Martin KaFai Lau , Jakub Kicinski , David Miller , =?utf-8?b?QmrDtnJuIFTDtnBl?= =?utf-8?q?l?= , John Fastabend , Jesper Dangaard Brouer , Arnaldo Carvalho de Melo , Song Liu Subject: [PATCH 02/15] bpf: Add bpf_trampoline_ name prefix for DECLARE_BPF_DISPATCHER Date: Thu, 12 Mar 2020 20:55:57 +0100 Message-Id: <20200312195610.346362-3-jolsa@kernel.org> In-Reply-To: <20200312195610.346362-1-jolsa@kernel.org> References: <20200312195610.346362-1-jolsa@kernel.org> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: kernel.org Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Björn Töpel Adding bpf_trampoline_ name prefix for DECLARE_BPF_DISPATCHER, so all the dispatchers have the common name prefix. And also a small '_' cleanup for bpf_dispatcher_nopfunc function name. 
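For illustration only (editor's sketch, not part of the patch): with the renamed macros, the existing DEFINE_BPF_DISPATCHER(xdp) user in net/core/filter.c expands roughly to

	noinline unsigned int bpf_dispatcher_xdp_func(const void *ctx,
						      const struct bpf_insn *insnsi,
						      unsigned int (*bpf_func)(const void *,
									       const struct bpf_insn *))
	{
		return bpf_func(ctx, insnsi);
	}
	EXPORT_SYMBOL(bpf_dispatcher_xdp_func);
	struct bpf_dispatcher bpf_dispatcher_xdp = BPF_DISPATCHER_INIT(bpf_dispatcher_xdp);

so the exported dispatcher function and the dispatcher struct both carry the common name prefix, and BPF_DISPATCHER_FUNC(xdp) / BPF_DISPATCHER_PTR(xdp) resolve to them.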
Acked-by: Song Liu Signed-off-by: Björn Töpel Signed-off-by: Jiri Olsa --- include/linux/bpf.h | 21 +++++++++++---------- include/linux/filter.h | 9 ++++----- net/core/filter.c | 5 ++--- 3 files changed, 17 insertions(+), 18 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 4fd91b7c95ea..47cdb98e0e0a 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -522,7 +522,7 @@ struct bpf_dispatcher { u32 image_off; }; -static __always_inline unsigned int bpf_dispatcher_nopfunc( +static __always_inline unsigned int bpf_dispatcher_nop_func( const void *ctx, const struct bpf_insn *insnsi, unsigned int (*bpf_func)(const void *, @@ -537,7 +537,7 @@ int bpf_trampoline_unlink_prog(struct bpf_prog *prog); void bpf_trampoline_put(struct bpf_trampoline *tr); #define BPF_DISPATCHER_INIT(name) { \ .mutex = __MUTEX_INITIALIZER(name.mutex), \ - .func = &name##func, \ + .func = &name##_func, \ .progs = {}, \ .num_progs = 0, \ .image = NULL, \ @@ -545,7 +545,7 @@ void bpf_trampoline_put(struct bpf_trampoline *tr); } #define DEFINE_BPF_DISPATCHER(name) \ - noinline unsigned int name##func( \ + noinline unsigned int bpf_dispatcher_##name##_func( \ const void *ctx, \ const struct bpf_insn *insnsi, \ unsigned int (*bpf_func)(const void *, \ @@ -553,17 +553,18 @@ void bpf_trampoline_put(struct bpf_trampoline *tr); { \ return bpf_func(ctx, insnsi); \ } \ - EXPORT_SYMBOL(name##func); \ - struct bpf_dispatcher name = BPF_DISPATCHER_INIT(name); + EXPORT_SYMBOL(bpf_dispatcher_##name##_func); \ + struct bpf_dispatcher bpf_dispatcher_##name = \ + BPF_DISPATCHER_INIT(bpf_dispatcher_##name); #define DECLARE_BPF_DISPATCHER(name) \ - unsigned int name##func( \ + unsigned int bpf_dispatcher_##name##_func( \ const void *ctx, \ const struct bpf_insn *insnsi, \ unsigned int (*bpf_func)(const void *, \ const struct bpf_insn *)); \ - extern struct bpf_dispatcher name; -#define BPF_DISPATCHER_FUNC(name) name##func -#define BPF_DISPATCHER_PTR(name) (&name) + extern struct bpf_dispatcher bpf_dispatcher_##name; +#define BPF_DISPATCHER_FUNC(name) bpf_dispatcher_##name##_func +#define BPF_DISPATCHER_PTR(name) (&bpf_dispatcher_##name) void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from, struct bpf_prog *to); struct bpf_image { @@ -589,7 +590,7 @@ static inline int bpf_trampoline_unlink_prog(struct bpf_prog *prog) static inline void bpf_trampoline_put(struct bpf_trampoline *tr) {} #define DEFINE_BPF_DISPATCHER(name) #define DECLARE_BPF_DISPATCHER(name) -#define BPF_DISPATCHER_FUNC(name) bpf_dispatcher_nopfunc +#define BPF_DISPATCHER_FUNC(name) bpf_dispatcher_nop_func #define BPF_DISPATCHER_PTR(name) NULL static inline void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from, diff --git a/include/linux/filter.h b/include/linux/filter.h index 43b5e455d2f5..6249679275b3 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -577,7 +577,7 @@ DECLARE_STATIC_KEY_FALSE(bpf_stats_enabled_key); ret; }) #define BPF_PROG_RUN(prog, ctx) \ - __BPF_PROG_RUN(prog, ctx, bpf_dispatcher_nopfunc) + __BPF_PROG_RUN(prog, ctx, bpf_dispatcher_nop_func) /* * Use in preemptible and therefore migratable context to make sure that @@ -596,7 +596,7 @@ static inline u32 bpf_prog_run_pin_on_cpu(const struct bpf_prog *prog, u32 ret; migrate_disable(); - ret = __BPF_PROG_RUN(prog, ctx, bpf_dispatcher_nopfunc); + ret = __BPF_PROG_RUN(prog, ctx, bpf_dispatcher_nop_func); migrate_enable(); return ret; } @@ -722,7 +722,7 @@ static inline u32 bpf_prog_run_clear_cb(const struct 
bpf_prog *prog, return res; } -DECLARE_BPF_DISPATCHER(bpf_dispatcher_xdp) +DECLARE_BPF_DISPATCHER(xdp) static __always_inline u32 bpf_prog_run_xdp(const struct bpf_prog *prog, struct xdp_buff *xdp) @@ -733,8 +733,7 @@ static __always_inline u32 bpf_prog_run_xdp(const struct bpf_prog *prog, * already takes rcu_read_lock() when fetching the program, so * it's not necessary here anymore. */ - return __BPF_PROG_RUN(prog, xdp, - BPF_DISPATCHER_FUNC(bpf_dispatcher_xdp)); + return __BPF_PROG_RUN(prog, xdp, BPF_DISPATCHER_FUNC(xdp)); } void bpf_prog_change_xdp(struct bpf_prog *prev_prog, struct bpf_prog *prog); diff --git a/net/core/filter.c b/net/core/filter.c index cd0a532db4e7..25c04d6535e2 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -8845,10 +8845,9 @@ const struct bpf_prog_ops sk_reuseport_prog_ops = { }; #endif /* CONFIG_INET */ -DEFINE_BPF_DISPATCHER(bpf_dispatcher_xdp) +DEFINE_BPF_DISPATCHER(xdp) void bpf_prog_change_xdp(struct bpf_prog *prev_prog, struct bpf_prog *prog) { - bpf_dispatcher_change_prog(BPF_DISPATCHER_PTR(bpf_dispatcher_xdp), - prev_prog, prog); + bpf_dispatcher_change_prog(BPF_DISPATCHER_PTR(xdp), prev_prog, prog); } From patchwork Thu Mar 12 19:55:58 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 1253943 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: incoming-bpf@patchwork.ozlabs.org Delivered-To: patchwork-incoming-bpf@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=bpf-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=kernel.org Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 48dfjK5HsCz9sRN for ; Fri, 13 Mar 2020 06:56:37 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726902AbgCLT4h convert rfc822-to-8bit (ORCPT ); Thu, 12 Mar 2020 15:56:37 -0400 Received: from us-smtp-1.mimecast.com ([205.139.110.61]:51347 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726883AbgCLT4g (ORCPT ); Thu, 12 Mar 2020 15:56:36 -0400 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-208-x8bzd1z7MWeC5tO87x5wJQ-1; Thu, 12 Mar 2020 15:56:31 -0400 X-MC-Unique: x8bzd1z7MWeC5tO87x5wJQ-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id A835A19251A1; Thu, 12 Mar 2020 19:56:29 +0000 (UTC) Received: from krava.redhat.com (ovpn-204-40.brq.redhat.com [10.40.204.40]) by smtp.corp.redhat.com (Postfix) with ESMTP id 5C0E35D9C5; Thu, 12 Mar 2020 19:56:26 +0000 (UTC) From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann Cc: Song Liu , netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko , Yonghong Song , Martin KaFai Lau , Jakub Kicinski , David Miller , =?utf-8?b?QmrDtnJuIFTDtnBl?= =?utf-8?q?l?= , John Fastabend , Jesper Dangaard Brouer , Arnaldo Carvalho de Melo , Song Liu Subject: [PATCH 03/15] bpf: Add struct bpf_ksym Date: Thu, 12 Mar 2020 20:55:58 +0100 Message-Id: <20200312195610.346362-4-jolsa@kernel.org> In-Reply-To: 
<20200312195610.346362-1-jolsa@kernel.org> References: <20200312195610.346362-1-jolsa@kernel.org> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: kernel.org Sender: bpf-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org Adding 'struct bpf_ksym' object that will carry the kallsym information for bpf symbol. Adding the start and end address to begin with. It will be used by bpf_prog, bpf_trampoline, bpf_dispatcher objects. The symbol_start/symbol_end values were originally used to sort bpf_prog objects. For the address displayed in /proc/kallsyms we are using prog->bpf_func value. I'm using the bpf_func value for program symbol start instead of the symbol_start, because it makes no difference for sorting bpf_prog objects and we can use it directly as an address to display it in /proc/kallsyms. Acked-by: Song Liu Signed-off-by: Jiri Olsa --- include/linux/bpf.h | 6 ++++++ kernel/bpf/core.c | 28 ++++++++++++---------------- 2 files changed, 18 insertions(+), 16 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 47cdb98e0e0a..f5fa4c847451 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -471,6 +471,11 @@ int arch_prepare_bpf_trampoline(void *image, void *image_end, u64 notrace __bpf_prog_enter(void); void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start); +struct bpf_ksym { + unsigned long start; + unsigned long end; +}; + enum bpf_tramp_prog_type { BPF_TRAMP_FENTRY, BPF_TRAMP_FEXIT, @@ -653,6 +658,7 @@ struct bpf_prog_aux { u32 size_poke_tab; struct latch_tree_node ksym_tnode; struct list_head ksym_lnode; + struct bpf_ksym ksym; const struct bpf_prog_ops *ops; struct bpf_map **used_maps; struct bpf_prog *prog; diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 973a20d49749..e2dd2d87c987 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -523,18 +523,16 @@ int bpf_jit_kallsyms __read_mostly = IS_BUILTIN(CONFIG_BPF_JIT_DEFAULT_ON); int bpf_jit_harden __read_mostly; long bpf_jit_limit __read_mostly; -static __always_inline void -bpf_get_prog_addr_region(const struct bpf_prog *prog, - unsigned long *symbol_start, - unsigned long *symbol_end) +static void +bpf_prog_ksym_set_addr(struct bpf_prog *prog) { const struct bpf_binary_header *hdr = bpf_jit_binary_hdr(prog); unsigned long addr = (unsigned long)hdr; WARN_ON_ONCE(!bpf_prog_ebpf_jited(prog)); - *symbol_start = addr; - *symbol_end = addr + hdr->pages * PAGE_SIZE; + prog->aux->ksym.start = (unsigned long) prog->bpf_func; + prog->aux->ksym.end = addr + hdr->pages * PAGE_SIZE; } void bpf_get_prog_name(const struct bpf_prog *prog, char *sym) @@ -575,13 +573,10 @@ void bpf_get_prog_name(const struct bpf_prog *prog, char *sym) static __always_inline unsigned long bpf_get_prog_addr_start(struct latch_tree_node *n) { - unsigned long symbol_start, symbol_end; const struct bpf_prog_aux *aux; aux = container_of(n, struct bpf_prog_aux, ksym_tnode); - bpf_get_prog_addr_region(aux->prog, &symbol_start, &symbol_end); - - return symbol_start; + return aux->ksym.start; } static __always_inline bool bpf_tree_less(struct latch_tree_node *a, @@ -593,15 +588,13 @@ static __always_inline bool bpf_tree_less(struct latch_tree_node *a, static __always_inline int bpf_tree_comp(void *key, struct latch_tree_node *n) { unsigned long val = (unsigned long)key; - unsigned long symbol_start, symbol_end; const struct bpf_prog_aux *aux; aux = container_of(n, struct bpf_prog_aux, ksym_tnode); - bpf_get_prog_addr_region(aux->prog, 
&symbol_start, &symbol_end); - if (val < symbol_start) + if (val < aux->ksym.start) return -1; - if (val >= symbol_end) + if (val >= aux->ksym.end) return 1; return 0; @@ -649,6 +642,8 @@ void bpf_prog_kallsyms_add(struct bpf_prog *fp) !capable(CAP_SYS_ADMIN)) return; + bpf_prog_ksym_set_addr(fp); + spin_lock_bh(&bpf_lock); bpf_prog_ksym_node_add(fp->aux); spin_unlock_bh(&bpf_lock); @@ -677,14 +672,15 @@ static struct bpf_prog *bpf_prog_kallsyms_find(unsigned long addr) const char *__bpf_address_lookup(unsigned long addr, unsigned long *size, unsigned long *off, char *sym) { - unsigned long symbol_start, symbol_end; struct bpf_prog *prog; char *ret = NULL; rcu_read_lock(); prog = bpf_prog_kallsyms_find(addr); if (prog) { - bpf_get_prog_addr_region(prog, &symbol_start, &symbol_end); + unsigned long symbol_start = prog->aux->ksym.start; + unsigned long symbol_end = prog->aux->ksym.end; + bpf_get_prog_name(prog, sym); ret = sym; From patchwork Thu Mar 12 19:55:59 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 1253945 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=kernel.org Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 48dfjN4SnJz9sRN for ; Fri, 13 Mar 2020 06:56:40 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726912AbgCLT4j convert rfc822-to-8bit (ORCPT ); Thu, 12 Mar 2020 15:56:39 -0400 Received: from us-smtp-2.mimecast.com ([207.211.31.81]:59675 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726729AbgCLT4j (ORCPT ); Thu, 12 Mar 2020 15:56:39 -0400 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-421-9qbsDPw9Pj2KXs9hLo2eNQ-1; Thu, 12 Mar 2020 15:56:35 -0400 X-MC-Unique: 9qbsDPw9Pj2KXs9hLo2eNQ-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 6FD7219251A1; Thu, 12 Mar 2020 19:56:33 +0000 (UTC) Received: from krava.redhat.com (ovpn-204-40.brq.redhat.com [10.40.204.40]) by smtp.corp.redhat.com (Postfix) with ESMTP id 0FA725D9C5; Thu, 12 Mar 2020 19:56:29 +0000 (UTC) From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann Cc: Song Liu , netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko , Yonghong Song , Martin KaFai Lau , Jakub Kicinski , David Miller , =?utf-8?b?QmrDtnJuIFTDtnBl?= =?utf-8?q?l?= , John Fastabend , Jesper Dangaard Brouer , Arnaldo Carvalho de Melo , Song Liu Subject: [PATCH 04/15] bpf: Add name to struct bpf_ksym Date: Thu, 12 Mar 2020 20:55:59 +0100 Message-Id: <20200312195610.346362-5-jolsa@kernel.org> In-Reply-To: <20200312195610.346362-1-jolsa@kernel.org> References: <20200312195610.346362-1-jolsa@kernel.org> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: kernel.org Sender: 
netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Adding name to 'struct bpf_ksym' object to carry the name of the symbol for bpf_prog, bpf_trampoline, bpf_dispatcher objects. The current benefit is that name is now generated only when the symbol is added to the list, so we don't need to generate it every time it's accessed. The future benefit is that we will have all the bpf objects symbols represented by struct bpf_ksym. Acked-by: Song Liu Signed-off-by: Jiri Olsa --- include/linux/bpf.h | 2 ++ include/linux/filter.h | 6 ------ kernel/bpf/core.c | 9 ++++++--- kernel/events/core.c | 9 ++++----- 4 files changed, 12 insertions(+), 14 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index f5fa4c847451..e1cd64f2bf05 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -18,6 +18,7 @@ #include #include #include +#include struct bpf_verifier_env; struct bpf_verifier_log; @@ -474,6 +475,7 @@ void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start); struct bpf_ksym { unsigned long start; unsigned long end; + char name[KSYM_NAME_LEN]; }; enum bpf_tramp_prog_type { diff --git a/include/linux/filter.h b/include/linux/filter.h index 6249679275b3..9b5aa5c483cc 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -1083,7 +1083,6 @@ bpf_address_lookup(unsigned long addr, unsigned long *size, void bpf_prog_kallsyms_add(struct bpf_prog *fp); void bpf_prog_kallsyms_del(struct bpf_prog *fp); -void bpf_get_prog_name(const struct bpf_prog *prog, char *sym); #else /* CONFIG_BPF_JIT */ @@ -1152,11 +1151,6 @@ static inline void bpf_prog_kallsyms_del(struct bpf_prog *fp) { } -static inline void bpf_get_prog_name(const struct bpf_prog *prog, char *sym) -{ - sym[0] = '\0'; -} - #endif /* CONFIG_BPF_JIT */ void bpf_prog_kallsyms_del_all(struct bpf_prog *fp); diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index e2dd2d87c987..f86cb15d6f2e 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -535,8 +535,10 @@ bpf_prog_ksym_set_addr(struct bpf_prog *prog) prog->aux->ksym.end = addr + hdr->pages * PAGE_SIZE; } -void bpf_get_prog_name(const struct bpf_prog *prog, char *sym) +static void +bpf_prog_ksym_set_name(struct bpf_prog *prog) { + char *sym = prog->aux->ksym.name; const char *end = sym + KSYM_NAME_LEN; const struct btf_type *type; const char *func_name; @@ -643,6 +645,7 @@ void bpf_prog_kallsyms_add(struct bpf_prog *fp) return; bpf_prog_ksym_set_addr(fp); + bpf_prog_ksym_set_name(fp); spin_lock_bh(&bpf_lock); bpf_prog_ksym_node_add(fp->aux); @@ -681,7 +684,7 @@ const char *__bpf_address_lookup(unsigned long addr, unsigned long *size, unsigned long symbol_start = prog->aux->ksym.start; unsigned long symbol_end = prog->aux->ksym.end; - bpf_get_prog_name(prog, sym); + strncpy(sym, prog->aux->ksym.name, KSYM_NAME_LEN); ret = sym; if (size) @@ -738,7 +741,7 @@ int bpf_get_kallsym(unsigned int symnum, unsigned long *value, char *type, if (it++ != symnum) continue; - bpf_get_prog_name(aux->prog, sym); + strncpy(sym, aux->ksym.name, KSYM_NAME_LEN); *value = (unsigned long)aux->prog->bpf_func; *type = BPF_SYM_ELF_TYPE; diff --git a/kernel/events/core.c b/kernel/events/core.c index bbdfac0182f4..9b89ef176247 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -8255,23 +8255,22 @@ static void perf_event_bpf_emit_ksymbols(struct bpf_prog *prog, enum perf_bpf_event_type type) { bool unregister = type == PERF_BPF_EVENT_PROG_UNLOAD; - char sym[KSYM_NAME_LEN]; int i; if (prog->aux->func_cnt == 0) { - 
bpf_get_prog_name(prog, sym); perf_event_ksymbol(PERF_RECORD_KSYMBOL_TYPE_BPF, (u64)(unsigned long)prog->bpf_func, - prog->jited_len, unregister, sym); + prog->jited_len, unregister, + prog->aux->ksym.name); } else { for (i = 0; i < prog->aux->func_cnt; i++) { struct bpf_prog *subprog = prog->aux->func[i]; - bpf_get_prog_name(subprog, sym); perf_event_ksymbol( PERF_RECORD_KSYMBOL_TYPE_BPF, (u64)(unsigned long)subprog->bpf_func, - subprog->jited_len, unregister, sym); + subprog->jited_len, unregister, + prog->aux->ksym.name); } } } From patchwork Thu Mar 12 19:56:00 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 1253947 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: incoming-bpf@patchwork.ozlabs.org Delivered-To: patchwork-incoming-bpf@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=bpf-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=kernel.org Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 48dfjS0WW6z9sRN for ; Fri, 13 Mar 2020 06:56:44 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726910AbgCLT4n convert rfc822-to-8bit (ORCPT ); Thu, 12 Mar 2020 15:56:43 -0400 Received: from us-smtp-delivery-1.mimecast.com ([207.211.31.120]:20552 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726729AbgCLT4n (ORCPT ); Thu, 12 Mar 2020 15:56:43 -0400 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-211-2a3cYJl7Ow-mVUR3uBaDHw-1; Thu, 12 Mar 2020 15:56:40 -0400 X-MC-Unique: 2a3cYJl7Ow-mVUR3uBaDHw-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id D846A107ACC7; Thu, 12 Mar 2020 19:56:37 +0000 (UTC) Received: from krava.redhat.com (ovpn-204-40.brq.redhat.com [10.40.204.40]) by smtp.corp.redhat.com (Postfix) with ESMTP id D871E5D9C5; Thu, 12 Mar 2020 19:56:33 +0000 (UTC) From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann Cc: Song Liu , netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko , Yonghong Song , Martin KaFai Lau , Jakub Kicinski , David Miller , =?utf-8?b?QmrDtnJuIFTDtnBl?= =?utf-8?q?l?= , John Fastabend , Jesper Dangaard Brouer , Arnaldo Carvalho de Melo , Song Liu Subject: [PATCH 05/15] bpf: Move lnode list node to struct bpf_ksym Date: Thu, 12 Mar 2020 20:56:00 +0100 Message-Id: <20200312195610.346362-6-jolsa@kernel.org> In-Reply-To: <20200312195610.346362-1-jolsa@kernel.org> References: <20200312195610.346362-1-jolsa@kernel.org> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: kernel.org Sender: bpf-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org Adding lnode list node to 'struct bpf_ksym' object, so the struct bpf_ksym itself can be chained and used in other objects like bpf_trampoline and bpf_dispatcher. Changing iterator to bpf_ksym in bpf_get_kallsym function. The ksym->start is holding the prog->bpf_func value, so it's ok to use it as value in bpf_get_kallsym. 
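For reference (editor's illustration, not part of the patch): with this change applied on top of the previous two patches, struct bpf_ksym in <linux/bpf.h> looks roughly like

	struct bpf_ksym {
		unsigned long	 start;	/* symbol start; prog->bpf_func for programs */
		unsigned long	 end;	/* end of the JIT image allocation */
		char		 name[KSYM_NAME_LEN];	/* name reported via kallsyms */
		struct list_head lnode;	/* entry in the bpf_kallsyms list */
	};

and bpf_get_kallsym() can walk the bpf_kallsyms list over these generic entries instead of over bpf_prog_aux objects.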
Acked-by: Song Liu Signed-off-by: Jiri Olsa --- include/linux/bpf.h | 2 +- kernel/bpf/core.c | 22 +++++++++++----------- 2 files changed, 12 insertions(+), 12 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index e1cd64f2bf05..de624c4f66ec 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -476,6 +476,7 @@ struct bpf_ksym { unsigned long start; unsigned long end; char name[KSYM_NAME_LEN]; + struct list_head lnode; }; enum bpf_tramp_prog_type { @@ -659,7 +660,6 @@ struct bpf_prog_aux { struct bpf_jit_poke_descriptor *poke_tab; u32 size_poke_tab; struct latch_tree_node ksym_tnode; - struct list_head ksym_lnode; struct bpf_ksym ksym; const struct bpf_prog_ops *ops; struct bpf_map **used_maps; diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index f86cb15d6f2e..084abfbc3362 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -97,7 +97,7 @@ struct bpf_prog *bpf_prog_alloc_no_stats(unsigned int size, gfp_t gfp_extra_flag fp->aux->prog = fp; fp->jit_requested = ebpf_jit_enabled(); - INIT_LIST_HEAD_RCU(&fp->aux->ksym_lnode); + INIT_LIST_HEAD_RCU(&fp->aux->ksym.lnode); return fp; } @@ -613,18 +613,18 @@ static struct latch_tree_root bpf_tree __cacheline_aligned; static void bpf_prog_ksym_node_add(struct bpf_prog_aux *aux) { - WARN_ON_ONCE(!list_empty(&aux->ksym_lnode)); - list_add_tail_rcu(&aux->ksym_lnode, &bpf_kallsyms); + WARN_ON_ONCE(!list_empty(&aux->ksym.lnode)); + list_add_tail_rcu(&aux->ksym.lnode, &bpf_kallsyms); latch_tree_insert(&aux->ksym_tnode, &bpf_tree, &bpf_tree_ops); } static void bpf_prog_ksym_node_del(struct bpf_prog_aux *aux) { - if (list_empty(&aux->ksym_lnode)) + if (list_empty(&aux->ksym.lnode)) return; latch_tree_erase(&aux->ksym_tnode, &bpf_tree, &bpf_tree_ops); - list_del_rcu(&aux->ksym_lnode); + list_del_rcu(&aux->ksym.lnode); } static bool bpf_prog_kallsyms_candidate(const struct bpf_prog *fp) @@ -634,8 +634,8 @@ static bool bpf_prog_kallsyms_candidate(const struct bpf_prog *fp) static bool bpf_prog_kallsyms_verify_off(const struct bpf_prog *fp) { - return list_empty(&fp->aux->ksym_lnode) || - fp->aux->ksym_lnode.prev == LIST_POISON2; + return list_empty(&fp->aux->ksym.lnode) || + fp->aux->ksym.lnode.prev == LIST_POISON2; } void bpf_prog_kallsyms_add(struct bpf_prog *fp) @@ -729,7 +729,7 @@ const struct exception_table_entry *search_bpf_extables(unsigned long addr) int bpf_get_kallsym(unsigned int symnum, unsigned long *value, char *type, char *sym) { - struct bpf_prog_aux *aux; + struct bpf_ksym *ksym; unsigned int it = 0; int ret = -ERANGE; @@ -737,13 +737,13 @@ int bpf_get_kallsym(unsigned int symnum, unsigned long *value, char *type, return ret; rcu_read_lock(); - list_for_each_entry_rcu(aux, &bpf_kallsyms, ksym_lnode) { + list_for_each_entry_rcu(ksym, &bpf_kallsyms, lnode) { if (it++ != symnum) continue; - strncpy(sym, aux->ksym.name, KSYM_NAME_LEN); + strncpy(sym, ksym->name, KSYM_NAME_LEN); - *value = (unsigned long)aux->prog->bpf_func; + *value = ksym->start; *type = BPF_SYM_ELF_TYPE; ret = 0; From patchwork Thu Mar 12 19:56:01 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 1253949 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: incoming-bpf@patchwork.ozlabs.org Delivered-To: patchwork-incoming-bpf@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=bpf-owner@vger.kernel.org; receiver=) 
Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=kernel.org Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 48dfjb6LySz9sNg for ; Fri, 13 Mar 2020 06:56:51 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726949AbgCLT4v convert rfc822-to-8bit (ORCPT ); Thu, 12 Mar 2020 15:56:51 -0400 Received: from us-smtp-1.mimecast.com ([207.211.31.81]:44307 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726729AbgCLT4v (ORCPT ); Thu, 12 Mar 2020 15:56:51 -0400 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-469-feJ9WQg6PESirBnShiXgvQ-1; Thu, 12 Mar 2020 15:56:46 -0400 X-MC-Unique: feJ9WQg6PESirBnShiXgvQ-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 6D66A19251A1; Thu, 12 Mar 2020 19:56:44 +0000 (UTC) Received: from krava.redhat.com (ovpn-204-40.brq.redhat.com [10.40.204.40]) by smtp.corp.redhat.com (Postfix) with ESMTP id 5B5DF5D9C5; Thu, 12 Mar 2020 19:56:38 +0000 (UTC) From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko , Yonghong Song , Song Liu , Martin KaFai Lau , Jakub Kicinski , David Miller , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , John Fastabend , Jesper Dangaard Brouer , Arnaldo Carvalho de Melo , Song Liu Subject: [PATCH 06/15] bpf: Move ksym_tnode to bpf_ksym Date: Thu, 12 Mar 2020 20:56:01 +0100 Message-Id: <20200312195610.346362-7-jolsa@kernel.org> In-Reply-To: <20200312195610.346362-1-jolsa@kernel.org> References: <20200312195610.346362-1-jolsa@kernel.org> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: kernel.org Sender: bpf-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org Moving ksym_tnode list node to 'struct bpf_ksym' object, so the symbol itself can be chained and used in other objects like bpf_trampoline and bpf_dispatcher. We need bpf_ksym object to be linked both in bpf_kallsyms via lnode for /proc/kallsyms and in bpf_tree via tnode for bpf address lookup functions like __bpf_address_lookup or bpf_prog_kallsyms_find. Signed-off-by: Jiri Olsa --- include/linux/bpf.h | 2 +- kernel/bpf/core.c | 24 ++++++++++-------------- 2 files changed, 11 insertions(+), 15 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index de624c4f66ec..c5afff03b0f4 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -477,6 +477,7 @@ struct bpf_ksym { unsigned long end; char name[KSYM_NAME_LEN]; struct list_head lnode; + struct latch_tree_node tnode; }; enum bpf_tramp_prog_type { @@ -659,7 +660,6 @@ struct bpf_prog_aux { void *jit_data; /* JIT specific data. 
arch dependent */ struct bpf_jit_poke_descriptor *poke_tab; u32 size_poke_tab; - struct latch_tree_node ksym_tnode; struct bpf_ksym ksym; const struct bpf_prog_ops *ops; struct bpf_map **used_maps; diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 084abfbc3362..df0caa4bc9cc 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -572,31 +572,27 @@ bpf_prog_ksym_set_name(struct bpf_prog *prog) *sym = 0; } -static __always_inline unsigned long -bpf_get_prog_addr_start(struct latch_tree_node *n) +static unsigned long bpf_get_ksym_start(struct latch_tree_node *n) { - const struct bpf_prog_aux *aux; - - aux = container_of(n, struct bpf_prog_aux, ksym_tnode); - return aux->ksym.start; + return container_of(n, struct bpf_ksym, tnode)->start; } static __always_inline bool bpf_tree_less(struct latch_tree_node *a, struct latch_tree_node *b) { - return bpf_get_prog_addr_start(a) < bpf_get_prog_addr_start(b); + return bpf_get_ksym_start(a) < bpf_get_ksym_start(b); } static __always_inline int bpf_tree_comp(void *key, struct latch_tree_node *n) { unsigned long val = (unsigned long)key; - const struct bpf_prog_aux *aux; + const struct bpf_ksym *ksym; - aux = container_of(n, struct bpf_prog_aux, ksym_tnode); + ksym = container_of(n, struct bpf_ksym, tnode); - if (val < aux->ksym.start) + if (val < ksym->start) return -1; - if (val >= aux->ksym.end) + if (val >= ksym->end) return 1; return 0; @@ -615,7 +611,7 @@ static void bpf_prog_ksym_node_add(struct bpf_prog_aux *aux) { WARN_ON_ONCE(!list_empty(&aux->ksym.lnode)); list_add_tail_rcu(&aux->ksym.lnode, &bpf_kallsyms); - latch_tree_insert(&aux->ksym_tnode, &bpf_tree, &bpf_tree_ops); + latch_tree_insert(&aux->ksym.tnode, &bpf_tree, &bpf_tree_ops); } static void bpf_prog_ksym_node_del(struct bpf_prog_aux *aux) @@ -623,7 +619,7 @@ static void bpf_prog_ksym_node_del(struct bpf_prog_aux *aux) if (list_empty(&aux->ksym.lnode)) return; - latch_tree_erase(&aux->ksym_tnode, &bpf_tree, &bpf_tree_ops); + latch_tree_erase(&aux->ksym.tnode, &bpf_tree, &bpf_tree_ops); list_del_rcu(&aux->ksym.lnode); } @@ -668,7 +664,7 @@ static struct bpf_prog *bpf_prog_kallsyms_find(unsigned long addr) n = latch_tree_find((void *)addr, &bpf_tree, &bpf_tree_ops); return n ? 
- container_of(n, struct bpf_prog_aux, ksym_tnode)->prog : + container_of(n, struct bpf_prog_aux, ksym.tnode)->prog : NULL; } From patchwork Thu Mar 12 19:56:02 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 1253951 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=kernel.org Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 48dfjk224yz9sNg for ; Fri, 13 Mar 2020 06:56:58 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726973AbgCLT45 convert rfc822-to-8bit (ORCPT ); Thu, 12 Mar 2020 15:56:57 -0400 Received: from us-smtp-2.mimecast.com ([205.139.110.61]:58598 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726810AbgCLT45 (ORCPT ); Thu, 12 Mar 2020 15:56:57 -0400 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-46-gtkdpjTwOACrhaN1BXBxHA-1; Thu, 12 Mar 2020 15:56:54 -0400 X-MC-Unique: gtkdpjTwOACrhaN1BXBxHA-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 15A4ADB61; Thu, 12 Mar 2020 19:56:52 +0000 (UTC) Received: from krava.redhat.com (ovpn-204-40.brq.redhat.com [10.40.204.40]) by smtp.corp.redhat.com (Postfix) with ESMTP id EAAE65DA7B; Thu, 12 Mar 2020 19:56:44 +0000 (UTC) From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko , Yonghong Song , Song Liu , Martin KaFai Lau , Jakub Kicinski , David Miller , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , John Fastabend , Jesper Dangaard Brouer , Arnaldo Carvalho de Melo , Song Liu Subject: [PATCH 07/15] bpf: Add bpf_ksym_find function Date: Thu, 12 Mar 2020 20:56:02 +0100 Message-Id: <20200312195610.346362-8-jolsa@kernel.org> In-Reply-To: <20200312195610.346362-1-jolsa@kernel.org> References: <20200312195610.346362-1-jolsa@kernel.org> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: kernel.org Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Adding bpf_ksym_find function that is used bpf bpf address lookup functions: __bpf_address_lookup is_bpf_text_address while keeping bpf_prog_kallsyms_find to be used only for lookup of bpf_prog objects (will happen in following changes). 
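As a usage sketch (editor's example, not part of the patch; the helper name below is made up): a caller resolving an arbitrary address through the lookup path touched here would do something like

	static void report_bpf_addr(unsigned long addr)
	{
		char sym[KSYM_NAME_LEN];
		unsigned long size, off;

		if (is_bpf_text_address(addr) &&
		    __bpf_address_lookup(addr, &size, &off, sym))
			pr_info("%lx is %s+0x%lx/0x%lx [bpf]\n", addr, sym, off, size);
	}

which now goes through bpf_ksym_find() and therefore works for any bpf_ksym in the tree, not only for bpf_prog objects.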
Signed-off-by: Jiri Olsa --- kernel/bpf/core.c | 22 +++++++++++++++------- 1 file changed, 15 insertions(+), 7 deletions(-) diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index df0caa4bc9cc..6b58c2f1c9c0 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -668,19 +668,27 @@ static struct bpf_prog *bpf_prog_kallsyms_find(unsigned long addr) NULL; } +static struct bpf_ksym *bpf_ksym_find(unsigned long addr) +{ + struct latch_tree_node *n; + + n = latch_tree_find((void *)addr, &bpf_tree, &bpf_tree_ops); + return n ? container_of(n, struct bpf_ksym, tnode) : NULL; +} + const char *__bpf_address_lookup(unsigned long addr, unsigned long *size, unsigned long *off, char *sym) { - struct bpf_prog *prog; + struct bpf_ksym *ksym; char *ret = NULL; rcu_read_lock(); - prog = bpf_prog_kallsyms_find(addr); - if (prog) { - unsigned long symbol_start = prog->aux->ksym.start; - unsigned long symbol_end = prog->aux->ksym.end; + ksym = bpf_ksym_find(addr); + if (ksym) { + unsigned long symbol_start = ksym->start; + unsigned long symbol_end = ksym->end; - strncpy(sym, prog->aux->ksym.name, KSYM_NAME_LEN); + strncpy(sym, ksym->name, KSYM_NAME_LEN); ret = sym; if (size) @@ -698,7 +706,7 @@ bool is_bpf_text_address(unsigned long addr) bool ret; rcu_read_lock(); - ret = bpf_prog_kallsyms_find(addr) != NULL; + ret = bpf_ksym_find(addr) != NULL; rcu_read_unlock(); return ret; From patchwork Thu Mar 12 19:56:03 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 1253954 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: incoming-bpf@patchwork.ozlabs.org Delivered-To: patchwork-incoming-bpf@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=bpf-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=kernel.org Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 48dfjq5bFsz9sNg for ; Fri, 13 Mar 2020 06:57:03 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726980AbgCLT5D convert rfc822-to-8bit (ORCPT ); Thu, 12 Mar 2020 15:57:03 -0400 Received: from us-smtp-2.mimecast.com ([205.139.110.61]:25192 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726810AbgCLT5D (ORCPT ); Thu, 12 Mar 2020 15:57:03 -0400 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-6-Y_-efEgtMQ-mgotKQmtRlQ-1; Thu, 12 Mar 2020 15:57:00 -0400 X-MC-Unique: Y_-efEgtMQ-mgotKQmtRlQ-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id F20AA8018C9; Thu, 12 Mar 2020 19:56:57 +0000 (UTC) Received: from krava.redhat.com (ovpn-204-40.brq.redhat.com [10.40.204.40]) by smtp.corp.redhat.com (Postfix) with ESMTP id 7D96B5D9C5; Thu, 12 Mar 2020 19:56:52 +0000 (UTC) From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko , Yonghong Song , Song Liu , Martin KaFai Lau , Jakub Kicinski , David Miller , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , John Fastabend , Jesper Dangaard Brouer , Arnaldo Carvalho de Melo , Song Liu 
Subject: [PATCH 08/15] bpf: Add prog flag to struct bpf_ksym object Date: Thu, 12 Mar 2020 20:56:03 +0100 Message-Id: <20200312195610.346362-9-jolsa@kernel.org> In-Reply-To: <20200312195610.346362-1-jolsa@kernel.org> References: <20200312195610.346362-1-jolsa@kernel.org> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: kernel.org Sender: bpf-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org Adding 'prog' bool flag to 'struct bpf_ksym' to mark that this object belongs to bpf_prog object. This change allows having bpf_prog objects together with other types (trampolines and dispatchers) in the single bpf_tree. It's used when searching for bpf_prog exception tables by the bpf_prog_ksym_find function, where we need to get the bpf_prog pointer. From now we can safely add bpf_ksym support for trampoline or dispatcher objects, because we can differentiate them from bpf_prog objects. Signed-off-by: Jiri Olsa --- include/linux/bpf.h | 1 + kernel/bpf/core.c | 22 +++++++++++----------- 2 files changed, 12 insertions(+), 11 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index c5afff03b0f4..739ec3f562bf 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -478,6 +478,7 @@ struct bpf_ksym { char name[KSYM_NAME_LEN]; struct list_head lnode; struct latch_tree_node tnode; + bool prog; }; enum bpf_tramp_prog_type { diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 6b58c2f1c9c0..92577a81122a 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -642,6 +642,7 @@ void bpf_prog_kallsyms_add(struct bpf_prog *fp) bpf_prog_ksym_set_addr(fp); bpf_prog_ksym_set_name(fp); + fp->aux->ksym.prog = true; spin_lock_bh(&bpf_lock); bpf_prog_ksym_node_add(fp->aux); @@ -658,16 +659,6 @@ void bpf_prog_kallsyms_del(struct bpf_prog *fp) spin_unlock_bh(&bpf_lock); } -static struct bpf_prog *bpf_prog_kallsyms_find(unsigned long addr) -{ - struct latch_tree_node *n; - - n = latch_tree_find((void *)addr, &bpf_tree, &bpf_tree_ops); - return n ? - container_of(n, struct bpf_prog_aux, ksym.tnode)->prog : - NULL; -} - static struct bpf_ksym *bpf_ksym_find(unsigned long addr) { struct latch_tree_node *n; @@ -712,13 +703,22 @@ bool is_bpf_text_address(unsigned long addr) return ret; } +static struct bpf_prog *bpf_prog_ksym_find(unsigned long addr) +{ + struct bpf_ksym *ksym = bpf_ksym_find(addr); + + return ksym && ksym->prog ? 
+ container_of(ksym, struct bpf_prog_aux, ksym)->prog : + NULL; +} + const struct exception_table_entry *search_bpf_extables(unsigned long addr) { const struct exception_table_entry *e = NULL; struct bpf_prog *prog; rcu_read_lock(); - prog = bpf_prog_kallsyms_find(addr); + prog = bpf_prog_ksym_find(addr); if (!prog) goto out; if (!prog->aux->num_exentries) From patchwork Thu Mar 12 19:56:04 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 1253956 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: incoming-bpf@patchwork.ozlabs.org Delivered-To: patchwork-incoming-bpf@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=bpf-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=kernel.org Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 48dfjx4gKYz9sRN for ; Fri, 13 Mar 2020 06:57:09 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727006AbgCLT5J convert rfc822-to-8bit (ORCPT ); Thu, 12 Mar 2020 15:57:09 -0400 Received: from us-smtp-delivery-1.mimecast.com ([207.211.31.120]:23852 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727000AbgCLT5I (ORCPT ); Thu, 12 Mar 2020 15:57:08 -0400 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-19-fY9hT-M8OjOZttMZys8QDg-1; Thu, 12 Mar 2020 15:57:03 -0400 X-MC-Unique: fY9hT-M8OjOZttMZys8QDg-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id A0E5A109DDEA; Thu, 12 Mar 2020 19:57:01 +0000 (UTC) Received: from krava.redhat.com (ovpn-204-40.brq.redhat.com [10.40.204.40]) by smtp.corp.redhat.com (Postfix) with ESMTP id 5AA285D9C5; Thu, 12 Mar 2020 19:56:58 +0000 (UTC) From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko , Yonghong Song , Song Liu , Martin KaFai Lau , Jakub Kicinski , David Miller , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , John Fastabend , Jesper Dangaard Brouer , Arnaldo Carvalho de Melo , Song Liu Subject: [PATCH 09/15] bpf: Add bpf_ksym_add/del functions Date: Thu, 12 Mar 2020 20:56:04 +0100 Message-Id: <20200312195610.346362-10-jolsa@kernel.org> In-Reply-To: <20200312195610.346362-1-jolsa@kernel.org> References: <20200312195610.346362-1-jolsa@kernel.org> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: kernel.org Sender: bpf-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org Separating /proc/kallsyms add/del code and adding bpf_ksym_add/del functions for that. Moving bpf_prog_ksym_node_add/del functions to __bpf_ksym_add/del and changing their argument to 'struct bpf_ksym' object. This way we can call them for other bpf objects types like trampoline and dispatcher. 
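For illustration (editor's sketch, not part of the patch; the function below is hypothetical): a non-program object can now publish its image as a bpf symbol roughly like this, which mirrors what the following trampoline patch does

	static void publish_image_ksym(struct bpf_ksym *ksym, void *image,
				       unsigned long size, const char *name)
	{
		INIT_LIST_HEAD_RCU(&ksym->lnode);
		snprintf(ksym->name, KSYM_NAME_LEN, "%s", name);
		ksym->start = (unsigned long) image;
		ksym->end = ksym->start + size;
		bpf_ksym_add(ksym);	/* links into the bpf_kallsyms list and the bpf_tree latch tree */
	}

with bpf_ksym_del() undoing the registration when the image is freed.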
Signed-off-by: Jiri Olsa --- include/linux/bpf.h | 3 +++ kernel/bpf/core.c | 33 +++++++++++++++++++-------------- 2 files changed, 22 insertions(+), 14 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 739ec3f562bf..83920cbce24c 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -584,6 +584,9 @@ struct bpf_image { #define BPF_IMAGE_SIZE (PAGE_SIZE - sizeof(struct bpf_image)) bool is_bpf_image_address(unsigned long address); void *bpf_image_alloc(void); +/* Called only from JIT-enabled code, so there's no need for stubs. */ +void bpf_ksym_add(struct bpf_ksym *ksym); +void bpf_ksym_del(struct bpf_ksym *ksym); #else static inline struct bpf_trampoline *bpf_trampoline_lookup(u64 key) { diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 92577a81122a..cc360bfde1fe 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -607,20 +607,29 @@ static DEFINE_SPINLOCK(bpf_lock); static LIST_HEAD(bpf_kallsyms); static struct latch_tree_root bpf_tree __cacheline_aligned; -static void bpf_prog_ksym_node_add(struct bpf_prog_aux *aux) +void bpf_ksym_add(struct bpf_ksym *ksym) { - WARN_ON_ONCE(!list_empty(&aux->ksym.lnode)); - list_add_tail_rcu(&aux->ksym.lnode, &bpf_kallsyms); - latch_tree_insert(&aux->ksym.tnode, &bpf_tree, &bpf_tree_ops); + spin_lock_bh(&bpf_lock); + WARN_ON_ONCE(!list_empty(&ksym->lnode)); + list_add_tail_rcu(&ksym->lnode, &bpf_kallsyms); + latch_tree_insert(&ksym->tnode, &bpf_tree, &bpf_tree_ops); + spin_unlock_bh(&bpf_lock); } -static void bpf_prog_ksym_node_del(struct bpf_prog_aux *aux) +static void __bpf_ksym_del(struct bpf_ksym *ksym) { - if (list_empty(&aux->ksym.lnode)) + if (list_empty(&ksym->lnode)) return; - latch_tree_erase(&aux->ksym.tnode, &bpf_tree, &bpf_tree_ops); - list_del_rcu(&aux->ksym.lnode); + latch_tree_erase(&ksym->tnode, &bpf_tree, &bpf_tree_ops); + list_del_rcu(&ksym->lnode); +} + +void bpf_ksym_del(struct bpf_ksym *ksym) +{ + spin_lock_bh(&bpf_lock); + __bpf_ksym_del(ksym); + spin_unlock_bh(&bpf_lock); } static bool bpf_prog_kallsyms_candidate(const struct bpf_prog *fp) @@ -644,9 +653,7 @@ void bpf_prog_kallsyms_add(struct bpf_prog *fp) bpf_prog_ksym_set_name(fp); fp->aux->ksym.prog = true; - spin_lock_bh(&bpf_lock); - bpf_prog_ksym_node_add(fp->aux); - spin_unlock_bh(&bpf_lock); + bpf_ksym_add(&fp->aux->ksym); } void bpf_prog_kallsyms_del(struct bpf_prog *fp) @@ -654,9 +661,7 @@ void bpf_prog_kallsyms_del(struct bpf_prog *fp) if (!bpf_prog_kallsyms_candidate(fp)) return; - spin_lock_bh(&bpf_lock); - bpf_prog_ksym_node_del(fp->aux); - spin_unlock_bh(&bpf_lock); + bpf_ksym_del(&fp->aux->ksym); } static struct bpf_ksym *bpf_ksym_find(unsigned long addr) From patchwork Thu Mar 12 19:56:05 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 1253958 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=kernel.org Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 48dfk23R97z9sRN for ; Fri, 13 Mar 2020 06:57:14 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727030AbgCLT5O 
convert rfc822-to-8bit (ORCPT ); Thu, 12 Mar 2020 15:57:14 -0400 Received: from us-smtp-1.mimecast.com ([205.139.110.61]:54673 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727020AbgCLT5N (ORCPT ); Thu, 12 Mar 2020 15:57:13 -0400 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-130-V2ElMOf-Nj-JirYlo6WjzA-1; Thu, 12 Mar 2020 15:57:08 -0400 X-MC-Unique: V2ElMOf-Nj-JirYlo6WjzA-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 57AD4800D53; Thu, 12 Mar 2020 19:57:06 +0000 (UTC) Received: from krava.redhat.com (ovpn-204-40.brq.redhat.com [10.40.204.40]) by smtp.corp.redhat.com (Postfix) with ESMTP id 031085D9C5; Thu, 12 Mar 2020 19:57:01 +0000 (UTC) From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko , Yonghong Song , Song Liu , Martin KaFai Lau , Jakub Kicinski , David Miller , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , John Fastabend , Jesper Dangaard Brouer , Arnaldo Carvalho de Melo , Song Liu Subject: [PATCH 10/15] bpf: Add trampolines to kallsyms Date: Thu, 12 Mar 2020 20:56:05 +0100 Message-Id: <20200312195610.346362-11-jolsa@kernel.org> In-Reply-To: <20200312195610.346362-1-jolsa@kernel.org> References: <20200312195610.346362-1-jolsa@kernel.org> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: kernel.org Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Adding trampolines to kallsyms. It's displayed as bpf_trampoline_ [bpf] where ID is the BTF id of the trampoline function. Adding bpf_image_ksym_add/del functions that setup the start/end values and call KSYMBOL perf events handlers. Signed-off-by: Jiri Olsa --- include/linux/bpf.h | 3 +++ kernel/bpf/trampoline.c | 28 ++++++++++++++++++++++++++++ 2 files changed, 31 insertions(+) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 83920cbce24c..7edbf30cf2b2 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -513,6 +513,7 @@ struct bpf_trampoline { /* Executable image of trampoline */ void *image; u64 selector; + struct bpf_ksym ksym; }; #define BPF_DISPATCHER_MAX 48 /* Fits in 2048B */ @@ -585,6 +586,8 @@ struct bpf_image { bool is_bpf_image_address(unsigned long address); void *bpf_image_alloc(void); /* Called only from JIT-enabled code, so there's no need for stubs. */ +void bpf_image_ksym_add(void *data, struct bpf_ksym *ksym); +void bpf_image_ksym_del(struct bpf_ksym *ksym); void bpf_ksym_add(struct bpf_ksym *ksym); void bpf_ksym_del(struct bpf_ksym *ksym); #else diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c index 221a17af1f81..36549c9afec4 100644 --- a/kernel/bpf/trampoline.c +++ b/kernel/bpf/trampoline.c @@ -5,6 +5,7 @@ #include #include #include +#include /* dummy _ops. The verifier will operate on target program's ops. 
*/ const struct bpf_verifier_ops bpf_extension_verifier_ops = { @@ -96,6 +97,30 @@ bool is_bpf_image_address(unsigned long addr) return ret; } +void bpf_image_ksym_add(void *data, struct bpf_ksym *ksym) +{ + ksym->start = (unsigned long) data; + ksym->end = ksym->start + BPF_IMAGE_SIZE; + bpf_ksym_add(ksym); + perf_event_ksymbol(PERF_RECORD_KSYMBOL_TYPE_BPF, ksym->start, + BPF_IMAGE_SIZE, false, ksym->name); +} + +void bpf_image_ksym_del(struct bpf_ksym *ksym) +{ + bpf_ksym_del(ksym); + perf_event_ksymbol(PERF_RECORD_KSYMBOL_TYPE_BPF, ksym->start, + BPF_IMAGE_SIZE, true, ksym->name); +} + +static void bpf_trampoline_ksym_add(struct bpf_trampoline *tr) +{ + struct bpf_ksym *ksym = &tr->ksym; + + snprintf(ksym->name, KSYM_NAME_LEN, "bpf_trampoline_%llu", tr->key); + bpf_image_ksym_add(tr->image, ksym); +} + struct bpf_trampoline *bpf_trampoline_lookup(u64 key) { struct bpf_trampoline *tr; @@ -131,6 +156,8 @@ struct bpf_trampoline *bpf_trampoline_lookup(u64 key) for (i = 0; i < BPF_TRAMP_MAX; i++) INIT_HLIST_HEAD(&tr->progs_hlist[i]); tr->image = image; + INIT_LIST_HEAD_RCU(&tr->ksym.lnode); + bpf_trampoline_ksym_add(tr); out: mutex_unlock(&trampoline_mutex); return tr; @@ -368,6 +395,7 @@ void bpf_trampoline_put(struct bpf_trampoline *tr) goto out; if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[BPF_TRAMP_FEXIT]))) goto out; + bpf_image_ksym_del(&tr->ksym); image = container_of(tr->image, struct bpf_image, data); latch_tree_erase(&image->tnode, &image_tree, &image_tree_ops); /* wait for tasks to get out of trampoline before freeing it */ From patchwork Thu Mar 12 19:56:06 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 1253960 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: incoming-bpf@patchwork.ozlabs.org Delivered-To: patchwork-incoming-bpf@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=bpf-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=kernel.org Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 48dfk61Wbzz9sNg for ; Fri, 13 Mar 2020 06:57:18 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727033AbgCLT5R convert rfc822-to-8bit (ORCPT ); Thu, 12 Mar 2020 15:57:17 -0400 Received: from us-smtp-1.mimecast.com ([205.139.110.61]:43518 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727032AbgCLT5R (ORCPT ); Thu, 12 Mar 2020 15:57:17 -0400 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-200-epYriTC_M92rrRp6wEhU9Q-1; Thu, 12 Mar 2020 15:57:12 -0400 X-MC-Unique: epYriTC_M92rrRp6wEhU9Q-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 8105CA0CC9; Thu, 12 Mar 2020 19:57:10 +0000 (UTC) Received: from krava.redhat.com (ovpn-204-40.brq.redhat.com [10.40.204.40]) by smtp.corp.redhat.com (Postfix) with ESMTP id CE7B05D9C5; Thu, 12 Mar 2020 19:57:06 +0000 (UTC) From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko , 
Yonghong Song , Song Liu , Martin KaFai Lau , Jakub Kicinski , David Miller , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , John Fastabend , Jesper Dangaard Brouer , Arnaldo Carvalho de Melo , Song Liu Subject: [PATCH 11/15] bpf: Add dispatchers to kallsyms Date: Thu, 12 Mar 2020 20:56:06 +0100 Message-Id: <20200312195610.346362-12-jolsa@kernel.org> In-Reply-To: <20200312195610.346362-1-jolsa@kernel.org> References: <20200312195610.346362-1-jolsa@kernel.org> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: kernel.org Sender: bpf-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org Adding dispatchers to kallsyms. It's displayed as bpf_dispatcher_<name>, where <name> is the name of the dispatcher. Signed-off-by: Jiri Olsa --- include/linux/bpf.h | 19 ++++++++++++------- kernel/bpf/dispatcher.c | 1 + 2 files changed, 13 insertions(+), 7 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 7edbf30cf2b2..95e13efe658a 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -531,6 +531,7 @@ struct bpf_dispatcher { int num_progs; void *image; u32 image_off; + struct bpf_ksym ksym; }; static __always_inline unsigned int bpf_dispatcher_nop_func( @@ -546,13 +547,17 @@ struct bpf_trampoline *bpf_trampoline_lookup(u64 key); int bpf_trampoline_link_prog(struct bpf_prog *prog); int bpf_trampoline_unlink_prog(struct bpf_prog *prog); void bpf_trampoline_put(struct bpf_trampoline *tr); -#define BPF_DISPATCHER_INIT(name) { \ - .mutex = __MUTEX_INITIALIZER(name.mutex), \ - .func = &name##_func, \ - .progs = {}, \ - .num_progs = 0, \ - .image = NULL, \ - .image_off = 0 \ +#define BPF_DISPATCHER_INIT(_name) { \ + .mutex = __MUTEX_INITIALIZER(_name.mutex), \ + .func = &_name##_func, \ + .progs = {}, \ + .num_progs = 0, \ + .image = NULL, \ + .image_off = 0, \ + .ksym = { \ + .name = #_name, \ + .lnode = LIST_HEAD_INIT(_name.ksym.lnode), \ + }, \ } #define DEFINE_BPF_DISPATCHER(name) \ diff --git a/kernel/bpf/dispatcher.c b/kernel/bpf/dispatcher.c index b3e5b214fed8..a2679bae9e73 100644 --- a/kernel/bpf/dispatcher.c +++ b/kernel/bpf/dispatcher.c @@ -143,6 +143,7 @@ void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from, d->image = bpf_image_alloc(); if (!d->image) goto out; + bpf_image_ksym_add(d->image, &d->ksym); } prev_num_progs = d->num_progs; From patchwork Thu Mar 12 19:56:07 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 1253962 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: incoming-bpf@patchwork.ozlabs.org Delivered-To: patchwork-incoming-bpf@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=bpf-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=kernel.org Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 48dfkG4NnCz9sRN for ; Fri, 13 Mar 2020 06:57:26 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726822AbgCLT5Y convert rfc822-to-8bit (ORCPT ); Thu, 12 Mar 2020 15:57:24 -0400 Received: from us-smtp-1.mimecast.com ([207.211.31.81]:39219 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726824AbgCLT5Y (ORCPT ); Thu, 12 Mar 2020 15:57:24 -0400
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-263-95c27OYnPc6S23i34kjVUg-1; Thu, 12 Mar 2020 15:57:19 -0400 X-MC-Unique: 95c27OYnPc6S23i34kjVUg-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id D82B01937FC2; Thu, 12 Mar 2020 19:57:16 +0000 (UTC) Received: from krava.redhat.com (ovpn-204-40.brq.redhat.com [10.40.204.40]) by smtp.corp.redhat.com (Postfix) with ESMTP id F12345D9C5; Thu, 12 Mar 2020 19:57:10 +0000 (UTC) From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko , Yonghong Song , Song Liu , Martin KaFai Lau , Jakub Kicinski , David Miller , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , John Fastabend , Jesper Dangaard Brouer , Arnaldo Carvalho de Melo , Song Liu Subject: [PATCH 12/15] bpf: Remove bpf_image tree Date: Thu, 12 Mar 2020 20:56:07 +0100 Message-Id: <20200312195610.346362-13-jolsa@kernel.org> In-Reply-To: <20200312195610.346362-1-jolsa@kernel.org> References: <20200312195610.346362-1-jolsa@kernel.org> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: kernel.org Sender: bpf-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org Now that we have all the objects (bpf_prog, bpf_trampoline, bpf_dispatcher) linked in bpf_tree, there's no need to have separate bpf_image tree for images. Reverting the bpf_image tree together with struct bpf_image, because it's no longer needed. Also removing bpf_image_alloc function and adding the original bpf_jit_alloc_exec_page interface instead. The kernel_text_address function can now rely only on is_bpf_text_address, because it checks the bpf_tree that contains all the objects. Keeping bpf_image_ksym_add and bpf_image_ksym_del because they are useful wrappers with perf's ksymbol interface calls. Signed-off-by: Jiri Olsa --- include/linux/bpf.h | 8 +--- kernel/bpf/dispatcher.c | 4 +- kernel/bpf/trampoline.c | 83 +++++------------------------------------ kernel/extable.c | 2 - 4 files changed, 13 insertions(+), 84 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 95e13efe658a..8c9175320d6c 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -583,14 +583,8 @@ void bpf_trampoline_put(struct bpf_trampoline *tr); #define BPF_DISPATCHER_PTR(name) (&bpf_dispatcher_##name) void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from, struct bpf_prog *to); -struct bpf_image { - struct latch_tree_node tnode; - unsigned char data[]; -}; -#define BPF_IMAGE_SIZE (PAGE_SIZE - sizeof(struct bpf_image)) -bool is_bpf_image_address(unsigned long address); -void *bpf_image_alloc(void); /* Called only from JIT-enabled code, so there's no need for stubs. 
*/ +void *bpf_jit_alloc_exec_page(void); void bpf_image_ksym_add(void *data, struct bpf_ksym *ksym); void bpf_image_ksym_del(struct bpf_ksym *ksym); void bpf_ksym_add(struct bpf_ksym *ksym); diff --git a/kernel/bpf/dispatcher.c b/kernel/bpf/dispatcher.c index a2679bae9e73..2444bd15cc2d 100644 --- a/kernel/bpf/dispatcher.c +++ b/kernel/bpf/dispatcher.c @@ -113,7 +113,7 @@ static void bpf_dispatcher_update(struct bpf_dispatcher *d, int prev_num_progs) noff = 0; } else { old = d->image + d->image_off; - noff = d->image_off ^ (BPF_IMAGE_SIZE / 2); + noff = d->image_off ^ (PAGE_SIZE / 2); } new = d->num_progs ? d->image + noff : NULL; @@ -140,7 +140,7 @@ void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from, mutex_lock(&d->mutex); if (!d->image) { - d->image = bpf_image_alloc(); + d->image = bpf_jit_alloc_exec_page(); if (!d->image) goto out; bpf_image_ksym_add(d->image, &d->ksym); diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c index 36549c9afec4..f42f700c1d28 100644 --- a/kernel/bpf/trampoline.c +++ b/kernel/bpf/trampoline.c @@ -18,12 +18,11 @@ const struct bpf_prog_ops bpf_extension_prog_ops = { #define TRAMPOLINE_TABLE_SIZE (1 << TRAMPOLINE_HASH_BITS) static struct hlist_head trampoline_table[TRAMPOLINE_TABLE_SIZE]; -static struct latch_tree_root image_tree __cacheline_aligned; -/* serializes access to trampoline_table and image_tree */ +/* serializes access to trampoline_table */ static DEFINE_MUTEX(trampoline_mutex); -static void *bpf_jit_alloc_exec_page(void) +void *bpf_jit_alloc_exec_page(void) { void *image; @@ -39,78 +38,20 @@ static void *bpf_jit_alloc_exec_page(void) return image; } -static __always_inline bool image_tree_less(struct latch_tree_node *a, - struct latch_tree_node *b) -{ - struct bpf_image *ia = container_of(a, struct bpf_image, tnode); - struct bpf_image *ib = container_of(b, struct bpf_image, tnode); - - return ia < ib; -} - -static __always_inline int image_tree_comp(void *addr, struct latch_tree_node *n) -{ - void *image = container_of(n, struct bpf_image, tnode); - - if (addr < image) - return -1; - if (addr >= image + PAGE_SIZE) - return 1; - - return 0; -} - -static const struct latch_tree_ops image_tree_ops = { - .less = image_tree_less, - .comp = image_tree_comp, -}; - -static void *__bpf_image_alloc(bool lock) -{ - struct bpf_image *image; - - image = bpf_jit_alloc_exec_page(); - if (!image) - return NULL; - - if (lock) - mutex_lock(&trampoline_mutex); - latch_tree_insert(&image->tnode, &image_tree, &image_tree_ops); - if (lock) - mutex_unlock(&trampoline_mutex); - return image->data; -} - -void *bpf_image_alloc(void) -{ - return __bpf_image_alloc(true); -} - -bool is_bpf_image_address(unsigned long addr) -{ - bool ret; - - rcu_read_lock(); - ret = latch_tree_find((void *) addr, &image_tree, &image_tree_ops) != NULL; - rcu_read_unlock(); - - return ret; -} - void bpf_image_ksym_add(void *data, struct bpf_ksym *ksym) { ksym->start = (unsigned long) data; - ksym->end = ksym->start + BPF_IMAGE_SIZE; + ksym->end = ksym->start + PAGE_SIZE; bpf_ksym_add(ksym); perf_event_ksymbol(PERF_RECORD_KSYMBOL_TYPE_BPF, ksym->start, - BPF_IMAGE_SIZE, false, ksym->name); + PAGE_SIZE, false, ksym->name); } void bpf_image_ksym_del(struct bpf_ksym *ksym) { bpf_ksym_del(ksym); perf_event_ksymbol(PERF_RECORD_KSYMBOL_TYPE_BPF, ksym->start, - BPF_IMAGE_SIZE, true, ksym->name); + PAGE_SIZE, true, ksym->name); } static void bpf_trampoline_ksym_add(struct bpf_trampoline *tr) @@ -141,7 +82,7 @@ struct bpf_trampoline *bpf_trampoline_lookup(u64 key) 
goto out; /* is_root was checked earlier. No need for bpf_jit_charge_modmem() */ - image = __bpf_image_alloc(false); + image = bpf_jit_alloc_exec_page(); if (!image) { kfree(tr); tr = NULL; @@ -243,8 +184,8 @@ bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total) static int bpf_trampoline_update(struct bpf_trampoline *tr) { - void *old_image = tr->image + ((tr->selector + 1) & 1) * BPF_IMAGE_SIZE/2; - void *new_image = tr->image + (tr->selector & 1) * BPF_IMAGE_SIZE/2; + void *old_image = tr->image + ((tr->selector + 1) & 1) * PAGE_SIZE/2; + void *new_image = tr->image + (tr->selector & 1) * PAGE_SIZE/2; struct bpf_tramp_progs *tprogs; u32 flags = BPF_TRAMP_F_RESTORE_REGS; int err, total; @@ -272,7 +213,7 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr) synchronize_rcu_tasks(); - err = arch_prepare_bpf_trampoline(new_image, new_image + BPF_IMAGE_SIZE / 2, + err = arch_prepare_bpf_trampoline(new_image, new_image + PAGE_SIZE / 2, &tr->func.model, flags, tprogs, tr->func.addr); if (err < 0) @@ -383,8 +324,6 @@ int bpf_trampoline_unlink_prog(struct bpf_prog *prog) void bpf_trampoline_put(struct bpf_trampoline *tr) { - struct bpf_image *image; - if (!tr) return; mutex_lock(&trampoline_mutex); @@ -396,11 +335,9 @@ void bpf_trampoline_put(struct bpf_trampoline *tr) if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[BPF_TRAMP_FEXIT]))) goto out; bpf_image_ksym_del(&tr->ksym); - image = container_of(tr->image, struct bpf_image, data); - latch_tree_erase(&image->tnode, &image_tree, &image_tree_ops); /* wait for tasks to get out of trampoline before freeing it */ synchronize_rcu_tasks(); - bpf_jit_free_exec(image); + bpf_jit_free_exec(tr->image); hlist_del(&tr->hlist); kfree(tr); out: diff --git a/kernel/extable.c b/kernel/extable.c index a0024f27d3a1..7681f87e89dd 100644 --- a/kernel/extable.c +++ b/kernel/extable.c @@ -149,8 +149,6 @@ int kernel_text_address(unsigned long addr) goto out; if (is_bpf_text_address(addr)) goto out; - if (is_bpf_image_address(addr)) - goto out; ret = 0; out: if (no_rcu) From patchwork Thu Mar 12 19:56:08 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 1253964 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=kernel.org Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 48dfkM1YPbz9sRY for ; Fri, 13 Mar 2020 06:57:31 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727076AbgCLT5a convert rfc822-to-8bit (ORCPT ); Thu, 12 Mar 2020 15:57:30 -0400 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:60471 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727070AbgCLT52 (ORCPT ); Thu, 12 Mar 2020 15:57:28 -0400 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-154-tEgeGkoXP3ibxX89_EKqPA-1; Thu, 12 Mar 2020 15:57:23 -0400 X-MC-Unique: tEgeGkoXP3ibxX89_EKqPA-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) 
(using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 153781937FC2; Thu, 12 Mar 2020 19:57:21 +0000 (UTC) Received: from krava.redhat.com (ovpn-204-40.brq.redhat.com [10.40.204.40]) by smtp.corp.redhat.com (Postfix) with ESMTP id 6023F5D9C5; Thu, 12 Mar 2020 19:57:17 +0000 (UTC) From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann Cc: Arnaldo Carvalho de Melo , Song Liu , netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko , Yonghong Song , Martin KaFai Lau , Jakub Kicinski , David Miller , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , John Fastabend , Jesper Dangaard Brouer , Song Liu Subject: [PATCH 13/15] perf tools: Synthesize bpf_trampoline/dispatcher ksymbol event Date: Thu, 12 Mar 2020 20:56:08 +0100 Message-Id: <20200312195610.346362-14-jolsa@kernel.org> In-Reply-To: <20200312195610.346362-1-jolsa@kernel.org> References: <20200312195610.346362-1-jolsa@kernel.org> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: kernel.org Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Synthesize bpf images (trampolines/dispatchers) on start, as ksymbol events from /proc/kallsyms. Having this, perf can recognize samples from those images, and perf report and top show them correctly. The rest of the ksymbol handling is already in place from the bpf programs monitoring, so only the initial state was needed. perf report output: # Overhead Command Shared Object Symbol 12.37% test_progs [kernel.vmlinux] [k] entry_SYSCALL_64 11.80% test_progs [kernel.vmlinux] [k] syscall_return_via_sysret 9.63% test_progs bpf_prog_bcf7977d3b93787c_prog2 [k] bpf_prog_bcf7977d3b93787c_prog2 6.90% test_progs bpf_trampoline_24456 [k] bpf_trampoline_24456 6.36% test_progs [kernel.vmlinux] [k] memcpy_erms Acked-by: Arnaldo Carvalho de Melo Acked-by: Song Liu Signed-off-by: Jiri Olsa --- tools/perf/util/bpf-event.c | 92 +++++++++++++++++++++++++++++++++++++ 1 file changed, 92 insertions(+) diff --git a/tools/perf/util/bpf-event.c b/tools/perf/util/bpf-event.c index a3207d900339..3728db98981e 100644 --- a/tools/perf/util/bpf-event.c +++ b/tools/perf/util/bpf-event.c @@ -6,6 +6,9 @@ #include #include #include +#include +#include +#include #include "bpf-event.h" #include "debug.h" #include "dso.h" @@ -290,11 +293,81 @@ static int perf_event__synthesize_one_bpf_prog(struct perf_session *session, return err ?
-1 : 0; } +struct kallsyms_parse { + union perf_event *event; + perf_event__handler_t process; + struct machine *machine; + struct perf_tool *tool; +}; + +static int +process_bpf_image(char *name, u64 addr, struct kallsyms_parse *data) +{ + struct machine *machine = data->machine; + union perf_event *event = data->event; + struct perf_record_ksymbol *ksymbol; + + ksymbol = &event->ksymbol; + + *ksymbol = (struct perf_record_ksymbol) { + .header = { + .type = PERF_RECORD_KSYMBOL, + .size = offsetof(struct perf_record_ksymbol, name), + }, + .addr = addr, + .len = page_size, + .ksym_type = PERF_RECORD_KSYMBOL_TYPE_BPF, + .flags = 0, + }; + + strncpy(ksymbol->name, name, KSYM_NAME_LEN); + ksymbol->header.size += PERF_ALIGN(strlen(name) + 1, sizeof(u64)); + memset((void *) event + event->header.size, 0, machine->id_hdr_size); + event->header.size += machine->id_hdr_size; + + return perf_tool__process_synth_event(data->tool, event, machine, + data->process); +} + +static int +kallsyms_process_symbol(void *data, const char *_name, + char type __maybe_unused, u64 start) +{ + char disp[KSYM_NAME_LEN]; + char *module, *name; + unsigned long id; + int err = 0; + + module = strchr(_name, '\t'); + if (!module) + return 0; + + /* We are going after [bpf] module ... */ + if (strcmp(module + 1, "[bpf]")) + return 0; + + name = memdup(_name, (module - _name) + 1); + if (!name) + return -ENOMEM; + + name[module - _name] = 0; + + /* .. and only for trampolines and dispatchers */ + if ((sscanf(name, "bpf_trampoline_%lu", &id) == 1) || + (sscanf(name, "bpf_dispatcher_%s", disp) == 1)) + err = process_bpf_image(name, start, data); + + free(name); + return err; +} + int perf_event__synthesize_bpf_events(struct perf_session *session, perf_event__handler_t process, struct machine *machine, struct record_opts *opts) { + const char *kallsyms_filename = "/proc/kallsyms"; + struct kallsyms_parse arg; union perf_event *event; __u32 id = 0; int err; @@ -303,6 +376,8 @@ int perf_event__synthesize_bpf_events(struct perf_session *session, event = malloc(sizeof(event->bpf) + KSYM_NAME_LEN + machine->id_hdr_size); if (!event) return -1; + + /* Synthesize all the bpf programs in system. */ while (true) { err = bpf_prog_get_next_id(id, &id); if (err) { @@ -335,6 +410,23 @@ int perf_event__synthesize_bpf_events(struct perf_session *session, break; } } + + /* Synthesize all the bpf images - trampolines/dispatchers. 
*/ + if (symbol_conf.kallsyms_name != NULL) + kallsyms_filename = symbol_conf.kallsyms_name; + + arg = (struct kallsyms_parse) { + .event = event, + .process = process, + .machine = machine, + .tool = session->tool, + }; + + if (kallsyms__parse(kallsyms_filename, &arg, kallsyms_process_symbol)) { + pr_err("%s: failed to synthesize bpf images: %s\n", + __func__, strerror(errno)); + } + free(event); return err; } From patchwork Thu Mar 12 19:56:09 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 1253966 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: incoming-bpf@patchwork.ozlabs.org Delivered-To: patchwork-incoming-bpf@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=bpf-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=kernel.org Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 48dfkN6J6Rz9sRN for ; Fri, 13 Mar 2020 06:57:32 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727079AbgCLT5c convert rfc822-to-8bit (ORCPT ); Thu, 12 Mar 2020 15:57:32 -0400 Received: from us-smtp-1.mimecast.com ([207.211.31.81]:20140 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727077AbgCLT5c (ORCPT ); Thu, 12 Mar 2020 15:57:32 -0400 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-23-IxvI6rggMmeLaU-M9WQ-LA-1; Thu, 12 Mar 2020 15:57:27 -0400 X-MC-Unique: IxvI6rggMmeLaU-M9WQ-LA-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 634F1801E6C; Thu, 12 Mar 2020 19:57:25 +0000 (UTC) Received: from krava.redhat.com (ovpn-204-40.brq.redhat.com [10.40.204.40]) by smtp.corp.redhat.com (Postfix) with ESMTP id 76EEF5D9C5; Thu, 12 Mar 2020 19:57:21 +0000 (UTC) From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann Cc: Arnaldo Carvalho de Melo , Song Liu , netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko , Yonghong Song , Martin KaFai Lau , Jakub Kicinski , David Miller , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , John Fastabend , Jesper Dangaard Brouer , Song Liu Subject: [PATCH 14/15] perf tools: Set ksymbol dso as loaded on arrival Date: Thu, 12 Mar 2020 20:56:09 +0100 Message-Id: <20200312195610.346362-15-jolsa@kernel.org> In-Reply-To: <20200312195610.346362-1-jolsa@kernel.org> References: <20200312195610.346362-1-jolsa@kernel.org> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: kernel.org Sender: bpf-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org There's no special load action for ksymbol data on map__load/dso__load action, where the kernel is getting loaded. It only gets confused with kernel kallsyms/vmlinux load for bpf object, which fails and could mess up with the map. Disabling any further load of the map for ksymbol related dso/map. 
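The guard this change relies on can be shown with a small self-contained sketch. The struct and return values below are simplified stand-ins, not perf's real struct dso or util/symbol.c code; they only illustrate why a dso flagged as loaded keeps the symbols attached from the ksymbol event:

  /*
   * Simplified stand-in types (not perf's real ones) showing the
   * "mark it loaded, so later loads are no-ops" pattern this patch
   * applies to ksymbol dsos.
   */
  #include <stdbool.h>
  #include <stdio.h>

  struct dso {
  	bool loaded;		/* what dso__set_loaded() flips in perf */
  	int  nr_symbols;	/* e.g. the symbol created from the ksymbol event */
  };

  static int dso__load(struct dso *dso)
  {
  	if (dso->loaded)		/* set on ksymbol arrival by this patch */
  		return dso->nr_symbols;	/* keep existing symbols, do nothing */
  	/*
  	 * Here real perf would try the vmlinux/kallsyms/ELF loaders, which
  	 * fail for a bpf trampoline/dispatcher image and could mess up the
  	 * map that was just populated.
  	 */
  	return -1;
  }

  int main(void)
  {
  	struct dso image_dso = { .loaded = true, .nr_symbols = 1 };

  	printf("dso__load() -> %d\n", dso__load(&image_dso));	/* prints 1 */
  	return 0;
  }

In perf itself the flag is set right after maps__insert(), so the usual kallsyms/vmlinux loaders are skipped for these maps.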
Acked-by: Arnaldo Carvalho de Melo Acked-by: Song Liu Signed-off-by: Jiri Olsa --- tools/perf/util/machine.c | 1 + 1 file changed, 1 insertion(+) diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c index fb5c2cd44d30..463ada5117f8 100644 --- a/tools/perf/util/machine.c +++ b/tools/perf/util/machine.c @@ -742,6 +742,7 @@ static int machine__process_ksymbol_register(struct machine *machine, map->start = event->ksymbol.addr; map->end = map->start + event->ksymbol.len; maps__insert(&machine->kmaps, map); + dso__set_loaded(dso); } sym = symbol__new(map->map_ip(map, map->start), From patchwork Thu Mar 12 19:56:10 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Olsa X-Patchwork-Id: 1253968 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: incoming-bpf@patchwork.ozlabs.org Delivered-To: patchwork-incoming-bpf@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=bpf-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=kernel.org Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 48dfkV2DMLz9sNg for ; Fri, 13 Mar 2020 06:57:38 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727070AbgCLT5h convert rfc822-to-8bit (ORCPT ); Thu, 12 Mar 2020 15:57:37 -0400 Received: from us-smtp-delivery-1.mimecast.com ([207.211.31.120]:40266 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726776AbgCLT5h (ORCPT ); Thu, 12 Mar 2020 15:57:37 -0400 Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-59-iqrqSnl9NQGQKELD8lnsAw-1; Thu, 12 Mar 2020 15:57:33 -0400 X-MC-Unique: iqrqSnl9NQGQKELD8lnsAw-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id CAC39A0CBF; Thu, 12 Mar 2020 19:57:30 +0000 (UTC) Received: from krava.redhat.com (ovpn-204-40.brq.redhat.com [10.40.204.40]) by smtp.corp.redhat.com (Postfix) with ESMTP id D9EB35D9C5; Thu, 12 Mar 2020 19:57:25 +0000 (UTC) From: Jiri Olsa To: Alexei Starovoitov , Daniel Borkmann Cc: Arnaldo Carvalho de Melo , Song Liu , netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko , Yonghong Song , Martin KaFai Lau , Jakub Kicinski , David Miller , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , John Fastabend , Jesper Dangaard Brouer , Song Liu Subject: [PATCH 15/15] perf annotate: Add base support for bpf_image Date: Thu, 12 Mar 2020 20:56:10 +0100 Message-Id: <20200312195610.346362-16-jolsa@kernel.org> In-Reply-To: <20200312195610.346362-1-jolsa@kernel.org> References: <20200312195610.346362-1-jolsa@kernel.org> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: kernel.org Sender: bpf-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org Adding the DSO_BINARY_TYPE__BPF_IMAGE dso binary type to recognize bpf images that carry trampoline or dispatcher. Upcoming patches will add support to read the image data, store it within the BPF feature in perf.data and display it for annotation purposes. 
Currently we only display the following message: # ./perf annotate bpf_trampoline_24456 --stdio Percent | Source code & Disassembly of . for cycles (504 ... --------------------------------------------------------------- ... : to be implemented Acked-by: Arnaldo Carvalho de Melo Acked-by: Song Liu Signed-off-by: Jiri Olsa --- tools/perf/util/annotate.c | 20 ++++++++++++++++++++ tools/perf/util/dso.c | 1 + tools/perf/util/dso.h | 1 + tools/perf/util/machine.c | 11 +++++++++++ tools/perf/util/symbol.c | 1 + 5 files changed, 34 insertions(+) diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c index ca73fb74ad03..d9e606e11936 100644 --- a/tools/perf/util/annotate.c +++ b/tools/perf/util/annotate.c @@ -1843,6 +1843,24 @@ static int symbol__disassemble_bpf(struct symbol *sym __maybe_unused, } #endif // defined(HAVE_LIBBFD_SUPPORT) && defined(HAVE_LIBBPF_SUPPORT) +static int +symbol__disassemble_bpf_image(struct symbol *sym, + struct annotate_args *args) +{ + struct annotation *notes = symbol__annotation(sym); + struct disasm_line *dl; + + args->offset = -1; + args->line = strdup("to be implemented"); + args->line_nr = 0; + dl = disasm_line__new(args); + if (dl) + annotation_line__add(&dl->al, &notes->src->source); + + free(args->line); + return 0; +} + /* * Possibly create a new version of line with tabs expanded. Returns the * existing or new line, storage is updated if a new line is allocated. If @@ -1942,6 +1960,8 @@ static int symbol__disassemble(struct symbol *sym, struct annotate_args *args) if (dso->binary_type == DSO_BINARY_TYPE__BPF_PROG_INFO) { return symbol__disassemble_bpf(sym, args); + } else if (dso->binary_type == DSO_BINARY_TYPE__BPF_IMAGE) { + return symbol__disassemble_bpf_image(sym, args); } else if (dso__is_kcore(dso)) { kce.kcore_filename = symfs_filename; kce.addr = map__rip_2objdump(map, sym->start); diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c index 91f21239608b..f338990e0fe6 100644 --- a/tools/perf/util/dso.c +++ b/tools/perf/util/dso.c @@ -191,6 +191,7 @@ int dso__read_binary_type_filename(const struct dso *dso, case DSO_BINARY_TYPE__GUEST_KALLSYMS: case DSO_BINARY_TYPE__JAVA_JIT: case DSO_BINARY_TYPE__BPF_PROG_INFO: + case DSO_BINARY_TYPE__BPF_IMAGE: case DSO_BINARY_TYPE__NOT_FOUND: ret = -1; break; diff --git a/tools/perf/util/dso.h b/tools/perf/util/dso.h index 2db64b79617a..9553a1fd9e8a 100644 --- a/tools/perf/util/dso.h +++ b/tools/perf/util/dso.h @@ -40,6 +40,7 @@ enum dso_binary_type { DSO_BINARY_TYPE__GUEST_KCORE, DSO_BINARY_TYPE__OPENEMBEDDED_DEBUGINFO, DSO_BINARY_TYPE__BPF_PROG_INFO, + DSO_BINARY_TYPE__BPF_IMAGE, DSO_BINARY_TYPE__NOT_FOUND, }; diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c index 463ada5117f8..372ed147bed5 100644 --- a/tools/perf/util/machine.c +++ b/tools/perf/util/machine.c @@ -719,6 +719,12 @@ int machine__process_switch_event(struct machine *machine __maybe_unused, return 0; } +static int is_bpf_image(const char *name) +{ + return strncmp(name, "bpf_trampoline_", sizeof("bpf_trampoline_") - 1) == 0 || + strncmp(name, "bpf_dispatcher_", sizeof("bpf_dispatcher_") - 1) == 0; +} + static int machine__process_ksymbol_register(struct machine *machine, union perf_event *event, struct perf_sample *sample __maybe_unused) @@ -743,6 +749,11 @@ static int machine__process_ksymbol_register(struct machine *machine, map->end = map->start + event->ksymbol.len; maps__insert(&machine->kmaps, map); dso__set_loaded(dso); + + if (is_bpf_image(event->ksymbol.name)) { + dso->binary_type = DSO_BINARY_TYPE__BPF_IMAGE; +
dso__set_long_name(dso, "", false); + } } sym = symbol__new(map->map_ip(map, map->start), diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c index 1077013d8ce2..fd0795aa9b9a 100644 --- a/tools/perf/util/symbol.c +++ b/tools/perf/util/symbol.c @@ -1544,6 +1544,7 @@ static bool dso__is_compatible_symtab_type(struct dso *dso, bool kmod, return true; case DSO_BINARY_TYPE__BPF_PROG_INFO: + case DSO_BINARY_TYPE__BPF_IMAGE: case DSO_BINARY_TYPE__NOT_FOUND: default: return false;
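The naming convention the series establishes (bpf_trampoline_<id>, bpf_dispatcher_<name>, module tag [bpf]) can also be checked from user space without perf. The following standalone sketch is only an illustration, not part of any patch above, and keeps error handling minimal:

  /* List bpf trampoline/dispatcher entries from /proc/kallsyms,
   * matching on the same name prefixes the perf patches use.
   */
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
  	char line[512];
  	FILE *f = fopen("/proc/kallsyms", "r");

  	if (!f) {
  		perror("fopen /proc/kallsyms");
  		return 1;
  	}

  	while (fgets(line, sizeof(line), f)) {
  		unsigned long long addr;
  		char type, name[256], mod[64];

  		/* the module field ("[bpf]") is optional, hence the < 3 check */
  		if (sscanf(line, "%llx %c %255s %63s", &addr, &type, name, mod) < 3)
  			continue;

  		if (!strncmp(name, "bpf_trampoline_", sizeof("bpf_trampoline_") - 1) ||
  		    !strncmp(name, "bpf_dispatcher_", sizeof("bpf_dispatcher_") - 1))
  			printf("%016llx %s\n", addr, name);
  	}

  	fclose(f);
  	return 0;
  }

Run as root (or with kptr_restrict relaxed) the printed addresses should fall inside the page-sized images the kernel side registers, which is what lets perf resolve samples hitting them.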