From patchwork Sun Dec 29 14:37:36 2019
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 1216091
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiri Olsa <jolsa@kernel.org>
To: Alexei Starovoitov, Daniel Borkmann
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko,
    Yonghong Song, Martin KaFai Lau, Jakub Kicinski, David Miller
Subject: [PATCH 1/5] bpf: Allow non struct type for btf ctx access
Date: Sun, 29 Dec 2019 15:37:36 +0100
Message-Id: <20191229143740.29143-2-jolsa@kernel.org>
In-Reply-To: <20191229143740.29143-1-jolsa@kernel.org>
References: <20191229143740.29143-1-jolsa@kernel.org>

I'm not sure why the restriction was added, but it currently prevents
access to pointers to POD types, such as the const char * buffer
argument, when probing the vfs_read function. Removing the check and
allowing non-struct type access in the program context.
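
For illustration, a minimal program that needs this change could look
like the following. This is an editor's sketch, not part of the patch;
it assumes libbpf's SEC("fentry/...") convention and the fentry calling
convention where ctx is an array of u64 arguments, and the program name
is made up.

  // SPDX-License-Identifier: GPL-2.0
  /* Sketch: fentry program on vfs_read that reads the second argument,
   * a char __user * buffer.  Without the change above the verifier
   * rejects this ctx access because the pointed-to type is not a struct.
   */
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  SEC("fentry/vfs_read")
  int trace_vfs_read(unsigned long long *ctx)
  {
  	/* fentry ctx is an array of u64 arguments; index 1 is 'buf' */
  	const char *buf = (const char *)ctx[1];
  	char first_byte = 0;

  	/* touch one byte through the pointer just to exercise the access */
  	bpf_probe_read(&first_byte, sizeof(first_byte), buf);
  	return 0;
  }

  char _license[] SEC("license") = "GPL";
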
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/bpf/btf.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index ed2075884724..ae90f60ac1b8 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3712,12 +3712,6 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 	/* skip modifiers */
 	while (btf_type_is_modifier(t))
 		t = btf_type_by_id(btf, t->type);
-	if (!btf_type_is_struct(t)) {
-		bpf_log(log,
-			"func '%s' arg%d type %s is not a struct\n",
-			tname, arg, btf_kind_str[BTF_INFO_KIND(t->info)]);
-		return false;
-	}
 	bpf_log(log, "func '%s' arg%d has btf_id %d type %s '%s'\n",
 		tname, arg, info->btf_id, btf_kind_str[BTF_INFO_KIND(t->info)],
 		__btf_name_by_offset(btf, t->name_off));

From patchwork Sun Dec 29 14:37:37 2019
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 1216093
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiri Olsa <jolsa@kernel.org>
To: Alexei Starovoitov, Daniel Borkmann
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko,
    Yonghong Song, Martin KaFai Lau, Jakub Kicinski, David Miller
Subject: [PATCH 2/5] bpf: Add bpf_perf_event_output_kfunc
Date: Sun, 29 Dec 2019 15:37:37 +0100
Message-Id: <20191229143740.29143-3-jolsa@kernel.org>
In-Reply-To: <20191229143740.29143-1-jolsa@kernel.org>
References: <20191229143740.29143-1-jolsa@kernel.org>

Adding support to use perf_event_output in BPF_TRACE_FENTRY/BPF_TRACE_FEXIT
programs. There are no pt_regs available in the trampoline, so the helper
fetches scratch registers from the per-CPU bpf_kfunc_regs array instead.
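
For illustration, an fentry program could then emit events like this.
This is an editor's sketch, not part of the patch; the map layout, event
struct and program name are made up, and libbpf's BTF-defined map
convention is assumed.

  // SPDX-License-Identifier: GPL-2.0
  /* Sketch: emit a small event from an fentry program through
   * bpf_perf_event_output(), which this patch wires up for
   * BPF_TRACE_FENTRY/BPF_TRACE_FEXIT programs.
   */
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
  	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
  	__uint(key_size, sizeof(int));
  	__uint(value_size, sizeof(int));
  } events SEC(".maps");

  struct event {
  	__u64 arg0;	/* first argument of the probed function */
  };

  SEC("fentry/vfs_read")
  int trace_enter_vfs_read(unsigned long long *ctx)
  {
  	struct event e = { .arg0 = ctx[0] };

  	bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU,
  			      &e, sizeof(e));
  	return 0;
  }

  char _license[] SEC("license") = "GPL";

On the kernel side the trampoline does not hand the program a pt_regs,
so the helper fetches caller registers into a per-CPU scratch slot; the
three-entry regs array mirrors the existing raw tracepoint helpers and
tolerates a limited amount of nesting, for example a program running in
interrupt context preempting one in process context.
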
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/trace/bpf_trace.c | 67 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 67 insertions(+)

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index e5ef4ae9edb5..1b270bbd9016 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1151,6 +1151,69 @@ raw_tp_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 	}
 }
 
+struct bpf_kfunc_regs {
+	struct pt_regs regs[3];
+};
+
+static DEFINE_PER_CPU(struct bpf_kfunc_regs, bpf_kfunc_regs);
+static DEFINE_PER_CPU(int, bpf_kfunc_nest_level);
+
+static struct pt_regs *get_bpf_kfunc_regs(void)
+{
+	struct bpf_kfunc_regs *tp_regs = this_cpu_ptr(&bpf_kfunc_regs);
+	int nest_level = this_cpu_inc_return(bpf_kfunc_nest_level);
+
+	if (WARN_ON_ONCE(nest_level > ARRAY_SIZE(tp_regs->regs))) {
+		this_cpu_dec(bpf_kfunc_nest_level);
+		return ERR_PTR(-EBUSY);
+	}
+
+	return &tp_regs->regs[nest_level - 1];
+}
+
+static void put_bpf_kfunc_regs(void)
+{
+	this_cpu_dec(bpf_kfunc_nest_level);
+}
+
+BPF_CALL_5(bpf_perf_event_output_kfunc, void *, ctx, struct bpf_map *, map,
+	   u64, flags, void *, data, u64, size)
+{
+	struct pt_regs *regs = get_bpf_kfunc_regs();
+	int ret;
+
+	if (IS_ERR(regs))
+		return PTR_ERR(regs);
+
+	perf_fetch_caller_regs(regs);
+	ret = ____bpf_perf_event_output(regs, map, flags, data, size);
+
+	put_bpf_kfunc_regs();
+	return ret;
+}
+
+static const struct bpf_func_proto bpf_perf_event_output_proto_kfunc = {
+	.func		= bpf_perf_event_output_kfunc,
+	.gpl_only	= true,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+	.arg2_type	= ARG_CONST_MAP_PTR,
+	.arg3_type	= ARG_ANYTHING,
+	.arg4_type	= ARG_PTR_TO_MEM,
+	.arg5_type	= ARG_CONST_SIZE_OR_ZERO,
+};
+
+static const struct bpf_func_proto *
+kfunc_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
+{
+	switch (func_id) {
+	case BPF_FUNC_perf_event_output:
+		return &bpf_perf_event_output_proto_kfunc;
+	default:
+		return tracing_func_proto(func_id, prog);
+	}
+}
+
 static const struct bpf_func_proto *
 tracing_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
@@ -1160,6 +1223,10 @@ tracing_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_skb_output_proto;
 #endif
 	default:
+		if (prog->expected_attach_type == BPF_TRACE_FENTRY ||
+		    prog->expected_attach_type == BPF_TRACE_FEXIT)
+			return kfunc_prog_func_proto(func_id, prog);
+
 		return raw_tp_prog_func_proto(func_id, prog);
 	}
 }

From patchwork Sun Dec 29 14:37:38 2019
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 1216095
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiri Olsa <jolsa@kernel.org>
To: Alexei Starovoitov, Daniel Borkmann
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko,
    Yonghong Song, Martin KaFai Lau, Jakub Kicinski, David Miller
Subject: [PATCH 3/5] bpf: Add bpf_get_stackid_kfunc
Date: Sun, 29 Dec 2019 15:37:38 +0100
Message-Id: <20191229143740.29143-4-jolsa@kernel.org>
In-Reply-To: <20191229143740.29143-1-jolsa@kernel.org>
References: <20191229143740.29143-1-jolsa@kernel.org>

Adding support to use the bpf_get_stackid helper in
BPF_TRACE_FENTRY/BPF_TRACE_FEXIT programs.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/trace/bpf_trace.c | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 1b270bbd9016..c8e0709704f5 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1203,12 +1203,40 @@ static const struct bpf_func_proto bpf_perf_event_output_proto_kfunc = {
 	.arg5_type	= ARG_CONST_SIZE_OR_ZERO,
 };
 
+BPF_CALL_3(bpf_get_stackid_kfunc, void *, args,
+	   struct bpf_map *, map, u64, flags)
+{
+	struct pt_regs *regs = get_bpf_kfunc_regs();
+	int ret;
+
+	if (IS_ERR(regs))
+		return PTR_ERR(regs);
+
+	perf_fetch_caller_regs(regs);
+	/* similar to bpf_perf_event_output_tp, but pt_regs fetched differently */
+	ret = bpf_get_stackid((unsigned long) regs, (unsigned long) map,
+			      flags, 0, 0);
+	put_bpf_kfunc_regs();
+	return ret;
+}
+
+static const struct bpf_func_proto bpf_get_stackid_proto_kfunc = {
+	.func		= bpf_get_stackid_kfunc,
+	.gpl_only	= true,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+	.arg2_type	= ARG_CONST_MAP_PTR,
+	.arg3_type	= ARG_ANYTHING,
+};
+
 static const struct bpf_func_proto *
 kfunc_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
 	switch (func_id) {
 	case BPF_FUNC_perf_event_output:
 		return &bpf_perf_event_output_proto_kfunc;
+	case BPF_FUNC_get_stackid:
+		return &bpf_get_stackid_proto_kfunc;
 	default:
 		return tracing_func_proto(func_id, prog);
 	}
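
For illustration, a program could collect kernel stack traces with
bpf_get_stackid() from an fentry hook as in the sketch below. This is
an editor's example, not part of the patch; the map and program names
are made up and libbpf's BTF-defined map convention is assumed.

  // SPDX-License-Identifier: GPL-2.0
  /* Sketch: store the current kernel stack in a stack trace map from
   * an fentry program, using the helper enabled by the patch above.
   */
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
  	__uint(type, BPF_MAP_TYPE_STACK_TRACE);
  	__uint(max_entries, 1024);
  	__uint(key_size, sizeof(__u32));
  	__uint(value_size, 127 * sizeof(__u64));	/* PERF_MAX_STACK_DEPTH */
  } stackmap SEC(".maps");

  SEC("fentry/vfs_read")
  int stack_vfs_read(unsigned long long *ctx)
  {
  	/* returns the id of the stored stack, or a negative error */
  	long id = bpf_get_stackid(ctx, &stackmap, 0);

  	return id < 0;	/* the return value of fentry programs is ignored */
  }

  char _license[] SEC("license") = "GPL";
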
From patchwork Sun Dec 29 14:37:39 2019
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 1216097
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiri Olsa <jolsa@kernel.org>
To: Alexei Starovoitov, Daniel Borkmann
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko,
    Yonghong Song, Martin KaFai Lau, Jakub Kicinski, David Miller
Subject: [PATCH 4/5] bpf: Add bpf_get_stack_kfunc
Date: Sun, 29 Dec 2019 15:37:39 +0100
Message-Id: <20191229143740.29143-5-jolsa@kernel.org>
In-Reply-To: <20191229143740.29143-1-jolsa@kernel.org>
References: <20191229143740.29143-1-jolsa@kernel.org>

Adding support to use the bpf_get_stack helper in
BPF_TRACE_FENTRY/BPF_TRACE_FEXIT programs.
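
For illustration, the same kind of program could copy the raw stack
into a local buffer with bpf_get_stack(). Again an editor's sketch,
not part of the patch; the program name is made up.

  // SPDX-License-Identifier: GPL-2.0
  /* Sketch: dump up to 16 kernel stack entries from an fentry program,
   * using the helper enabled by the patch above.
   */
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  SEC("fentry/vfs_read")
  int stack_dump_vfs_read(unsigned long long *ctx)
  {
  	__u64 stack[16] = {};
  	long len;

  	/* len is the number of bytes written, or a negative error */
  	len = bpf_get_stack(ctx, stack, sizeof(stack), 0);

  	return len < 0;	/* the return value of fentry programs is ignored */
  }

  char _license[] SEC("license") = "GPL";
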
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/trace/bpf_trace.c | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index c8e0709704f5..02979c5d6357 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1229,6 +1229,32 @@ static const struct bpf_func_proto bpf_get_stackid_proto_kfunc = {
 	.arg3_type	= ARG_ANYTHING,
 };
 
+BPF_CALL_4(bpf_get_stack_kfunc, void *, args,
+	   void *, buf, u32, size, u64, flags)
+{
+	struct pt_regs *regs = get_bpf_kfunc_regs();
+	int ret;
+
+	if (IS_ERR(regs))
+		return PTR_ERR(regs);
+
+	perf_fetch_caller_regs(regs);
+	ret = bpf_get_stack((unsigned long) regs, (unsigned long) buf,
+			    (unsigned long) size, flags, 0);
+	put_bpf_kfunc_regs();
+	return ret;
+}
+
+static const struct bpf_func_proto bpf_get_stack_proto_kfunc = {
+	.func		= bpf_get_stack_kfunc,
+	.gpl_only	= true,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+	.arg2_type	= ARG_PTR_TO_MEM,
+	.arg3_type	= ARG_CONST_SIZE_OR_ZERO,
+	.arg4_type	= ARG_ANYTHING,
+};
+
 static const struct bpf_func_proto *
 kfunc_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
@@ -1237,6 +1263,8 @@ kfunc_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_perf_event_output_proto_kfunc;
 	case BPF_FUNC_get_stackid:
 		return &bpf_get_stackid_proto_kfunc;
+	case BPF_FUNC_get_stack:
+		return &bpf_get_stack_proto_kfunc;
 	default:
 		return tracing_func_proto(func_id, prog);
 	}

From patchwork Sun Dec 29 14:37:40 2019
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 1216099
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiri Olsa <jolsa@kernel.org>
To: Alexei Starovoitov, Daniel Borkmann
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Andrii Nakryiko,
    Yonghong Song, Martin KaFai Lau, Jakub Kicinski, David Miller
Subject: [PATCH 5/5] bpf: Allow to resolve bpf trampoline in unwind
Date: Sun, 29 Dec 2019 15:37:40 +0100
Message-Id: <20191229143740.29143-6-jolsa@kernel.org>
In-Reply-To: <20191229143740.29143-1-jolsa@kernel.org>
References: <20191229143740.29143-1-jolsa@kernel.org>

When unwinding the stack we need to identify each address in order to
successfully continue. Adding a latch tree that keeps the trampoline
images for quick lookup during the unwind.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/linux/bpf.h     |  6 ++++++
 kernel/bpf/core.c       |  2 ++
 kernel/bpf/trampoline.c | 35 +++++++++++++++++++++++++++++++++++
 3 files changed, 43 insertions(+)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index b14e51d56a82..66825c821ac9 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -470,6 +470,7 @@ struct bpf_trampoline {
 	/* Executable image of trampoline */
 	void *image;
 	u64 selector;
+	struct latch_tree_node tnode;
 };
 
 #define BPF_DISPATCHER_MAX 48 /* Fits in 2048B */
@@ -502,6 +503,7 @@ struct bpf_trampoline *bpf_trampoline_lookup(u64 key);
 int bpf_trampoline_link_prog(struct bpf_prog *prog);
 int bpf_trampoline_unlink_prog(struct bpf_prog *prog);
 void bpf_trampoline_put(struct bpf_trampoline *tr);
+bool is_bpf_trampoline(void *addr);
 void *bpf_jit_alloc_exec_page(void);
 #define BPF_DISPATCHER_INIT(name) {			\
 	.mutex = __MUTEX_INITIALIZER(name.mutex),	\
@@ -555,6 +557,10 @@ static inline void bpf_trampoline_put(struct bpf_trampoline *tr) {}
 static inline void bpf_dispatcher_change_prog(struct bpf_dispatcher *d,
 					      struct bpf_prog *from,
 					      struct bpf_prog *to) {}
+static inline bool is_bpf_trampoline(void *addr)
+{
+	return false;
+}
 #endif
 
 struct bpf_func_info_aux {
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 29d47aae0dd1..63a515b5aa7b 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -704,6 +704,8 @@ bool is_bpf_text_address(unsigned long addr)
 
 	rcu_read_lock();
 	ret = bpf_prog_kallsyms_find(addr) != NULL;
+	if (!ret)
+		ret = is_bpf_trampoline((void *) addr);
 	rcu_read_unlock();
 
 	return ret;
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 505f4e4b31d2..4b5f0d0b0072 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -4,16 +4,44 @@
 #include <linux/hash.h>
 #include <linux/bpf.h>
 #include <linux/filter.h>
+#include <linux/rbtree_latch.h>
 
 /* btf_vmlinux has ~22k attachable functions. 1k htab is enough. */
 #define TRAMPOLINE_HASH_BITS 10
 #define TRAMPOLINE_TABLE_SIZE (1 << TRAMPOLINE_HASH_BITS)
 
 static struct hlist_head trampoline_table[TRAMPOLINE_TABLE_SIZE];
+static struct latch_tree_root tree __cacheline_aligned;
 
 /* serializes access to trampoline_table */
 static DEFINE_MUTEX(trampoline_mutex);
 
+static __always_inline bool tree_less(struct latch_tree_node *a,
+				      struct latch_tree_node *b)
+{
+	struct bpf_trampoline *ta = container_of(a, struct bpf_trampoline, tnode);
+	struct bpf_trampoline *tb = container_of(b, struct bpf_trampoline, tnode);
+
+	return ta->image < tb->image;
+}
+
+static __always_inline int tree_comp(void *addr, struct latch_tree_node *n)
+{
+	struct bpf_trampoline *tr = container_of(n, struct bpf_trampoline, tnode);
+
+	if (addr < tr->image)
+		return -1;
+	if (addr >= tr->image + PAGE_SIZE)
+		return 1;
+
+	return 0;
+}
+
+static const struct latch_tree_ops tree_ops = {
+	.less	= tree_less,
+	.comp	= tree_comp,
+};
+
 void *bpf_jit_alloc_exec_page(void)
 {
 	void *image;
@@ -30,6 +58,11 @@ void *bpf_jit_alloc_exec_page(void)
 	return image;
 }
 
+bool is_bpf_trampoline(void *addr)
+{
+	return latch_tree_find(addr, &tree, &tree_ops) != NULL;
+}
+
 struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
 {
 	struct bpf_trampoline *tr;
@@ -65,6 +98,7 @@ struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
 	for (i = 0; i < BPF_TRAMP_MAX; i++)
 		INIT_HLIST_HEAD(&tr->progs_hlist[i]);
 	tr->image = image;
+	latch_tree_insert(&tr->tnode, &tree, &tree_ops);
 out:
 	mutex_unlock(&trampoline_mutex);
 	return tr;
@@ -252,6 +286,7 @@ void bpf_trampoline_put(struct bpf_trampoline *tr)
 		goto out;
 	bpf_jit_free_exec(tr->image);
 	hlist_del(&tr->hlist);
+	latch_tree_erase(&tr->tnode, &tree, &tree_ops);
 	kfree(tr);
 out:
 	mutex_unlock(&trampoline_mutex);
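
A latch tree fits here presumably because the lookup runs from the
unwinder, in contexts where taking trampoline_mutex is not possible;
latch tree reads are lockless, while updates in this patch stay under
the existing mutex. As an editor's illustration (not part of the patch,
function name made up), a stack-walking consumer only sees the change
through kernel_text_address():

  /* Sketch: count how many captured return addresses resolve to kernel
   * text.  kernel_text_address() ends up in is_bpf_text_address(), which
   * with this patch also recognizes BPF trampoline pages, so frames
   * inside a trampoline no longer look like unknown addresses.
   */
  #include <linux/kernel.h>

  static int count_text_frames(const unsigned long *frames, int nr)
  {
  	int i, text = 0;

  	for (i = 0; i < nr; i++)
  		if (kernel_text_address(frames[i]))
  			text++;

  	return text;
  }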