From patchwork Wed Dec 6 06:31:28 2017
X-Patchwork-Submitter: Yonghong Song
X-Patchwork-Id: 845035
X-Patchwork-Delegate: bpf@iogearbox.net
From: Yonghong Song <yhs@fb.com>
Subject: [PATCH net-next v2 1/2] bpf/tracing: allow user space to query prog array on the same tp
Date: Tue, 5 Dec 2017 22:31:28 -0800
Message-ID: <20171206063129.3730876-2-yhs@fb.com>
In-Reply-To: <20171206063129.3730876-1-yhs@fb.com>
References: <20171206063129.3730876-1-yhs@fb.com>
X-Mailing-List: netdev@vger.kernel.org

Commit e87c6bc3852b ("bpf: permit multiple bpf attachments for a
single perf event") added support to attach multiple bpf programs
to a single perf event. Commit 2541517c32be ("tracing, perf:
Implement BPF programs attached to kprobes") utilized the existing
perf ioctl interface and added the command PERF_EVENT_IOC_SET_BPF
to attach a bpf program to a tracepoint.

This patch adds a new ioctl command that, given a perf event fd,
queries the bpf program array attached to the same perf tracepoint
event.

The new uapi ioctl command:
  PERF_EVENT_IOC_QUERY_BPF

The new uapi/linux/perf_event.h structure:
  struct perf_event_query_bpf {
        __u64   prog_ids;
        __u32   prog_cnt;
  };

The usage:
  struct perf_event_query_bpf query;

  query.prog_ids = (__u64)usr_prog_ids_buf;
  query.prog_cnt = usr_prog_ids_buf_len;
  err = ioctl(pmu_efd, PERF_EVENT_IOC_QUERY_BPF, &query);
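For illustration, a fuller user-space sketch follows. It is not part of
the patch itself: the buffer size MAX_PROGS and the function name
query_tp_prog_ids are made up for the example, pmu_efd is assumed to be
a PERF_TYPE_TRACEPOINT perf event fd with bpf programs attached via
PERF_EVENT_IOC_SET_BPF, the caller is assumed to have CAP_SYS_ADMIN,
and the headers are assumed to already contain this patch:

  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <linux/perf_event.h>

  #define MAX_PROGS 64	/* illustrative buffer size */

  /* Print the ids of bpf programs attached to the tracepoint behind
   * pmu_efd, up to MAX_PROGS of them.
   */
  static int query_tp_prog_ids(int pmu_efd)
  {
	__u32 prog_ids[MAX_PROGS];
	struct perf_event_query_bpf query;
	__u32 i;

	/* same double cast the kernel undoes with u64_to_user_ptr() */
	query.prog_ids = (__u64)(unsigned long)prog_ids;
	query.prog_cnt = MAX_PROGS;
	if (ioctl(pmu_efd, PERF_EVENT_IOC_QUERY_BPF, &query) < 0) {
		perror("PERF_EVENT_IOC_QUERY_BPF");
		return -1;
	}

	/* the kernel wrote the total number of attached programs back
	 * into query.prog_cnt; it may exceed the buffer we supplied
	 */
	for (i = 0; i < query.prog_cnt && i < MAX_PROGS; i++)
		printf("attached bpf prog id: %u\n", prog_ids[i]);
	return 0;
  }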
Signed-off-by: Yonghong Song <yhs@fb.com>
Acked-by: Alexei Starovoitov
---
 include/linux/bpf.h             |  4 ++++
 include/uapi/linux/perf_event.h |  6 ++++++
 kernel/bpf/core.c               | 21 +++++++++++++++++++++
 kernel/events/core.c            |  3 +++
 kernel/trace/bpf_trace.c        | 23 +++++++++++++++++++++++
 5 files changed, 57 insertions(+)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index e55e425..f812ac5 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -254,6 +254,7 @@ typedef unsigned long (*bpf_ctx_copy_t)(void *dst, const void *src,
 
 u64 bpf_event_output(struct bpf_map *map, u64 flags, void *meta, u64 meta_size,
 		     void *ctx, u64 ctx_size, bpf_ctx_copy_t ctx_copy);
+int bpf_event_query_prog_array(struct perf_event *event, void __user *info);
 
 int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
 			  union bpf_attr __user *uattr);
@@ -285,6 +286,9 @@ int bpf_prog_array_copy_to_user(struct bpf_prog_array __rcu *progs,
 
 void bpf_prog_array_delete_safe(struct bpf_prog_array __rcu *progs,
 				struct bpf_prog *old_prog);
+int bpf_prog_array_copy_info(struct bpf_prog_array __rcu *array,
+			     __u32 __user *prog_ids, u32 request_cnt,
+			     __u32 __user *prog_cnt);
 int bpf_prog_array_copy(struct bpf_prog_array __rcu *old_array,
 			struct bpf_prog *exclude_prog,
 			struct bpf_prog *include_prog,
diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index b9a4953..fee0b43 100644
--- a/include/uapi/linux/perf_event.h
+++ b/include/uapi/linux/perf_event.h
@@ -418,6 +418,11 @@ struct perf_event_attr {
 	__u16	__reserved_2;	/* align to __u64 */
 };
 
+struct perf_event_query_bpf {
+	__u64	prog_ids;
+	__u32	prog_cnt;
+};
+
 #define perf_flags(attr)	(*(&(attr)->read_format + 1))
 
 /*
@@ -433,6 +438,7 @@ struct perf_event_attr {
 #define PERF_EVENT_IOC_ID		_IOR('$', 7, __u64 *)
 #define PERF_EVENT_IOC_SET_BPF		_IOW('$', 8, __u32)
 #define PERF_EVENT_IOC_PAUSE_OUTPUT	_IOW('$', 9, __u32)
+#define PERF_EVENT_IOC_QUERY_BPF	_IOWR('$', 10, struct perf_event_query_bpf *)
 
 enum perf_event_ioc_flags {
 	PERF_IOC_FLAG_GROUP		= 1U << 0,
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 86b50aa..35b427aa 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1462,6 +1462,8 @@ int bpf_prog_array_copy_to_user(struct bpf_prog_array __rcu *progs,
 	rcu_read_lock();
 	prog = rcu_dereference(progs)->progs;
 	for (; *prog; prog++) {
+		if (*prog == &dummy_bpf_prog.prog)
+			continue;
 		id = (*prog)->aux->id;
 		if (copy_to_user(prog_ids + i, &id, sizeof(id))) {
 			rcu_read_unlock();
@@ -1545,6 +1547,25 @@ int bpf_prog_array_copy(struct bpf_prog_array __rcu *old_array,
 	return 0;
 }
 
+int bpf_prog_array_copy_info(struct bpf_prog_array __rcu *array,
+			     __u32 __user *prog_ids, u32 request_cnt,
+			     __u32 __user *prog_cnt)
+{
+	u32 cnt = 0;
+
+	if (array)
+		cnt = bpf_prog_array_length(array);
+
+	if (copy_to_user(prog_cnt, &cnt, sizeof(cnt)))
+		return -EFAULT;
+
+	/* return early if user requested only program count or nothing to copy */
+	if (!request_cnt || !prog_ids || !cnt)
+		return 0;
+
+	return bpf_prog_array_copy_to_user(array, prog_ids, request_cnt);
+}
+
 static void bpf_prog_free_deferred(struct work_struct *work)
 {
 	struct bpf_prog_aux *aux;
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 16beab4..f10609e 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -4723,6 +4723,9 @@ static long _perf_ioctl(struct perf_event *event, unsigned int cmd, unsigned lon
 		rcu_read_unlock();
 		return 0;
 	}
+
+	case PERF_EVENT_IOC_QUERY_BPF:
+		return bpf_event_query_prog_array(event, (void __user *)arg);
 	default:
 		return -ENOTTY;
 	}
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 0ce99c3..81eedb2 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -820,3 +820,26 @@ void perf_event_detach_bpf_prog(struct perf_event *event)
 unlock:
 	mutex_unlock(&bpf_event_mutex);
 }
+
+int bpf_event_query_prog_array(struct perf_event *event, void __user *info)
+{
+	struct perf_event_query_bpf __user *uquery = info;
+	struct perf_event_query_bpf query = {};
+	int ret;
+
+	if (!capable(CAP_SYS_ADMIN))
+		return -EPERM;
+	if (event->attr.type != PERF_TYPE_TRACEPOINT)
+		return -EINVAL;
+	if (copy_from_user(&query, uquery, sizeof(query)))
+		return -EFAULT;
+
+	mutex_lock(&bpf_event_mutex);
+	ret = bpf_prog_array_copy_info(event->tp_event->prog_array,
+				       u64_to_user_ptr(query.prog_ids),
+				       query.prog_cnt,
+				       &uquery->prog_cnt);
+	mutex_unlock(&bpf_event_mutex);
+
+	return ret;
+}
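A note on usage: the early return in bpf_prog_array_copy_info() above
means that calling the ioctl with prog_cnt == 0 (or a zero prog_ids
pointer) reports only the number of attached programs, which the kernel
still writes back into the user's prog_cnt field. A minimal sketch of
the resulting size-then-fetch pattern follows; it is not part of the
patch, reuses pmu_efd from the earlier example, and additionally needs
stdlib.h for calloc()/free(). Note the count can change between the two
calls:

	struct perf_event_query_bpf query = {};
	__u32 *ids;

	/* first call: prog_cnt == 0, so the kernel only reports the count */
	if (ioctl(pmu_efd, PERF_EVENT_IOC_QUERY_BPF, &query) < 0)
		return -1;
	if (!query.prog_cnt)
		return 0;	/* nothing attached */

	/* second call: fetch up to query.prog_cnt ids into a sized buffer */
	ids = calloc(query.prog_cnt, sizeof(*ids));
	if (!ids)
		return -1;
	query.prog_ids = (__u64)(unsigned long)ids;
	if (ioctl(pmu_efd, PERF_EVENT_IOC_QUERY_BPF, &query) < 0) {
		free(ids);
		return -1;
	}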