From patchwork Wed May 22 20:53:50 2019
X-Patchwork-Id: 1103666
Date: Wed, 22 May 2019 13:53:50 -0700
Message-Id: <20190522205353.140648-1-sdf@google.com>
Subject: [PATCH bpf-next v2 1/4] bpf: remove __rcu annotations from bpf_prog_array
From: Stanislav Fomichev
To: netdev@vger.kernel.org, bpf@vger.kernel.org
Cc: davem@davemloft.net, ast@kernel.org, daniel@iogearbox.net, Roman Gushchin

Drop __rcu annotations and rcu read sections from the bpf_prog_array
helper functions. They are not needed, since all existing callers call
those helpers from the rcu update side while holding a mutex. This
guarantees that a use-after-free cannot happen.
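The update-side guarantee described above can be sketched as a plain userspace analogy (this is not kernel code: the `mtx_held` flag stands in for a real mutex plus lockdep, `struct prog_array`, `deref_protected()` and `update_array()` are illustrative stand-ins, and no real RCU is involved):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model: every update of 'progs' happens under 'mtx', so the
 * update side may use a plain pointer; only lockless readers would need
 * rcu_dereference()-style annotations. */
struct prog_array { int cnt; };

static struct prog_array *progs;   /* shared pointer, updated under mtx */
static int mtx_held;               /* stand-in for a real mutex */

static void mtx_lock(void)   { assert(!mtx_held); mtx_held = 1; }
static void mtx_unlock(void) { assert(mtx_held);  mtx_held = 0; }

/* Analogue of rcu_dereference_protected(progs, lockdep_is_held(&mtx)):
 * a plain read that is only legal while the mutex is held. */
static struct prog_array *deref_protected(void)
{
	assert(mtx_held);
	return progs;
}

/* Update side: replace the array while holding the mutex. The old copy
 * is returned so the caller could free it after a grace period. */
static struct prog_array *update_array(struct prog_array *new_array)
{
	struct prog_array *old;

	mtx_lock();
	old = deref_protected();
	progs = new_array;
	mtx_unlock();
	return old;
}
```

Because all mutations are serialized by the one mutex, no reader of `progs` on the update side can observe a half-published pointer, which is why the `__rcu` annotation buys nothing for these helpers.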
In the next patches I'll fix the callers with missing
rcu_dereference_protected to make sparse/lockdep happy. The proper way
to use these helpers is:

	struct bpf_prog_array __rcu *progs = ...;
	struct bpf_prog_array *p;

	mutex_lock(&mtx);
	p = rcu_dereference_protected(progs, lockdep_is_held(&mtx));
	bpf_prog_array_length(p);
	bpf_prog_array_copy_to_user(p, ...);
	bpf_prog_array_delete_safe(p, ...);
	bpf_prog_array_copy_info(p, ...);
	bpf_prog_array_copy(p, ...);
	bpf_prog_array_free(p);
	mutex_unlock(&mtx);

No functional changes! rcu_dereference_protected with lockdep_is_held
should catch any cases where we update the prog array without a mutex
(I've looked at existing call sites and I think we hold a mutex
everywhere).

One possible complication might be with Roman's set of patches to
decouple cgroup_bpf lifetime from the cgroup. In that case we can use
rcu_dereference_check(..., 1), since we know that there should not be
any existing users when we dismantle the cgroup.

v2:
* remove comment about potential race; that can't happen because all
  callers are in the rcu-update section

Cc: Roman Gushchin
Signed-off-by: Stanislav Fomichev
---
 include/linux/bpf.h | 12 ++++++------
 kernel/bpf/core.c   | 37 +++++++++++++------------------------
 2 files changed, 19 insertions(+), 30 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 4fb3aa2dc975..88ea32358593 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -513,17 +513,17 @@ struct bpf_prog_array {
 };
 
 struct bpf_prog_array *bpf_prog_array_alloc(u32 prog_cnt, gfp_t flags);
-void bpf_prog_array_free(struct bpf_prog_array __rcu *progs);
-int bpf_prog_array_length(struct bpf_prog_array __rcu *progs);
-int bpf_prog_array_copy_to_user(struct bpf_prog_array __rcu *progs,
+void bpf_prog_array_free(struct bpf_prog_array *progs);
+int bpf_prog_array_length(struct bpf_prog_array *progs);
+int bpf_prog_array_copy_to_user(struct bpf_prog_array *progs,
 				__u32 __user *prog_ids, u32 cnt);
-void bpf_prog_array_delete_safe(struct bpf_prog_array __rcu *progs,
+void bpf_prog_array_delete_safe(struct bpf_prog_array *progs,
 				struct bpf_prog *old_prog);
-int bpf_prog_array_copy_info(struct bpf_prog_array __rcu *array,
+int bpf_prog_array_copy_info(struct bpf_prog_array *array,
 			     u32 *prog_ids, u32 request_cnt,
 			     u32 *prog_cnt);
-int bpf_prog_array_copy(struct bpf_prog_array __rcu *old_array,
+int bpf_prog_array_copy(struct bpf_prog_array *old_array,
 			struct bpf_prog *exclude_prog,
 			struct bpf_prog *include_prog,
 			struct bpf_prog_array **new_array);
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 242a643af82f..aad86c8a0d61 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1795,38 +1795,33 @@ struct bpf_prog_array *bpf_prog_array_alloc(u32 prog_cnt, gfp_t flags)
 	return &empty_prog_array.hdr;
 }
 
-void bpf_prog_array_free(struct bpf_prog_array __rcu *progs)
+void bpf_prog_array_free(struct bpf_prog_array *progs)
 {
-	if (!progs ||
-	    progs == (struct bpf_prog_array __rcu *)&empty_prog_array.hdr)
+	if (!progs || progs == &empty_prog_array.hdr)
 		return;
 	kfree_rcu(progs, rcu);
 }
 
-int bpf_prog_array_length(struct bpf_prog_array __rcu *array)
+int bpf_prog_array_length(struct bpf_prog_array *array)
 {
 	struct bpf_prog_array_item *item;
 	u32 cnt = 0;
 
-	rcu_read_lock();
-	item = rcu_dereference(array)->items;
-	for (; item->prog; item++)
+	for (item = array->items; item->prog; item++)
 		if (item->prog != &dummy_bpf_prog.prog)
 			cnt++;
-	rcu_read_unlock();
 	return cnt;
 }
 
-static bool bpf_prog_array_copy_core(struct bpf_prog_array __rcu *array,
+static bool bpf_prog_array_copy_core(struct bpf_prog_array *array,
 				     u32 *prog_ids,
 				     u32 request_cnt)
 {
 	struct bpf_prog_array_item *item;
 	int i = 0;
 
-	item = rcu_dereference_check(array, 1)->items;
-	for (; item->prog; item++) {
+	for (item = array->items; item->prog; item++) {
 		if (item->prog == &dummy_bpf_prog.prog)
 			continue;
 		prog_ids[i] = item->prog->aux->id;
@@ -1839,7 +1834,7 @@ static bool bpf_prog_array_copy_core(struct bpf_prog_array __rcu *array,
 	return !!(item->prog);
 }
 
-int bpf_prog_array_copy_to_user(struct bpf_prog_array __rcu *array,
+int bpf_prog_array_copy_to_user(struct bpf_prog_array *array,
 				__u32 __user *prog_ids, u32 cnt)
 {
 	unsigned long err = 0;
@@ -1850,18 +1845,12 @@ int bpf_prog_array_copy_to_user(struct bpf_prog_array __rcu *array,
 	 * cnt = bpf_prog_array_length();
 	 * if (cnt > 0)
 	 *     bpf_prog_array_copy_to_user(..., cnt);
-	 * so below kcalloc doesn't need extra cnt > 0 check, but
-	 * bpf_prog_array_length() releases rcu lock and
-	 * prog array could have been swapped with empty or larger array,
-	 * so always copy 'cnt' prog_ids to the user.
-	 * In a rare race the user will see zero prog_ids
+	 * so below kcalloc doesn't need extra cnt > 0 check.
 	 */
 	ids = kcalloc(cnt, sizeof(u32), GFP_USER | __GFP_NOWARN);
 	if (!ids)
 		return -ENOMEM;
-	rcu_read_lock();
 	nospc = bpf_prog_array_copy_core(array, ids, cnt);
-	rcu_read_unlock();
 	err = copy_to_user(prog_ids, ids, cnt * sizeof(u32));
 	kfree(ids);
 	if (err)
@@ -1871,19 +1860,19 @@ int bpf_prog_array_copy_to_user(struct bpf_prog_array __rcu *array,
 	return 0;
 }
 
-void bpf_prog_array_delete_safe(struct bpf_prog_array __rcu *array,
+void bpf_prog_array_delete_safe(struct bpf_prog_array *array,
 				struct bpf_prog *old_prog)
 {
-	struct bpf_prog_array_item *item = array->items;
+	struct bpf_prog_array_item *item;
 
-	for (; item->prog; item++)
+	for (item = array->items; item->prog; item++)
 		if (item->prog == old_prog) {
 			WRITE_ONCE(item->prog, &dummy_bpf_prog.prog);
 			break;
 		}
 }
 
-int bpf_prog_array_copy(struct bpf_prog_array __rcu *old_array,
+int bpf_prog_array_copy(struct bpf_prog_array *old_array,
 			struct bpf_prog *exclude_prog,
 			struct bpf_prog *include_prog,
 			struct bpf_prog_array **new_array)
@@ -1947,7 +1936,7 @@ int bpf_prog_array_copy(struct bpf_prog_array __rcu *old_array,
 	return 0;
 }
 
-int bpf_prog_array_copy_info(struct bpf_prog_array __rcu *array,
+int bpf_prog_array_copy_info(struct bpf_prog_array *array,
 			     u32 *prog_ids, u32 request_cnt,
 			     u32 *prog_cnt)
 {

From patchwork Wed
May 22 20:53:51 2019
X-Patchwork-Id: 1103668
Date: Wed, 22 May 2019 13:53:51 -0700
In-Reply-To: <20190522205353.140648-1-sdf@google.com>
Message-Id: <20190522205353.140648-2-sdf@google.com>
References: <20190522205353.140648-1-sdf@google.com>
Subject: [PATCH bpf-next v2 2/4] bpf: media: properly use bpf_prog_array api
From: Stanislav Fomichev
To: netdev@vger.kernel.org, bpf@vger.kernel.org
Cc: davem@davemloft.net, ast@kernel.org, daniel@iogearbox.net,
    linux-media@vger.kernel.org, Mauro Carvalho Chehab, Sean Young

Now that we don't have __rcu markers on the bpf_prog_array helpers,
let's use proper rcu_dereference_protected to obtain the array pointer
under the mutex.
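The `lirc_rcu_dereference()` macro introduced below wraps `rcu_dereference_protected(p, lockdep_is_held(&ir_raw_handler_lock))`. A rough userspace analogy of its "dereference only while the lock is held" contract (the `handler_lock_held` flag, `lirc_deref()` and `array_len_locked()` are illustrative stand-ins, not code from the patch):

```c
#include <assert.h>
#include <stddef.h>

struct prog_array { int cnt; };

static int handler_lock_held;	/* stand-in for lockdep_is_held() */

/* Analogy of lirc_rcu_dereference(): the dereference is only valid
 * while the handler lock is held, and we assert that at each use. */
#define lirc_deref(p) (assert(handler_lock_held), (p))

/* Mirrors the attach-path shape: dereference once under the lock, then
 * reuse the plain pointer for the length check. */
static int array_len_locked(struct prog_array *progs)
{
	struct prog_array *old_array = lirc_deref(progs);

	return old_array ? old_array->cnt : 0;
}
```

The point of routing every access through one macro is that the lock condition is stated once, next to the lock it depends on, instead of being repeated (or forgotten) at each call site.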
Cc: linux-media@vger.kernel.org
Cc: Mauro Carvalho Chehab
Cc: Sean Young
Signed-off-by: Stanislav Fomichev
---
 drivers/media/rc/bpf-lirc.c | 30 +++++++++++++++++-------------
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git a/drivers/media/rc/bpf-lirc.c b/drivers/media/rc/bpf-lirc.c
index ee657003c1a1..0a0ce620e4a2 100644
--- a/drivers/media/rc/bpf-lirc.c
+++ b/drivers/media/rc/bpf-lirc.c
@@ -8,6 +8,9 @@
 #include <linux/bpf_lirc.h>
 #include "rc-core-priv.h"
 
+#define lirc_rcu_dereference(p)						\
+	rcu_dereference_protected(p, lockdep_is_held(&ir_raw_handler_lock))
+
 /*
  * BPF interface for raw IR
  */
@@ -136,7 +139,7 @@ const struct bpf_verifier_ops lirc_mode2_verifier_ops = {
 
 static int lirc_bpf_attach(struct rc_dev *rcdev, struct bpf_prog *prog)
 {
-	struct bpf_prog_array __rcu *old_array;
+	struct bpf_prog_array *old_array;
 	struct bpf_prog_array *new_array;
 	struct ir_raw_event_ctrl *raw;
 	int ret;
@@ -154,12 +157,12 @@ static int lirc_bpf_attach(struct rc_dev *rcdev, struct bpf_prog *prog)
 		goto unlock;
 	}
 
-	if (raw->progs && bpf_prog_array_length(raw->progs) >= BPF_MAX_PROGS) {
+	old_array = lirc_rcu_dereference(raw->progs);
+	if (old_array && bpf_prog_array_length(old_array) >= BPF_MAX_PROGS) {
 		ret = -E2BIG;
 		goto unlock;
 	}
 
-	old_array = raw->progs;
 	ret = bpf_prog_array_copy(old_array, NULL, prog, &new_array);
 	if (ret < 0)
 		goto unlock;
@@ -174,7 +177,7 @@ static int lirc_bpf_attach(struct rc_dev *rcdev, struct bpf_prog *prog)
 
 static int lirc_bpf_detach(struct rc_dev *rcdev, struct bpf_prog *prog)
 {
-	struct bpf_prog_array __rcu *old_array;
+	struct bpf_prog_array *old_array;
 	struct bpf_prog_array *new_array;
 	struct ir_raw_event_ctrl *raw;
 	int ret;
@@ -192,7 +195,7 @@ static int lirc_bpf_detach(struct rc_dev *rcdev, struct bpf_prog *prog)
 		goto unlock;
 	}
 
-	old_array = raw->progs;
+	old_array = lirc_rcu_dereference(raw->progs);
 	ret = bpf_prog_array_copy(old_array, prog, NULL, &new_array);
 	/*
 	 * Do not use bpf_prog_array_delete_safe() as we would end up
@@ -223,21 +226,22 @@ void lirc_bpf_run(struct rc_dev *rcdev, u32 sample)
 
 /*
  * This should be called once the rc thread has been stopped, so there can be
  * no concurrent bpf execution.
+ *
+ * Should be called with the ir_raw_handler_lock held.
  */
 void lirc_bpf_free(struct rc_dev *rcdev)
 {
 	struct bpf_prog_array_item *item;
+	struct bpf_prog_array *array;
 
-	if (!rcdev->raw->progs)
+	array = lirc_rcu_dereference(rcdev->raw->progs);
+	if (!array)
 		return;
 
-	item = rcu_dereference(rcdev->raw->progs)->items;
-	while (item->prog) {
+	for (item = array->items; item->prog; item++)
 		bpf_prog_put(item->prog);
-		item++;
-	}
 
-	bpf_prog_array_free(rcdev->raw->progs);
+	bpf_prog_array_free(array);
 }
 
 int lirc_prog_attach(const union bpf_attr *attr, struct bpf_prog *prog)
@@ -290,7 +294,7 @@ int lirc_prog_detach(const union bpf_attr *attr)
 int lirc_prog_query(const union bpf_attr *attr, union bpf_attr __user *uattr)
 {
 	__u32 __user *prog_ids = u64_to_user_ptr(attr->query.prog_ids);
-	struct bpf_prog_array __rcu *progs;
+	struct bpf_prog_array *progs;
 	struct rc_dev *rcdev;
 	u32 cnt, flags = 0;
 	int ret;
@@ -311,7 +315,7 @@ int lirc_prog_query(const union bpf_attr *attr, union bpf_attr __user *uattr)
 	if (ret)
 		goto put;
 
-	progs = rcdev->raw->progs;
+	progs = lirc_rcu_dereference(rcdev->raw->progs);
 	cnt = progs ?
bpf_prog_array_length(progs) : 0;
 
 	if (copy_to_user(&uattr->query.prog_cnt, &cnt, sizeof(cnt))) {

From patchwork Wed May 22 20:53:52 2019
X-Patchwork-Id: 1103670
Date: Wed, 22 May 2019 13:53:52 -0700
In-Reply-To: <20190522205353.140648-1-sdf@google.com>
Message-Id: <20190522205353.140648-3-sdf@google.com>
References: <20190522205353.140648-1-sdf@google.com>
Subject: [PATCH bpf-next v2 3/4] bpf: cgroup: properly use bpf_prog_array api
From: Stanislav Fomichev
To: netdev@vger.kernel.org, bpf@vger.kernel.org
Cc: davem@davemloft.net, ast@kernel.org, daniel@iogearbox.net, Roman Gushchin

Now that we don't have __rcu markers on the bpf_prog_array helpers,
let's use proper rcu_dereference_protected to obtain the array pointer
under the mutex.

We also don't need __rcu annotations on cgroup_bpf.inactive, since it's
not read or updated concurrently.
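v2 of this patch swaps the published pointer with `rcu_swap_protected()` instead of `xchg()`: the new array is published and the previous one is left in the local variable for deferred freeing. A minimal userspace sketch of that swap semantic (plain pointers, no real RCU; `swap_protected`, `effective` and `activate()` are illustrative names, not the kernel API):

```c
#include <assert.h>

/* Same shape as rcu_swap_protected(rcu_ptr, ptr, c): publish 'ptr' and
 * leave the previously published value in 'ptr'. */
#define swap_protected(rcu_ptr, ptr)			\
	do {						\
		__typeof__(ptr) __tmp = (rcu_ptr);	\
		(rcu_ptr) = (ptr);			\
		(ptr) = __tmp;				\
	} while (0)

struct prog_array { int cnt; };

static struct prog_array *effective;	/* the published array */

/* Mirrors activate_effective_progs(): after the swap, 'old_array'
 * holds the array that was previously published, which the caller
 * would free after a grace period. */
static struct prog_array *activate(struct prog_array *old_array)
{
	swap_protected(effective, old_array);
	return old_array;
}
```

Compared to `xchg()`, the `_protected` variant documents (and, with lockdep, verifies) that the swap happens under the update-side lock rather than relying on an atomic instruction it doesn't need.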
v2:
* replace xchg with rcu_swap_protected

Cc: Roman Gushchin
Signed-off-by: Stanislav Fomichev
---
 include/linux/bpf-cgroup.h |  2 +-
 kernel/bpf/cgroup.c        | 30 +++++++++++++++++++-----------
 2 files changed, 20 insertions(+), 12 deletions(-)

diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
index cb3c6b3b89c8..94a7bca3a6c4 100644
--- a/include/linux/bpf-cgroup.h
+++ b/include/linux/bpf-cgroup.h
@@ -71,7 +71,7 @@ struct cgroup_bpf {
 	u32 flags[MAX_BPF_ATTACH_TYPE];
 
 	/* temp storage for effective prog array used by prog_attach/detach */
-	struct bpf_prog_array __rcu *inactive;
+	struct bpf_prog_array *inactive;
 };
 
 void cgroup_bpf_put(struct cgroup *cgrp);
diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
index fcde0f7b2585..67525683e982 100644
--- a/kernel/bpf/cgroup.c
+++ b/kernel/bpf/cgroup.c
@@ -22,6 +22,12 @@
 DEFINE_STATIC_KEY_FALSE(cgroup_bpf_enabled_key);
 EXPORT_SYMBOL(cgroup_bpf_enabled_key);
 
+#define cgroup_rcu_dereference(p)					\
+	rcu_dereference_protected(p, lockdep_is_held(&cgroup_mutex))
+
+#define cgroup_rcu_swap(rcu_ptr, ptr)					\
+	rcu_swap_protected(rcu_ptr, ptr, lockdep_is_held(&cgroup_mutex))
+
 /**
  * cgroup_bpf_put() - put references of all bpf programs
  * @cgrp: the cgroup to modify
@@ -29,6 +35,7 @@ EXPORT_SYMBOL(cgroup_bpf_enabled_key);
 void cgroup_bpf_put(struct cgroup *cgrp)
 {
 	enum bpf_cgroup_storage_type stype;
+	struct bpf_prog_array *old_array;
 	unsigned int type;
 
 	for (type = 0; type < ARRAY_SIZE(cgrp->bpf.progs); type++) {
@@ -45,7 +52,8 @@ void cgroup_bpf_put(struct cgroup *cgrp)
 			kfree(pl);
 			static_branch_dec(&cgroup_bpf_enabled_key);
 		}
-		bpf_prog_array_free(cgrp->bpf.effective[type]);
+		old_array = cgroup_rcu_dereference(cgrp->bpf.effective[type]);
+		bpf_prog_array_free(old_array);
 	}
 }
 
@@ -101,7 +109,7 @@ static bool hierarchy_allows_attach(struct cgroup *cgrp,
 */
 static int compute_effective_progs(struct cgroup *cgrp,
 				   enum bpf_attach_type type,
-				   struct bpf_prog_array __rcu **array)
+				   struct bpf_prog_array **array)
 {
 	enum bpf_cgroup_storage_type stype;
 	struct bpf_prog_array *progs;
@@ -139,17 +147,15 @@ static int compute_effective_progs(struct cgroup *cgrp,
 		}
 	} while ((p = cgroup_parent(p)));
 
-	rcu_assign_pointer(*array, progs);
+	*array = progs;
 	return 0;
 }
 
 static void activate_effective_progs(struct cgroup *cgrp,
 				     enum bpf_attach_type type,
-				     struct bpf_prog_array __rcu *array)
+				     struct bpf_prog_array *old_array)
 {
-	struct bpf_prog_array __rcu *old_array;
-
-	old_array = xchg(&cgrp->bpf.effective[type], array);
+	cgroup_rcu_swap(cgrp->bpf.effective[type], old_array);
 	/* free prog array after grace period, since __cgroup_bpf_run_*()
 	 * might be still walking the array
 	 */
@@ -166,7 +172,7 @@ int cgroup_bpf_inherit(struct cgroup *cgrp)
 	 * that array below is variable length
 	 */
 #define NR ARRAY_SIZE(cgrp->bpf.effective)
-	struct bpf_prog_array __rcu *arrays[NR] = {};
+	struct bpf_prog_array *arrays[NR] = {};
 	int i;
 
 	for (i = 0; i < NR; i++)
@@ -444,10 +450,13 @@ int __cgroup_bpf_query(struct cgroup *cgrp, const union bpf_attr *attr,
 	enum bpf_attach_type type = attr->query.attach_type;
 	struct list_head *progs = &cgrp->bpf.progs[type];
 	u32 flags = cgrp->bpf.flags[type];
+	struct bpf_prog_array *effective;
 	int cnt, ret = 0, i;
 
+	effective = cgroup_rcu_dereference(cgrp->bpf.effective[type]);
+
 	if (attr->query.query_flags & BPF_F_QUERY_EFFECTIVE)
-		cnt = bpf_prog_array_length(cgrp->bpf.effective[type]);
+		cnt = bpf_prog_array_length(effective);
 	else
 		cnt = prog_list_length(progs);
 
@@ -464,8 +473,7 @@ int __cgroup_bpf_query(struct cgroup *cgrp, const union bpf_attr *attr,
 	}
 
 	if (attr->query.query_flags & BPF_F_QUERY_EFFECTIVE) {
-		return bpf_prog_array_copy_to_user(cgrp->bpf.effective[type],
-						   prog_ids, cnt);
+		return bpf_prog_array_copy_to_user(effective, prog_ids, cnt);
 	} else {
 		struct bpf_prog_list *pl;
 		u32 id;

From patchwork Wed May 22 20:53:53 2019
X-Patchwork-Id: 1103672
Date: Wed, 22 May 2019 13:53:53 -0700
In-Reply-To: <20190522205353.140648-1-sdf@google.com>
Message-Id: <20190522205353.140648-4-sdf@google.com>
References: <20190522205353.140648-1-sdf@google.com>
Subject: [PATCH bpf-next v2 4/4] bpf: tracing: properly use bpf_prog_array api
From: Stanislav Fomichev
To: netdev@vger.kernel.org, bpf@vger.kernel.org
Cc: davem@davemloft.net, ast@kernel.org, daniel@iogearbox.net,
    Steven Rostedt, Ingo Molnar

Now that we don't have __rcu markers on the bpf_prog_array helpers,
let's use proper rcu_dereference_protected to obtain the array pointer
under the mutex.
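For context on the query path changed below: `bpf_prog_array_copy_info()` is called under `bpf_event_mutex` and copies up to `request_cnt` program ids out of the array while reporting the total count. A simplified userspace model of that contract (an id of 0 terminates the list, `-28` stands in for `-ENOSPC`, and `struct item`/`copy_info()` are illustrative names, not the kernel signatures):

```c
#include <assert.h>

struct item { unsigned int id; };

/* Copy up to 'request_cnt' ids from a zero-terminated item list into
 * 'ids', store the total number of entries in '*prog_cnt', and report
 * -28 (-ENOSPC) when the caller's buffer was too small. */
static int copy_info(const struct item *items, unsigned int *ids,
		     int request_cnt, int *prog_cnt)
{
	int cnt = 0;

	for (; items->id; items++) {
		if (cnt < request_cnt)
			ids[cnt] = items->id;
		cnt++;
	}
	*prog_cnt = cnt;
	return cnt > request_cnt ? -28 : 0;
}
```

The two-step "count, then copy" shape is why holding the mutex across the whole query matters: with the lock held, the array cannot change between the count and the copy.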
Cc: Steven Rostedt
Cc: Ingo Molnar
Signed-off-by: Stanislav Fomichev
---
 kernel/trace/bpf_trace.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index f92d6ad5e080..766e42730318 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -19,6 +19,9 @@
 #include "trace_probe.h"
 #include "trace.h"
 
+#define bpf_event_rcu_dereference(p)					\
+	rcu_dereference_protected(p, lockdep_is_held(&bpf_event_mutex))
+
 #ifdef CONFIG_MODULES
 struct bpf_trace_module {
 	struct module *module;
@@ -1034,7 +1037,7 @@ static DEFINE_MUTEX(bpf_event_mutex);
 int perf_event_attach_bpf_prog(struct perf_event *event,
 			       struct bpf_prog *prog)
 {
-	struct bpf_prog_array __rcu *old_array;
+	struct bpf_prog_array *old_array;
 	struct bpf_prog_array *new_array;
 	int ret = -EEXIST;
 
@@ -1052,7 +1055,7 @@ int perf_event_attach_bpf_prog(struct perf_event *event,
 	if (event->prog)
 		goto unlock;
 
-	old_array = event->tp_event->prog_array;
+	old_array = bpf_event_rcu_dereference(event->tp_event->prog_array);
 	if (old_array &&
 	    bpf_prog_array_length(old_array) >= BPF_TRACE_MAX_PROGS) {
 		ret = -E2BIG;
@@ -1075,7 +1078,7 @@ int perf_event_attach_bpf_prog(struct perf_event *event,
 
 void perf_event_detach_bpf_prog(struct perf_event *event)
 {
-	struct bpf_prog_array __rcu *old_array;
+	struct bpf_prog_array *old_array;
 	struct bpf_prog_array *new_array;
 	int ret;
 
@@ -1084,7 +1087,7 @@ void perf_event_detach_bpf_prog(struct perf_event *event)
 	if (!event->prog)
 		goto unlock;
 
-	old_array = event->tp_event->prog_array;
+	old_array = bpf_event_rcu_dereference(event->tp_event->prog_array);
 	ret = bpf_prog_array_copy(old_array, event->prog, NULL, &new_array);
 	if (ret == -ENOENT)
 		goto unlock;
@@ -1106,6 +1109,7 @@ int perf_event_query_prog_array(struct perf_event *event, void __user *info)
 {
 	struct perf_event_query_bpf __user *uquery = info;
 	struct perf_event_query_bpf query = {};
+	struct bpf_prog_array *progs;
 	u32 *ids, prog_cnt, ids_len;
 	int ret;
 
@@ -1130,10 +1134,8 @@ int perf_event_query_prog_array(struct perf_event *event, void __user *info)
 	 */
 	mutex_lock(&bpf_event_mutex);
-	ret = bpf_prog_array_copy_info(event->tp_event->prog_array,
-				       ids,
-				       ids_len,
-				       &prog_cnt);
+	progs = bpf_event_rcu_dereference(event->tp_event->prog_array);
+	ret = bpf_prog_array_copy_info(progs, ids, ids_len, &prog_cnt);
 	mutex_unlock(&bpf_event_mutex);
 	if (copy_to_user(&uquery->prog_cnt, &prog_cnt, sizeof(prog_cnt)) ||