
[RFC] bpf: Fix potential call bpf_link_free() in atomic context

Message ID 20200917074453.20621-1-songmuchun@bytedance.com
State Accepted
Delegated to: BPF Maintainers
Series [RFC] bpf: Fix potential call bpf_link_free() in atomic context

Commit Message

Muchun Song Sept. 17, 2020, 7:44 a.m. UTC
The in_atomic() macro cannot always detect atomic context; in
particular, it cannot know about held spinlocks in non-preemptible
kernels. Although no caller currently invokes bpf_link_put() while
holding a spinlock, always defer the free to stay on the safe side
and avoid the problem in the future.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 kernel/bpf/syscall.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)
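
For context, a rough sketch of the blind spot the commit message
describes (not part of the patch; the lock and function names are
made up, only in_atomic(), spin_lock() and pr_info() are real kernel
APIs):

    #include <linux/spinlock.h>
    #include <linux/preempt.h>
    #include <linux/printk.h>

    static DEFINE_SPINLOCK(demo_lock);      /* illustrative lock */

    static void in_atomic_blind_spot(void)
    {
            spin_lock(&demo_lock);
            /*
             * On a non-preemptible kernel without CONFIG_PREEMPT_COUNT,
             * spin_lock() does not touch preempt_count(), and in_atomic()
             * is essentially a preempt_count() check, so it still reports
             * "not atomic" even though sleeping here would be a bug.
             */
            if (!in_atomic())
                    pr_info("held spinlock not detected by in_atomic()\n");
            spin_unlock(&demo_lock);
    }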

Comments

Song Liu Sept. 17, 2020, 10:37 p.m. UTC | #1
On Thu, Sep 17, 2020 at 12:46 AM Muchun Song <songmuchun@bytedance.com> wrote:
>
> The in_atomic() macro cannot always detect atomic context; in
> particular, it cannot know about held spinlocks in non-preemptible
> kernels. Although no caller currently invokes bpf_link_put() while
> holding a spinlock, always defer the free to stay on the safe side
> and avoid the problem in the future.
>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

Acked-by: Song Liu <songliubraving@fb.com>

This is a little weird, but I guess that is OK, as bpf_link_put() is
not in the critical path. Is the plan to eliminate in_atomic() (as
much as possible)?
Muchun Song Sept. 19, 2020, 5:26 a.m. UTC | #2
On Fri, Sep 18, 2020 at 6:37 AM Song Liu <song@kernel.org> wrote:
>
> On Thu, Sep 17, 2020 at 12:46 AM Muchun Song <songmuchun@bytedance.com> wrote:
> >
> > The in_atomic() macro cannot always detect atomic context; in
> > particular, it cannot know about held spinlocks in non-preemptible
> > kernels. Although no caller currently invokes bpf_link_put() while
> > holding a spinlock, always defer the free to stay on the safe side
> > and avoid the problem in the future.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>
> Acked-by: Song Liu <songliubraving@fb.com>
>
> This is a little weird, but I guess that is OK, as bpf_link_put() is
> not in the critical path.

Yeah, bpf_link_put() is OK now because no caller currently invokes it
while holding a spinlock.

> Is the plan to eliminate in_atomic() (as much as possible)?

Most other users of in_atomic() only use it for WARN_ON()-style
sanity checks, so that seems fine :).
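
A minimal illustration of that kind of use, not a quote of any
particular call site (the helper name is made up):

    #include <linux/preempt.h>
    #include <linux/bug.h>

    static void demo_sanity_check(void)
    {
            /*
             * Typical remaining in_atomic() use: a sanity check, not a
             * control-flow decision like the one this patch removes.
             */
            WARN_ON_ONCE(in_atomic());
    }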

--
Yours,
Muchun
Andrii Nakryiko Sept. 21, 2020, 5:29 p.m. UTC | #3
On Thu, Sep 17, 2020 at 12:46 AM Muchun Song <songmuchun@bytedance.com> wrote:
>
> The in_atomic() macro cannot always detect atomic context; in
> particular, it cannot know about held spinlocks in non-preemptible
> kernels. Although no caller currently invokes bpf_link_put() while
> holding a spinlock, always defer the free to stay on the safe side
> and avoid the problem in the future.
>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---

This change seems unnecessary (or at least premature): if we ever get
a use case that does bpf_link_put() under a held spinlock, we should
see a warning about that (and in that case I bet the code can be
rewritten to not hold the spinlock during bpf_link_put()). But on the
other hand it makes bpf_link_put() follow the pattern of
bpf_map_put(), which always defers the work, so I'm OK with this. As
Song mentioned, this is not called from a performance-critical hot
path, so it doesn't matter all that much.
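
For readers who have not compared the two, a rough sketch of the
"always defer" pattern meant here, with made-up names rather than the
actual bpf_map_put()/syscall.c code:

    #include <linux/atomic.h>
    #include <linux/workqueue.h>
    #include <linux/slab.h>
    #include <linux/kernel.h>

    struct demo_obj {
            atomic64_t refcnt;
            struct work_struct work;
    };

    static void demo_obj_free_deferred(struct work_struct *work)
    {
            struct demo_obj *obj = container_of(work, struct demo_obj, work);

            /* runs in process context, so freeing (and sleeping) is fine */
            kfree(obj);
    }

    static void demo_obj_put(struct demo_obj *obj)
    {
            if (!atomic64_dec_and_test(&obj->refcnt))
                    return;

            /* never free inline: callers might be in atomic context */
            INIT_WORK(&obj->work, demo_obj_free_deferred);
            schedule_work(&obj->work);
    }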

Acked-by: Andrii Nakryiko <andriin@fb.com>

>  kernel/bpf/syscall.c | 8 ++------
>  1 file changed, 2 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index 178c147350f5..6347be0a5c82 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -2345,12 +2345,8 @@ void bpf_link_put(struct bpf_link *link)
>         if (!atomic64_dec_and_test(&link->refcnt))
>                 return;
>
> -       if (in_atomic()) {
> -               INIT_WORK(&link->work, bpf_link_put_deferred);
> -               schedule_work(&link->work);
> -       } else {
> -               bpf_link_free(link);
> -       }
> +       INIT_WORK(&link->work, bpf_link_put_deferred);
> +       schedule_work(&link->work);
>  }
>
>  static int bpf_link_release(struct inode *inode, struct file *filp)
> --
> 2.20.1
>
Andrii Nakryiko Sept. 21, 2020, 6:31 p.m. UTC | #4
On Mon, Sep 21, 2020 at 10:29 AM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Thu, Sep 17, 2020 at 12:46 AM Muchun Song <songmuchun@bytedance.com> wrote:
> >
> > The in_atomic() macro cannot always detect atomic context; in
> > particular, it cannot know about held spinlocks in non-preemptible
> > kernels. Although no caller currently invokes bpf_link_put() while
> > holding a spinlock, always defer the free to stay on the safe side
> > and avoid the problem in the future.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
>
> This change seems unnecessary (or at least premature): if we ever get
> a use case that does bpf_link_put() under a held spinlock, we should
> see a warning about that (and in that case I bet the code can be
> rewritten to not hold the spinlock during bpf_link_put()). But on the
> other hand it makes bpf_link_put() follow the pattern of
> bpf_map_put(), which always defers the work, so I'm OK with this. As
> Song mentioned, this is not called from a performance-critical hot
> path, so it doesn't matter all that much.
>
> Acked-by: Andrii Nakryiko <andriin@fb.com>
>

btw, you probably need to resubmit this patch as a non-RFC one for it
to be applied?..

> >  kernel/bpf/syscall.c | 8 ++------
> >  1 file changed, 2 insertions(+), 6 deletions(-)
> >
> > diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> > index 178c147350f5..6347be0a5c82 100644
> > --- a/kernel/bpf/syscall.c
> > +++ b/kernel/bpf/syscall.c
> > @@ -2345,12 +2345,8 @@ void bpf_link_put(struct bpf_link *link)
> >         if (!atomic64_dec_and_test(&link->refcnt))
> >                 return;
> >
> > -       if (in_atomic()) {
> > -               INIT_WORK(&link->work, bpf_link_put_deferred);
> > -               schedule_work(&link->work);
> > -       } else {
> > -               bpf_link_free(link);
> > -       }
> > +       INIT_WORK(&link->work, bpf_link_put_deferred);
> > +       schedule_work(&link->work);
> >  }
> >
> >  static int bpf_link_release(struct inode *inode, struct file *filp)
> > --
> > 2.20.1
> >
Daniel Borkmann Sept. 21, 2020, 7:27 p.m. UTC | #5
On 9/21/20 8:31 PM, Andrii Nakryiko wrote:
> On Mon, Sep 21, 2020 at 10:29 AM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
>>
>> On Thu, Sep 17, 2020 at 12:46 AM Muchun Song <songmuchun@bytedance.com> wrote:
>>>
>>> The in_atomic() macro cannot always detect atomic context; in
>>> particular, it cannot know about held spinlocks in non-preemptible
>>> kernels. Although no caller currently invokes bpf_link_put() while
>>> holding a spinlock, always defer the free to stay on the safe side
>>> and avoid the problem in the future.
>>>
>>> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>>> ---
>>
>> This change seems unnecessary (or at least premature): if we ever get
>> a use case that does bpf_link_put() under a held spinlock, we should
>> see a warning about that (and in that case I bet the code can be
>> rewritten to not hold the spinlock during bpf_link_put()). But on the
>> other hand it makes bpf_link_put() follow the pattern of
>> bpf_map_put(), which always defers the work, so I'm OK with this. As
>> Song mentioned, this is not called from a performance-critical hot
>> path, so it doesn't matter all that much.
>>
>> Acked-by: Andrii Nakryiko <andriin@fb.com>

Agree, SGTM.

> btw, you probably need to resubmit this patch as a non-RFC one for it
> to be applied?..

Given this is a first-time BPF contributor and the patch already has
several ACKs, I took it into bpf-next, thanks!

Patch

diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 178c147350f5..6347be0a5c82 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -2345,12 +2345,8 @@  void bpf_link_put(struct bpf_link *link)
 	if (!atomic64_dec_and_test(&link->refcnt))
 		return;
 
-	if (in_atomic()) {
-		INIT_WORK(&link->work, bpf_link_put_deferred);
-		schedule_work(&link->work);
-	} else {
-		bpf_link_free(link);
-	}
+	INIT_WORK(&link->work, bpf_link_put_deferred);
+	schedule_work(&link->work);
 }
 
 static int bpf_link_release(struct inode *inode, struct file *filp)
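
For illustration only (as noted in the thread, no in-tree caller
currently does this, and the holder type below is made up): with the
unconditional deferral above, a caller like this would be safe even
on non-preemptible kernels, because bpf_link_put() now only queues
work and schedule_work() never sleeps.

    #include <linux/bpf.h>
    #include <linux/spinlock.h>

    struct demo_holder {                    /* made-up type, not in the tree */
            spinlock_t lock;
            struct bpf_link *link;
    };

    static void demo_drop_link_locked(struct demo_holder *h)
    {
            spin_lock(&h->lock);
            bpf_link_put(h->link);          /* queues deferred free, never sleeps */
            h->link = NULL;
            spin_unlock(&h->lock);
    }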