
bpf: devmap: Pass lockdep expression to RCU lists

Message ID 20200123120437.26506-1-frextrite@gmail.com
State Accepted
Delegated to: BPF Maintainers
Series: bpf: devmap: Pass lockdep expression to RCU lists

Commit Message

Amol Grover Jan. 23, 2020, 12:04 p.m. UTC
head is traversed using hlist_for_each_entry_rcu outside an
RCU read-side critical section but under the protection
of dtab->index_lock.

Hence, add corresponding lockdep expression to silence false-positive
lockdep warnings, and harden RCU lists.

Signed-off-by: Amol Grover <frextrite@gmail.com>
---
 kernel/bpf/devmap.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

Comments

Jesper Dangaard Brouer Jan. 23, 2020, 1:37 p.m. UTC | #1
On Thu, 23 Jan 2020 17:34:38 +0530
Amol Grover <frextrite@gmail.com> wrote:

> head is traversed using hlist_for_each_entry_rcu outside an
> RCU read-side critical section but under the protection
> of dtab->index_lock.

We do hold the lock in update and delete cases, but not in the lookup
cases.  Is it then still okay to add the lockdep_is_held() annotation?

If it is then it looks fine to me:

Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
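
(For reference, the update/delete context referred to above looks roughly
like this; paraphrased from kernel/bpf/devmap.c around v5.5, so details
may differ:)

	/* Control path, e.g. dev_map_hash_delete_elem(): the hash bucket is
	 * modified while holding dtab->index_lock, so the lookup helper can
	 * be reached with the spinlock held but without rcu_read_lock().
	 */
	spin_lock_irqsave(&dtab->index_lock, flags);
	old_dev = __dev_map_hash_lookup_elem(map, k);
	if (old_dev)
		hlist_del_init_rcu(&old_dev->index_hlist);
	spin_unlock_irqrestore(&dtab->index_lock, flags);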
 
> Hence, add corresponding lockdep expression to silence false-positive
> lockdep warnings, and harden RCU lists.
> 
> Signed-off-by: Amol Grover <frextrite@gmail.com>
> ---
>  kernel/bpf/devmap.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
> index 3d3d61b5985b..b4b6b77f309c 100644
> --- a/kernel/bpf/devmap.c
> +++ b/kernel/bpf/devmap.c
> @@ -293,7 +293,8 @@ struct bpf_dtab_netdev *__dev_map_hash_lookup_elem(struct bpf_map *map, u32 key)
>  	struct hlist_head *head = dev_map_index_hash(dtab, key);
>  	struct bpf_dtab_netdev *dev;
>  
> -	hlist_for_each_entry_rcu(dev, head, index_hlist)
> +	hlist_for_each_entry_rcu(dev, head, index_hlist,
> +				 lockdep_is_held(&dtab->index_lock))
>  		if (dev->idx == key)
>  			return dev;
>
Toke Høiland-Jørgensen Jan. 23, 2020, 1:38 p.m. UTC | #2
Amol Grover <frextrite@gmail.com> writes:

> head is traversed using hlist_for_each_entry_rcu outside an
> RCU read-side critical section but under the protection
> of dtab->index_lock.
>
> Hence, add corresponding lockdep expression to silence false-positive
> lockdep warnings, and harden RCU lists.
>
> Signed-off-by: Amol Grover <frextrite@gmail.com>

Could you please add an appropriate Fixes: tag?

Otherwise:
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Toke Høiland-Jørgensen Jan. 23, 2020, 1:42 p.m. UTC | #3
Jesper Dangaard Brouer <brouer@redhat.com> writes:

> On Thu, 23 Jan 2020 17:34:38 +0530
> Amol Grover <frextrite@gmail.com> wrote:
>
>> head is traversed using hlist_for_each_entry_rcu outside an
>> RCU read-side critical section but under the protection
>> of dtab->index_lock.
>
> We do hold the lock in update and delete cases, but not in the lookup
> cases.  Is it then still okay to add the lockdep_is_held() annotation?

I concluded 'yes' from the comment on hlist_for_each_entry_rcu():

The lockdep condition gets passed to this:

#define __list_check_rcu(dummy, cond, extra...)				\
	({								\
	check_arg_count_one(extra);					\
	RCU_LOCKDEP_WARN(!cond && !rcu_read_lock_any_held(),		\
			 "RCU-list traversed in non-reader section!");	\
	 })
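
(The cond argument reaches __list_check_rcu() via the traversal macro
itself; roughly, paraphrased from include/linux/rculist.h around v5.5,
the exact definition may differ:)

#define hlist_for_each_entry_rcu(pos, head, member, cond...)		\
	for (__list_check_rcu(dummy, ## cond, 0),			\
	     pos = hlist_entry_safe(rcu_dereference_raw(hlist_first_rcu(head)),\
			typeof(*(pos)), member);			\
		pos;							\
		pos = hlist_entry_safe(rcu_dereference_raw(hlist_next_rcu(\
			&(pos)->member)), typeof(*(pos)), member))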


so that seems fine :)

-Toke
Daniel Borkmann Jan. 23, 2020, 3:59 p.m. UTC | #4
On 1/23/20 2:38 PM, Toke Høiland-Jørgensen wrote:
> Amol Grover <frextrite@gmail.com> writes:
> 
>> head is traversed using hlist_for_each_entry_rcu outside an
>> RCU read-side critical section but under the protection
>> of dtab->index_lock.
>>
>> Hence, add corresponding lockdep expression to silence false-positive
>> lockdep warnings, and harden RCU lists.
>>
>> Signed-off-by: Amol Grover <frextrite@gmail.com>
> 
> Could you please add an appropriate Fixes: tag?

+1, please reply with Fixes: tag (no need to resend).
Amol Grover Jan. 23, 2020, 5:10 p.m. UTC | #5
On Thu, Jan 23, 2020 at 02:42:03PM +0100, Toke Høiland-Jørgensen wrote:
> Jesper Dangaard Brouer <brouer@redhat.com> writes:
> 
> > On Thu, 23 Jan 2020 17:34:38 +0530
> > Amol Grover <frextrite@gmail.com> wrote:
> >
> >> head is traversed using hlist_for_each_entry_rcu outside an
> >> RCU read-side critical section but under the protection
> >> of dtab->index_lock.
> >
> > We do hold the lock in update and delete cases, but not in the lookup
> > cases.  Is it then still okay to add the lockdep_is_held() annotation?
> 
> I concluded 'yes' from the comment on hlist_for_each_entry_rcu():
> 
> The lockdep condition gets passed to this:
> 
> #define __list_check_rcu(dummy, cond, extra...)				\
> 	({								\
> 	check_arg_count_one(extra);					\
> 	RCU_LOCKDEP_WARN(!cond && !rcu_read_lock_any_held(),		\
> 			 "RCU-list traversed in non-reader section!");	\
> 	 })
> 
> 
> so that seems fine :)
> 

Yes, adding a lockdep expression is okay. This is because
hlist_for_each_entry_rcu() already performs an implicit check that the
traversal runs inside an RCU read-side critical section. When the
traversal happens outside an RCU read-side critical section, the lockdep
expression makes sure it is done under the mentioned lock instead.
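
Put differently, with the annotation the check for this call site
effectively becomes (illustrative expansion only, derived from the
__list_check_rcu() macro quoted above):

	RCU_LOCKDEP_WARN(!lockdep_is_held(&dtab->index_lock) &&
			 !rcu_read_lock_any_held(),
			 "RCU-list traversed in non-reader section!");

so lockdep only complains if the traversal runs with neither protection
in place.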

Thanks
Amol

> -Toke
>
Amol Grover Jan. 23, 2020, 5:18 p.m. UTC | #6
On Thu, Jan 23, 2020 at 05:34:38PM +0530, Amol Grover wrote:
> head is traversed using hlist_for_each_entry_rcu outside an
> RCU read-side critical section but under the protection
> of dtab->index_lock.
> 
> Hence, add corresponding lockdep expression to silence false-positive
> lockdep warnings, and harden RCU lists.
> 
Fixes: 6f9d451ab1a3 ("xdp: Add devmap_hash map type for looking up devices by hashed index")
> Signed-off-by: Amol Grover <frextrite@gmail.com>
> ---
>  kernel/bpf/devmap.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
> index 3d3d61b5985b..b4b6b77f309c 100644
> --- a/kernel/bpf/devmap.c
> +++ b/kernel/bpf/devmap.c
> @@ -293,7 +293,8 @@ struct bpf_dtab_netdev *__dev_map_hash_lookup_elem(struct bpf_map *map, u32 key)
>  	struct hlist_head *head = dev_map_index_hash(dtab, key);
>  	struct bpf_dtab_netdev *dev;
>  
> -	hlist_for_each_entry_rcu(dev, head, index_hlist)
> +	hlist_for_each_entry_rcu(dev, head, index_hlist,
> +				 lockdep_is_held(&dtab->index_lock))
>  		if (dev->idx == key)
>  			return dev;
>  
> -- 
> 2.24.1
>
Daniel Borkmann Jan. 23, 2020, 10:04 p.m. UTC | #7
On 1/23/20 6:18 PM, Amol Grover wrote:
> On Thu, Jan 23, 2020 at 05:34:38PM +0530, Amol Grover wrote:
>> head is traversed using hlist_for_each_entry_rcu outside an
>> RCU read-side critical section but under the protection
>> of dtab->index_lock.
>>
>> Hence, add corresponding lockdep expression to silence false-positive
>> lockdep warnings, and harden RCU lists.
>>
> Fixes: 6f9d451ab1a3 ("xdp: Add devmap_hash map type for looking up devices by hashed index")
>> Signed-off-by: Amol Grover <frextrite@gmail.com>

Applied, thanks!

Patch

diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
index 3d3d61b5985b..b4b6b77f309c 100644
--- a/kernel/bpf/devmap.c
+++ b/kernel/bpf/devmap.c
@@ -293,7 +293,8 @@  struct bpf_dtab_netdev *__dev_map_hash_lookup_elem(struct bpf_map *map, u32 key)
 	struct hlist_head *head = dev_map_index_hash(dtab, key);
 	struct bpf_dtab_netdev *dev;
 
-	hlist_for_each_entry_rcu(dev, head, index_hlist)
+	hlist_for_each_entry_rcu(dev, head, index_hlist,
+				 lockdep_is_held(&dtab->index_lock))
 		if (dev->idx == key)
 			return dev;