[v1,1/6] rcu: Add support for consolidated-RCU reader checking

Message ID 20190711234401.220336-2-joel@joelfernandes.org
State Not Applicable
Series Harden list_for_each_entry_rcu() and family

Commit Message

Joel Fernandes July 11, 2019, 11:43 p.m. UTC
This patch adds support for checking RCU reader sections in list
traversal macros. Optionally, if the list macro is called under SRCU or
other lock/mutex protection, then appropriate lockdep expressions can be
passed to make the checks pass.

Existing list_for_each_entry_rcu() invocations don't need to pass the
optional fourth argument (cond) unless they are under some non-RCU
protection and need to make the lockdep check pass.
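
For example, a traversal that is protected by a lock instead of
rcu_read_lock() can pass a lockdep expression as the new optional
argument; the list and mutex names below are made up, purely for
illustration:

	/* mylist is only ever modified with my_mutex held, so holding
	 * the mutex makes the traversal safe outside rcu_read_lock(). */
	list_for_each_entry_rcu(e, &mylist, list,
				lockdep_is_held(&my_mutex))
		do_something(e);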

Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
---
 include/linux/rculist.h  | 29 ++++++++++++++++++++++++-----
 include/linux/rcupdate.h |  7 +++++++
 kernel/rcu/Kconfig.debug | 11 +++++++++++
 kernel/rcu/update.c      | 26 ++++++++++++++++++++++++++
 4 files changed, 68 insertions(+), 5 deletions(-)

Comments

Joel Fernandes July 12, 2019, 4:49 a.m. UTC | #1
On Thu, Jul 11, 2019 at 07:43:56PM -0400, Joel Fernandes (Google) wrote:
> This patch adds support for checking RCU reader sections in list
> traversal macros. Optionally, if the list macro is called under SRCU or
> other lock/mutex protection, then appropriate lockdep expressions can be
> passed to make the checks pass.
> 
> Existing list_for_each_entry_rcu() invocations don't need to pass the
> optional fourth argument (cond) unless they are under some non-RCU
> protection and needs to make lockdep check pass.
> 
> Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
> ---
>  include/linux/rculist.h  | 29 ++++++++++++++++++++++++-----
>  include/linux/rcupdate.h |  7 +++++++
>  kernel/rcu/Kconfig.debug | 11 +++++++++++
>  kernel/rcu/update.c      | 26 ++++++++++++++++++++++++++
>  4 files changed, 68 insertions(+), 5 deletions(-)
> 
> diff --git a/include/linux/rculist.h b/include/linux/rculist.h
> index e91ec9ddcd30..78c15ec6b2c9 100644
> --- a/include/linux/rculist.h
> +++ b/include/linux/rculist.h
> @@ -40,6 +40,23 @@ static inline void INIT_LIST_HEAD_RCU(struct list_head *list)
>   */
>  #define list_next_rcu(list)	(*((struct list_head __rcu **)(&(list)->next)))
>  
> +/*
> + * Check during list traversal that we are within an RCU reader
> + */
> +
> +#define SIXTH_ARG(a1, a2, a3, a4, a5, a6, ...) a6
> +#define COUNT_VARGS(...) SIXTH_ARG(dummy, ## __VA_ARGS__, 4, 3, 2, 1, 0)

Fyi, I made a cosmetic change by deleting the above 2 unused macros.

- Joel
Peter Zijlstra July 12, 2019, 11:01 a.m. UTC | #2
On Thu, Jul 11, 2019 at 07:43:56PM -0400, Joel Fernandes (Google) wrote:
> This patch adds support for checking RCU reader sections in list
> traversal macros. Optionally, if the list macro is called under SRCU or
> other lock/mutex protection, then appropriate lockdep expressions can be
> passed to make the checks pass.
> 
> Existing list_for_each_entry_rcu() invocations don't need to pass the
> optional fourth argument (cond) unless they are under some non-RCU
> protection and needs to make lockdep check pass.
> 
> Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
> ---
>  include/linux/rculist.h  | 29 ++++++++++++++++++++++++-----
>  include/linux/rcupdate.h |  7 +++++++
>  kernel/rcu/Kconfig.debug | 11 +++++++++++
>  kernel/rcu/update.c      | 26 ++++++++++++++++++++++++++
>  4 files changed, 68 insertions(+), 5 deletions(-)
> 
> diff --git a/include/linux/rculist.h b/include/linux/rculist.h
> index e91ec9ddcd30..78c15ec6b2c9 100644
> --- a/include/linux/rculist.h
> +++ b/include/linux/rculist.h
> @@ -40,6 +40,23 @@ static inline void INIT_LIST_HEAD_RCU(struct list_head *list)
>   */
>  #define list_next_rcu(list)	(*((struct list_head __rcu **)(&(list)->next)))
>  
> +/*
> + * Check during list traversal that we are within an RCU reader
> + */
> +
> +#define SIXTH_ARG(a1, a2, a3, a4, a5, a6, ...) a6
> +#define COUNT_VARGS(...) SIXTH_ARG(dummy, ## __VA_ARGS__, 4, 3, 2, 1, 0)

You don't seem to actually use it in this patch; also linux/kernel.h has
COUNT_ARGS().
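
For reference, the existing helper there looks roughly like this (quoting
from memory, so double-check the exact form):

	#define __COUNT_ARGS(_0, _1, _2, _3, _4, _5, _6, _7, _8, _9, _10, _11, _12, _n, X...) _n
	#define COUNT_ARGS(X...) __COUNT_ARGS(, ##X, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0)

so there should be no need to open-code a SIXTH_ARG/COUNT_VARGS variant.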
Peter Zijlstra July 12, 2019, 11:11 a.m. UTC | #3
On Thu, Jul 11, 2019 at 07:43:56PM -0400, Joel Fernandes (Google) wrote:
> +int rcu_read_lock_any_held(void)
> +{
> +	int lockdep_opinion = 0;
> +
> +	if (!debug_lockdep_rcu_enabled())
> +		return 1;
> +	if (!rcu_is_watching())
> +		return 0;
> +	if (!rcu_lockdep_current_cpu_online())
> +		return 0;
> +
> +	/* Preemptible RCU flavor */
> +	if (lock_is_held(&rcu_lock_map))

you forgot debug_locks here.

> +		return 1;
> +
> +	/* BH flavor */
> +	if (in_softirq() || irqs_disabled())

I'm not sure I'd put irqs_disabled() under BH, also this entire
condition is superfluous, see below.

> +		return 1;
> +
> +	/* Sched flavor */
> +	if (debug_locks)
> +		lockdep_opinion = lock_is_held(&rcu_sched_lock_map);
> +	return lockdep_opinion || !preemptible();

that !preemptible() turns into:

  !(preempt_count()==0 && !irqs_disabled())

which is:

  preempt_count() != 0 || irqs_disabled()

and already includes irqs_disabled() and in_softirq().
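
(That is assuming the usual CONFIG_PREEMPT_COUNT definition from
include/linux/preempt.h, roughly:

	#define preemptible()	(preempt_count() == 0 && !irqs_disabled())

and in_softirq() only looks at the softirq bits of preempt_count(), so it
is covered by the preempt_count() != 0 half.)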

> +}

So maybe something like:

	if (debug_locks && (lock_is_held(&rcu_lock_map) ||
			    lock_is_held(&rcu_sched_lock_map)))
		return true;

	return !preemptible();
Oleg Nesterov July 12, 2019, 12:12 p.m. UTC | #4
On 07/11, Joel Fernandes (Google) wrote:
>
> +int rcu_read_lock_any_held(void)

rcu_sync_is_idle() wants it. You have my ack in advance ;)

Oleg.
Joel Fernandes July 12, 2019, 1:56 p.m. UTC | #5
On Fri, Jul 12, 2019 at 02:12:00PM +0200, Oleg Nesterov wrote:
> On 07/11, Joel Fernandes (Google) wrote:
> >
> > +int rcu_read_lock_any_held(void)
> 
> rcu_sync_is_idle() wants it. You have my ack in advance ;)

Cool, thanks ;)

- Joel
Joel Fernandes July 12, 2019, 2:49 p.m. UTC | #6
On Fri, Jul 12, 2019 at 01:01:42PM +0200, Peter Zijlstra wrote:
> On Thu, Jul 11, 2019 at 07:43:56PM -0400, Joel Fernandes (Google) wrote:
> > This patch adds support for checking RCU reader sections in list
> > traversal macros. Optionally, if the list macro is called under SRCU or
> > other lock/mutex protection, then appropriate lockdep expressions can be
> > passed to make the checks pass.
> > 
> > Existing list_for_each_entry_rcu() invocations don't need to pass the
> > optional fourth argument (cond) unless they are under some non-RCU
> > protection and needs to make lockdep check pass.
> > 
> > Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
> > ---
> >  include/linux/rculist.h  | 29 ++++++++++++++++++++++++-----
> >  include/linux/rcupdate.h |  7 +++++++
> >  kernel/rcu/Kconfig.debug | 11 +++++++++++
> >  kernel/rcu/update.c      | 26 ++++++++++++++++++++++++++
> >  4 files changed, 68 insertions(+), 5 deletions(-)
> > 
> > diff --git a/include/linux/rculist.h b/include/linux/rculist.h
> > index e91ec9ddcd30..78c15ec6b2c9 100644
> > --- a/include/linux/rculist.h
> > +++ b/include/linux/rculist.h
> > @@ -40,6 +40,23 @@ static inline void INIT_LIST_HEAD_RCU(struct list_head *list)
> >   */
> >  #define list_next_rcu(list)	(*((struct list_head __rcu **)(&(list)->next)))
> >  
> > +/*
> > + * Check during list traversal that we are within an RCU reader
> > + */
> > +
> > +#define SIXTH_ARG(a1, a2, a3, a4, a5, a6, ...) a6
> > +#define COUNT_VARGS(...) SIXTH_ARG(dummy, ## __VA_ARGS__, 4, 3, 2, 1, 0)
> 
> You don't seem to actually use it in this patch; also linux/kernel.h has
> COUNT_ARGS().

Yes, I replied after sending the patches to say that I had fixed this. I
will remove them.


thanks,

 - Joel
Joel Fernandes July 12, 2019, 3:10 p.m. UTC | #7
On Fri, Jul 12, 2019 at 01:11:25PM +0200, Peter Zijlstra wrote:
> On Thu, Jul 11, 2019 at 07:43:56PM -0400, Joel Fernandes (Google) wrote:
> > +int rcu_read_lock_any_held(void)
> > +{
> > +	int lockdep_opinion = 0;
> > +
> > +	if (!debug_lockdep_rcu_enabled())
> > +		return 1;
> > +	if (!rcu_is_watching())
> > +		return 0;
> > +	if (!rcu_lockdep_current_cpu_online())
> > +		return 0;
> > +
> > +	/* Preemptible RCU flavor */
> > +	if (lock_is_held(&rcu_lock_map))
> 
> you forgot debug_locks here.

Actually, it turns out debug_locks checking is not even needed. If
debug_locks == 0, then debug_lockdep_rcu_enabled() returns 0 and we would not
get to this point.
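
(debug_lockdep_rcu_enabled() is roughly:

	int notrace debug_lockdep_rcu_enabled(void)
	{
		return rcu_scheduler_active != RCU_SCHEDULER_INACTIVE &&
		       debug_locks && current->lockdep_recursion == 0;
	}

so a separate debug_locks test after that early return would be redundant.)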

> > +		return 1;
> > +
> > +	/* BH flavor */
> > +	if (in_softirq() || irqs_disabled())
> 
> I'm not sure I'd put irqs_disabled() under BH, also this entire
> condition is superfluous, see below.
> 
> > +		return 1;
> > +
> > +	/* Sched flavor */
> > +	if (debug_locks)
> > +		lockdep_opinion = lock_is_held(&rcu_sched_lock_map);
> > +	return lockdep_opinion || !preemptible();
> 
> that !preemptible() turns into:
> 
>   !(preempt_count()==0 && !irqs_disabled())
> 
> which is:
> 
>   preempt_count() != 0 || irqs_disabled()
> 
> and already includes irqs_disabled() and in_softirq().
> 
> > +}
> 
> So maybe something lke:
> 
> 	if (debug_locks && (lock_is_held(&rcu_lock_map) ||
> 			    lock_is_held(&rcu_sched_lock_map)))
> 		return true;

Agreed, I will do it this way (without the debug_locks) like:

---8<-----------------------

diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index ba861d1716d3..339aebc330db 100644
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -296,27 +296,15 @@ EXPORT_SYMBOL_GPL(rcu_read_lock_bh_held);
 
 int rcu_read_lock_any_held(void)
 {
-	int lockdep_opinion = 0;
-
 	if (!debug_lockdep_rcu_enabled())
 		return 1;
 	if (!rcu_is_watching())
 		return 0;
 	if (!rcu_lockdep_current_cpu_online())
 		return 0;
-
-	/* Preemptible RCU flavor */
-	if (lock_is_held(&rcu_lock_map))
-		return 1;
-
-	/* BH flavor */
-	if (in_softirq() || irqs_disabled())
-		return 1;
-
-	/* Sched flavor */
-	if (debug_locks)
-		lockdep_opinion = lock_is_held(&rcu_sched_lock_map);
-	return lockdep_opinion || !preemptible();
+	if (lock_is_held(&rcu_lock_map) || lock_is_held(&rcu_sched_lock_map))
+		return 1;
+	return !preemptible();
 }
 EXPORT_SYMBOL_GPL(rcu_read_lock_any_held);
Peter Zijlstra July 12, 2019, 3:58 p.m. UTC | #8
On Fri, Jul 12, 2019 at 11:10:51AM -0400, Joel Fernandes wrote:
> Agreed, I will do it this way (without the debug_locks) like:
> 
> ---8<-----------------------
> 
> diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> index ba861d1716d3..339aebc330db 100644
> --- a/kernel/rcu/update.c
> +++ b/kernel/rcu/update.c
> @@ -296,27 +296,15 @@ EXPORT_SYMBOL_GPL(rcu_read_lock_bh_held);
>  
>  int rcu_read_lock_any_held(void)
>  {
>  	if (!debug_lockdep_rcu_enabled())
>  		return 1;
>  	if (!rcu_is_watching())
>  		return 0;
>  	if (!rcu_lockdep_current_cpu_online())
>  		return 0;
> +	if (lock_is_held(&rcu_lock_map) || lock_is_held(&rcu_sched_lock_map))
> +		return 1;
> +	return !preemptible();
>  }
>  EXPORT_SYMBOL_GPL(rcu_read_lock_any_held);

OK, that looks sane. Thanks!
Paul E. McKenney July 12, 2019, 4:45 p.m. UTC | #9
On Fri, Jul 12, 2019 at 11:10:51AM -0400, Joel Fernandes wrote:
> On Fri, Jul 12, 2019 at 01:11:25PM +0200, Peter Zijlstra wrote:
> > On Thu, Jul 11, 2019 at 07:43:56PM -0400, Joel Fernandes (Google) wrote:
> > > +int rcu_read_lock_any_held(void)
> > > +{
> > > +	int lockdep_opinion = 0;
> > > +
> > > +	if (!debug_lockdep_rcu_enabled())
> > > +		return 1;
> > > +	if (!rcu_is_watching())
> > > +		return 0;
> > > +	if (!rcu_lockdep_current_cpu_online())
> > > +		return 0;
> > > +
> > > +	/* Preemptible RCU flavor */
> > > +	if (lock_is_held(&rcu_lock_map))
> > 
> > you forgot debug_locks here.
> 
> Actually, it turns out debug_locks checking is not even needed. If
> debug_locks == 0, then debug_lockdep_rcu_enabled() returns 0 and we would not
> get to this point.
> 
> > > +		return 1;
> > > +
> > > +	/* BH flavor */
> > > +	if (in_softirq() || irqs_disabled())
> > 
> > I'm not sure I'd put irqs_disabled() under BH, also this entire
> > condition is superfluous, see below.
> > 
> > > +		return 1;
> > > +
> > > +	/* Sched flavor */
> > > +	if (debug_locks)
> > > +		lockdep_opinion = lock_is_held(&rcu_sched_lock_map);
> > > +	return lockdep_opinion || !preemptible();
> > 
> > that !preemptible() turns into:
> > 
> >   !(preempt_count()==0 && !irqs_disabled())
> > 
> > which is:
> > 
> >   preempt_count() != 0 || irqs_disabled()
> > 
> > and already includes irqs_disabled() and in_softirq().
> > 
> > > +}
> > 
> > So maybe something lke:
> > 
> > 	if (debug_locks && (lock_is_held(&rcu_lock_map) ||
> > 			    lock_is_held(&rcu_sched_lock_map)))
> > 		return true;
> 
> Agreed, I will do it this way (without the debug_locks) like:
> 
> ---8<-----------------------
> 
> diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> index ba861d1716d3..339aebc330db 100644
> --- a/kernel/rcu/update.c
> +++ b/kernel/rcu/update.c
> @@ -296,27 +296,15 @@ EXPORT_SYMBOL_GPL(rcu_read_lock_bh_held);
>  
>  int rcu_read_lock_any_held(void)
>  {
> -	int lockdep_opinion = 0;
> -
>  	if (!debug_lockdep_rcu_enabled())
>  		return 1;
>  	if (!rcu_is_watching())
>  		return 0;
>  	if (!rcu_lockdep_current_cpu_online())
>  		return 0;
> -
> -	/* Preemptible RCU flavor */
> -	if (lock_is_held(&rcu_lock_map))
> -		return 1;
> -
> -	/* BH flavor */
> -	if (in_softirq() || irqs_disabled())
> -		return 1;
> -
> -	/* Sched flavor */
> -	if (debug_locks)
> -		lockdep_opinion = lock_is_held(&rcu_sched_lock_map);
> -	return lockdep_opinion || !preemptible();
> +	if (lock_is_held(&rcu_lock_map) || lock_is_held(&rcu_sched_lock_map))

OK, I will bite...  Why not also lock_is_held(&rcu_bh_lock_map)?

							Thanx, Paul

> +		return 1;
> +	return !preemptible();
>  }
>  EXPORT_SYMBOL_GPL(rcu_read_lock_any_held);
>  
>
Joel Fernandes July 12, 2019, 5:06 p.m. UTC | #10
On Fri, Jul 12, 2019 at 09:45:31AM -0700, Paul E. McKenney wrote:
> On Fri, Jul 12, 2019 at 11:10:51AM -0400, Joel Fernandes wrote:
> > On Fri, Jul 12, 2019 at 01:11:25PM +0200, Peter Zijlstra wrote:
> > > On Thu, Jul 11, 2019 at 07:43:56PM -0400, Joel Fernandes (Google) wrote:
> > > > +int rcu_read_lock_any_held(void)
> > > > +{
> > > > +	int lockdep_opinion = 0;
> > > > +
> > > > +	if (!debug_lockdep_rcu_enabled())
> > > > +		return 1;
> > > > +	if (!rcu_is_watching())
> > > > +		return 0;
> > > > +	if (!rcu_lockdep_current_cpu_online())
> > > > +		return 0;
> > > > +
> > > > +	/* Preemptible RCU flavor */
> > > > +	if (lock_is_held(&rcu_lock_map))
> > > 
> > > you forgot debug_locks here.
> > 
> > Actually, it turns out debug_locks checking is not even needed. If
> > debug_locks == 0, then debug_lockdep_rcu_enabled() returns 0 and we would not
> > get to this point.
> > 
> > > > +		return 1;
> > > > +
> > > > +	/* BH flavor */
> > > > +	if (in_softirq() || irqs_disabled())
> > > 
> > > I'm not sure I'd put irqs_disabled() under BH, also this entire
> > > condition is superfluous, see below.
> > > 
> > > > +		return 1;
> > > > +
> > > > +	/* Sched flavor */
> > > > +	if (debug_locks)
> > > > +		lockdep_opinion = lock_is_held(&rcu_sched_lock_map);
> > > > +	return lockdep_opinion || !preemptible();
> > > 
> > > that !preemptible() turns into:
> > > 
> > >   !(preempt_count()==0 && !irqs_disabled())
> > > 
> > > which is:
> > > 
> > >   preempt_count() != 0 || irqs_disabled()
> > > 
> > > and already includes irqs_disabled() and in_softirq().
> > > 
> > > > +}
> > > 
> > > So maybe something lke:
> > > 
> > > 	if (debug_locks && (lock_is_held(&rcu_lock_map) ||
> > > 			    lock_is_held(&rcu_sched_lock_map)))
> > > 		return true;
> > 
> > Agreed, I will do it this way (without the debug_locks) like:
> > 
> > ---8<-----------------------
> > 
> > diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> > index ba861d1716d3..339aebc330db 100644
> > --- a/kernel/rcu/update.c
> > +++ b/kernel/rcu/update.c
> > @@ -296,27 +296,15 @@ EXPORT_SYMBOL_GPL(rcu_read_lock_bh_held);
> >  
> >  int rcu_read_lock_any_held(void)
> >  {
> > -	int lockdep_opinion = 0;
> > -
> >  	if (!debug_lockdep_rcu_enabled())
> >  		return 1;
> >  	if (!rcu_is_watching())
> >  		return 0;
> >  	if (!rcu_lockdep_current_cpu_online())
> >  		return 0;
> > -
> > -	/* Preemptible RCU flavor */
> > -	if (lock_is_held(&rcu_lock_map))
> > -		return 1;
> > -
> > -	/* BH flavor */
> > -	if (in_softirq() || irqs_disabled())
> > -		return 1;
> > -
> > -	/* Sched flavor */
> > -	if (debug_locks)
> > -		lockdep_opinion = lock_is_held(&rcu_sched_lock_map);
> > -	return lockdep_opinion || !preemptible();
> > +	if (lock_is_held(&rcu_lock_map) || lock_is_held(&rcu_sched_lock_map))
> 
> OK, I will bite...  Why not also lock_is_held(&rcu_bh_lock_map)?

Hmm, I was borrowing the strategy from rcu_read_lock_bh_held() which does not
check for a lock held in this map.

Honestly, even lock_is_held(&rcu_sched_lock_map) seems unnecessary per se,
since !preemptible() will catch that: rcu_read_lock_sched() disables
preemption already, so lockdep's opinion of the matter seems redundant there.
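
(For reference, rcu_read_lock_sched() is essentially preempt_disable()
plus the lockdep annotation, roughly:

	static __always_inline void rcu_read_lock_sched(void)
	{
		preempt_disable();
		__acquire(RCU_SCHED);
		rcu_lock_acquire(&rcu_sched_lock_map);
		RCU_LOCKDEP_WARN(!rcu_is_watching(),
				 "rcu_read_lock_sched() used illegally while idle");
	}

so any such reader is already !preemptible().)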

Sorry I already sent out patches again before seeing your comment but I can
rework and resend them based on any other suggestions.

thanks,

 - Joel
Paul E. McKenney July 12, 2019, 5:46 p.m. UTC | #11
On Fri, Jul 12, 2019 at 01:06:31PM -0400, Joel Fernandes wrote:
> On Fri, Jul 12, 2019 at 09:45:31AM -0700, Paul E. McKenney wrote:
> > On Fri, Jul 12, 2019 at 11:10:51AM -0400, Joel Fernandes wrote:
> > > On Fri, Jul 12, 2019 at 01:11:25PM +0200, Peter Zijlstra wrote:
> > > > On Thu, Jul 11, 2019 at 07:43:56PM -0400, Joel Fernandes (Google) wrote:
> > > > > +int rcu_read_lock_any_held(void)
> > > > > +{
> > > > > +	int lockdep_opinion = 0;
> > > > > +
> > > > > +	if (!debug_lockdep_rcu_enabled())
> > > > > +		return 1;
> > > > > +	if (!rcu_is_watching())
> > > > > +		return 0;
> > > > > +	if (!rcu_lockdep_current_cpu_online())
> > > > > +		return 0;
> > > > > +
> > > > > +	/* Preemptible RCU flavor */
> > > > > +	if (lock_is_held(&rcu_lock_map))
> > > > 
> > > > you forgot debug_locks here.
> > > 
> > > Actually, it turns out debug_locks checking is not even needed. If
> > > debug_locks == 0, then debug_lockdep_rcu_enabled() returns 0 and we would not
> > > get to this point.
> > > 
> > > > > +		return 1;
> > > > > +
> > > > > +	/* BH flavor */
> > > > > +	if (in_softirq() || irqs_disabled())
> > > > 
> > > > I'm not sure I'd put irqs_disabled() under BH, also this entire
> > > > condition is superfluous, see below.
> > > > 
> > > > > +		return 1;
> > > > > +
> > > > > +	/* Sched flavor */
> > > > > +	if (debug_locks)
> > > > > +		lockdep_opinion = lock_is_held(&rcu_sched_lock_map);
> > > > > +	return lockdep_opinion || !preemptible();
> > > > 
> > > > that !preemptible() turns into:
> > > > 
> > > >   !(preempt_count()==0 && !irqs_disabled())
> > > > 
> > > > which is:
> > > > 
> > > >   preempt_count() != 0 || irqs_disabled()
> > > > 
> > > > and already includes irqs_disabled() and in_softirq().
> > > > 
> > > > > +}
> > > > 
> > > > So maybe something lke:
> > > > 
> > > > 	if (debug_locks && (lock_is_held(&rcu_lock_map) ||
> > > > 			    lock_is_held(&rcu_sched_lock_map)))
> > > > 		return true;
> > > 
> > > Agreed, I will do it this way (without the debug_locks) like:
> > > 
> > > ---8<-----------------------
> > > 
> > > diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> > > index ba861d1716d3..339aebc330db 100644
> > > --- a/kernel/rcu/update.c
> > > +++ b/kernel/rcu/update.c
> > > @@ -296,27 +296,15 @@ EXPORT_SYMBOL_GPL(rcu_read_lock_bh_held);
> > >  
> > >  int rcu_read_lock_any_held(void)
> > >  {
> > > -	int lockdep_opinion = 0;
> > > -
> > >  	if (!debug_lockdep_rcu_enabled())
> > >  		return 1;
> > >  	if (!rcu_is_watching())
> > >  		return 0;
> > >  	if (!rcu_lockdep_current_cpu_online())
> > >  		return 0;
> > > -
> > > -	/* Preemptible RCU flavor */
> > > -	if (lock_is_held(&rcu_lock_map))
> > > -		return 1;
> > > -
> > > -	/* BH flavor */
> > > -	if (in_softirq() || irqs_disabled())
> > > -		return 1;
> > > -
> > > -	/* Sched flavor */
> > > -	if (debug_locks)
> > > -		lockdep_opinion = lock_is_held(&rcu_sched_lock_map);
> > > -	return lockdep_opinion || !preemptible();
> > > +	if (lock_is_held(&rcu_lock_map) || lock_is_held(&rcu_sched_lock_map))
> > 
> > OK, I will bite...  Why not also lock_is_held(&rcu_bh_lock_map)?
> 
> Hmm, I was borrowing the strategy from rcu_read_lock_bh_held() which does not
> check for a lock held in this map.
> 
> Honestly, even  lock_is_held(&rcu_sched_lock_map) seems unnecessary per-se
> since !preemptible() will catch that? rcu_read_lock_sched() disables
> preemption already, so lockdep's opinion of the matter seems redundant there.

Good point!  At least as long as the lockdep splats list RCU-bh among
the locks held, which they did last I checked.

Of course, you could make the same argument for getting rid of
rcu_sched_lock_map.  Does it make sense to have the one without
the other?

> Sorry I already sent out patches again before seeing your comment but I can
> rework and resend them based on any other suggestions.

Not a problem!

							Thanx, Paul
Joel Fernandes July 12, 2019, 7:40 p.m. UTC | #12
On Fri, Jul 12, 2019 at 10:46:30AM -0700, Paul E. McKenney wrote:
> On Fri, Jul 12, 2019 at 01:06:31PM -0400, Joel Fernandes wrote:
> > On Fri, Jul 12, 2019 at 09:45:31AM -0700, Paul E. McKenney wrote:
> > > On Fri, Jul 12, 2019 at 11:10:51AM -0400, Joel Fernandes wrote:
> > > > On Fri, Jul 12, 2019 at 01:11:25PM +0200, Peter Zijlstra wrote:
> > > > > On Thu, Jul 11, 2019 at 07:43:56PM -0400, Joel Fernandes (Google) wrote:
> > > > > > +int rcu_read_lock_any_held(void)
> > > > > > +{
> > > > > > +	int lockdep_opinion = 0;
> > > > > > +
> > > > > > +	if (!debug_lockdep_rcu_enabled())
> > > > > > +		return 1;
> > > > > > +	if (!rcu_is_watching())
> > > > > > +		return 0;
> > > > > > +	if (!rcu_lockdep_current_cpu_online())
> > > > > > +		return 0;
> > > > > > +
> > > > > > +	/* Preemptible RCU flavor */
> > > > > > +	if (lock_is_held(&rcu_lock_map))
> > > > > 
> > > > > you forgot debug_locks here.
> > > > 
> > > > Actually, it turns out debug_locks checking is not even needed. If
> > > > debug_locks == 0, then debug_lockdep_rcu_enabled() returns 0 and we would not
> > > > get to this point.
> > > > 
> > > > > > +		return 1;
> > > > > > +
> > > > > > +	/* BH flavor */
> > > > > > +	if (in_softirq() || irqs_disabled())
> > > > > 
> > > > > I'm not sure I'd put irqs_disabled() under BH, also this entire
> > > > > condition is superfluous, see below.
> > > > > 
> > > > > > +		return 1;
> > > > > > +
> > > > > > +	/* Sched flavor */
> > > > > > +	if (debug_locks)
> > > > > > +		lockdep_opinion = lock_is_held(&rcu_sched_lock_map);
> > > > > > +	return lockdep_opinion || !preemptible();
> > > > > 
> > > > > that !preemptible() turns into:
> > > > > 
> > > > >   !(preempt_count()==0 && !irqs_disabled())
> > > > > 
> > > > > which is:
> > > > > 
> > > > >   preempt_count() != 0 || irqs_disabled()
> > > > > 
> > > > > and already includes irqs_disabled() and in_softirq().
> > > > > 
> > > > > > +}
> > > > > 
> > > > > So maybe something lke:
> > > > > 
> > > > > 	if (debug_locks && (lock_is_held(&rcu_lock_map) ||
> > > > > 			    lock_is_held(&rcu_sched_lock_map)))
> > > > > 		return true;
> > > > 
> > > > Agreed, I will do it this way (without the debug_locks) like:
> > > > 
> > > > ---8<-----------------------
> > > > 
> > > > diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> > > > index ba861d1716d3..339aebc330db 100644
> > > > --- a/kernel/rcu/update.c
> > > > +++ b/kernel/rcu/update.c
> > > > @@ -296,27 +296,15 @@ EXPORT_SYMBOL_GPL(rcu_read_lock_bh_held);
> > > >  
> > > >  int rcu_read_lock_any_held(void)
> > > >  {
> > > > -	int lockdep_opinion = 0;
> > > > -
> > > >  	if (!debug_lockdep_rcu_enabled())
> > > >  		return 1;
> > > >  	if (!rcu_is_watching())
> > > >  		return 0;
> > > >  	if (!rcu_lockdep_current_cpu_online())
> > > >  		return 0;
> > > > -
> > > > -	/* Preemptible RCU flavor */
> > > > -	if (lock_is_held(&rcu_lock_map))
> > > > -		return 1;
> > > > -
> > > > -	/* BH flavor */
> > > > -	if (in_softirq() || irqs_disabled())
> > > > -		return 1;
> > > > -
> > > > -	/* Sched flavor */
> > > > -	if (debug_locks)
> > > > -		lockdep_opinion = lock_is_held(&rcu_sched_lock_map);
> > > > -	return lockdep_opinion || !preemptible();
> > > > +	if (lock_is_held(&rcu_lock_map) || lock_is_held(&rcu_sched_lock_map))
> > > 
> > > OK, I will bite...  Why not also lock_is_held(&rcu_bh_lock_map)?
> > 
> > Hmm, I was borrowing the strategy from rcu_read_lock_bh_held() which does not
> > check for a lock held in this map.
> > 
> > Honestly, even  lock_is_held(&rcu_sched_lock_map) seems unnecessary per-se
> > since !preemptible() will catch that? rcu_read_lock_sched() disables
> > preemption already, so lockdep's opinion of the matter seems redundant there.
> 
> Good point!  At least as long as the lockdep splats list RCU-bh among
> the locks held, which they did last I checked.
> 
> Of course, you could make the same argument for getting rid of
> rcu_sched_lock_map.  Does it make sense to have the one without
> the other?

It probably makes things inconsistent, at the least. I will add the check for
rcu_bh_lock_map in a separate patch, if that's OK with you, since I also
want to update the rcu_read_lock_bh_held() logic in the same patch.

That rcu_read_lock_bh_held() could also just return !preemptible(), as Peter
suggested, for the bh case.
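
(Today, after the same debug_lockdep_rcu_enabled() / rcu_is_watching() /
cpu-online checks, rcu_read_lock_bh_held() ends with roughly:

	return in_softirq() || irqs_disabled();

so the change would be confined to that final condition.)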

> > Sorry I already sent out patches again before seeing your comment but I can
> > rework and resend them based on any other suggestions.
> 
> Not a problem!

Thanks. Depending on whether there is any other feedback, I will work on the
bh_ stuff as a separate patch on top of this series, or work it into the next
series revision if I'm reposting. Hopefully that sounds Ok to you.

thanks,

 - Joel
Paul E. McKenney July 12, 2019, 11:27 p.m. UTC | #13
On Fri, Jul 12, 2019 at 03:40:40PM -0400, Joel Fernandes wrote:
> On Fri, Jul 12, 2019 at 10:46:30AM -0700, Paul E. McKenney wrote:
> > On Fri, Jul 12, 2019 at 01:06:31PM -0400, Joel Fernandes wrote:
> > > On Fri, Jul 12, 2019 at 09:45:31AM -0700, Paul E. McKenney wrote:
> > > > On Fri, Jul 12, 2019 at 11:10:51AM -0400, Joel Fernandes wrote:
> > > > > On Fri, Jul 12, 2019 at 01:11:25PM +0200, Peter Zijlstra wrote:
> > > > > > On Thu, Jul 11, 2019 at 07:43:56PM -0400, Joel Fernandes (Google) wrote:
> > > > > > > +int rcu_read_lock_any_held(void)
> > > > > > > +{
> > > > > > > +	int lockdep_opinion = 0;
> > > > > > > +
> > > > > > > +	if (!debug_lockdep_rcu_enabled())
> > > > > > > +		return 1;
> > > > > > > +	if (!rcu_is_watching())
> > > > > > > +		return 0;
> > > > > > > +	if (!rcu_lockdep_current_cpu_online())
> > > > > > > +		return 0;
> > > > > > > +
> > > > > > > +	/* Preemptible RCU flavor */
> > > > > > > +	if (lock_is_held(&rcu_lock_map))
> > > > > > 
> > > > > > you forgot debug_locks here.
> > > > > 
> > > > > Actually, it turns out debug_locks checking is not even needed. If
> > > > > debug_locks == 0, then debug_lockdep_rcu_enabled() returns 0 and we would not
> > > > > get to this point.
> > > > > 
> > > > > > > +		return 1;
> > > > > > > +
> > > > > > > +	/* BH flavor */
> > > > > > > +	if (in_softirq() || irqs_disabled())
> > > > > > 
> > > > > > I'm not sure I'd put irqs_disabled() under BH, also this entire
> > > > > > condition is superfluous, see below.
> > > > > > 
> > > > > > > +		return 1;
> > > > > > > +
> > > > > > > +	/* Sched flavor */
> > > > > > > +	if (debug_locks)
> > > > > > > +		lockdep_opinion = lock_is_held(&rcu_sched_lock_map);
> > > > > > > +	return lockdep_opinion || !preemptible();
> > > > > > 
> > > > > > that !preemptible() turns into:
> > > > > > 
> > > > > >   !(preempt_count()==0 && !irqs_disabled())
> > > > > > 
> > > > > > which is:
> > > > > > 
> > > > > >   preempt_count() != 0 || irqs_disabled()
> > > > > > 
> > > > > > and already includes irqs_disabled() and in_softirq().
> > > > > > 
> > > > > > > +}
> > > > > > 
> > > > > > So maybe something lke:
> > > > > > 
> > > > > > 	if (debug_locks && (lock_is_held(&rcu_lock_map) ||
> > > > > > 			    lock_is_held(&rcu_sched_lock_map)))
> > > > > > 		return true;
> > > > > 
> > > > > Agreed, I will do it this way (without the debug_locks) like:
> > > > > 
> > > > > ---8<-----------------------
> > > > > 
> > > > > diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> > > > > index ba861d1716d3..339aebc330db 100644
> > > > > --- a/kernel/rcu/update.c
> > > > > +++ b/kernel/rcu/update.c
> > > > > @@ -296,27 +296,15 @@ EXPORT_SYMBOL_GPL(rcu_read_lock_bh_held);
> > > > >  
> > > > >  int rcu_read_lock_any_held(void)
> > > > >  {
> > > > > -	int lockdep_opinion = 0;
> > > > > -
> > > > >  	if (!debug_lockdep_rcu_enabled())
> > > > >  		return 1;
> > > > >  	if (!rcu_is_watching())
> > > > >  		return 0;
> > > > >  	if (!rcu_lockdep_current_cpu_online())
> > > > >  		return 0;
> > > > > -
> > > > > -	/* Preemptible RCU flavor */
> > > > > -	if (lock_is_held(&rcu_lock_map))
> > > > > -		return 1;
> > > > > -
> > > > > -	/* BH flavor */
> > > > > -	if (in_softirq() || irqs_disabled())
> > > > > -		return 1;
> > > > > -
> > > > > -	/* Sched flavor */
> > > > > -	if (debug_locks)
> > > > > -		lockdep_opinion = lock_is_held(&rcu_sched_lock_map);
> > > > > -	return lockdep_opinion || !preemptible();
> > > > > +	if (lock_is_held(&rcu_lock_map) || lock_is_held(&rcu_sched_lock_map))
> > > > 
> > > > OK, I will bite...  Why not also lock_is_held(&rcu_bh_lock_map)?
> > > 
> > > Hmm, I was borrowing the strategy from rcu_read_lock_bh_held() which does not
> > > check for a lock held in this map.
> > > 
> > > Honestly, even  lock_is_held(&rcu_sched_lock_map) seems unnecessary per-se
> > > since !preemptible() will catch that? rcu_read_lock_sched() disables
> > > preemption already, so lockdep's opinion of the matter seems redundant there.
> > 
> > Good point!  At least as long as the lockdep splats list RCU-bh among
> > the locks held, which they did last I checked.
> > 
> > Of course, you could make the same argument for getting rid of
> > rcu_sched_lock_map.  Does it make sense to have the one without
> > the other?
> 
> It probably makes it inconsistent in the least. I will add the check for
> the rcu_bh_lock_map in a separate patch, if that's Ok with you - since I also
> want to update the rcu_read_lock_bh_held() logic in the same patch.
> 
> That rcu_read_lock_bh_held() could also just return !preemptible as Peter
> suggested for the bh case.

Although that seems reasonable, please check the call sites.

> > > Sorry I already sent out patches again before seeing your comment but I can
> > > rework and resend them based on any other suggestions.
> > 
> > Not a problem!
> 
> Thanks. Depending on whether there is any other feedback, I will work on the
> bh_ stuff as a separate patch on top of this series, or work it into the next
> series revision if I'm reposting. Hopefully that sounds Ok to you.

Agreed -- let's separate concerns.  And promote bisectability.

							Thanx, Paul

Patch

diff --git a/include/linux/rculist.h b/include/linux/rculist.h
index e91ec9ddcd30..78c15ec6b2c9 100644
--- a/include/linux/rculist.h
+++ b/include/linux/rculist.h
@@ -40,6 +40,23 @@  static inline void INIT_LIST_HEAD_RCU(struct list_head *list)
  */
 #define list_next_rcu(list)	(*((struct list_head __rcu **)(&(list)->next)))
 
+/*
+ * Check during list traversal that we are within an RCU reader
+ */
+
+#define SIXTH_ARG(a1, a2, a3, a4, a5, a6, ...) a6
+#define COUNT_VARGS(...) SIXTH_ARG(dummy, ## __VA_ARGS__, 4, 3, 2, 1, 0)
+
+#ifdef CONFIG_PROVE_RCU_LIST
+#define __list_check_rcu(dummy, cond, ...)				\
+	({								\
+	RCU_LOCKDEP_WARN(!cond && !rcu_read_lock_any_held(),		\
+			 "RCU-list traversed in non-reader section!");	\
+	 })
+#else
+#define __list_check_rcu(dummy, cond, ...) ({})
+#endif
+
 /*
  * Insert a new entry between two known consecutive entries.
  *
@@ -348,9 +365,10 @@  static inline void list_splice_tail_init_rcu(struct list_head *list,
  * the _rcu list-mutation primitives such as list_add_rcu()
  * as long as the traversal is guarded by rcu_read_lock().
  */
-#define list_for_each_entry_rcu(pos, head, member) \
-	for (pos = list_entry_rcu((head)->next, typeof(*pos), member); \
-		&pos->member != (head); \
+#define list_for_each_entry_rcu(pos, head, member, cond...)		\
+	for (__list_check_rcu(dummy, ## cond, 0),			\
+	     pos = list_entry_rcu((head)->next, typeof(*pos), member);	\
+		&pos->member != (head);					\
 		pos = list_entry_rcu(pos->member.next, typeof(*pos), member))
 
 /**
@@ -621,8 +639,9 @@  static inline void hlist_add_behind_rcu(struct hlist_node *n,
  * the _rcu list-mutation primitives such as hlist_add_head_rcu()
  * as long as the traversal is guarded by rcu_read_lock().
  */
-#define hlist_for_each_entry_rcu(pos, head, member)			\
-	for (pos = hlist_entry_safe (rcu_dereference_raw(hlist_first_rcu(head)),\
+#define hlist_for_each_entry_rcu(pos, head, member, cond...)		\
+	for (__list_check_rcu(dummy, ## cond, 0),			\
+	     pos = hlist_entry_safe (rcu_dereference_raw(hlist_first_rcu(head)),\
 			typeof(*(pos)), member);			\
 		pos;							\
 		pos = hlist_entry_safe(rcu_dereference_raw(hlist_next_rcu(\
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 922bb6848813..712b464ab960 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -223,6 +223,7 @@  int debug_lockdep_rcu_enabled(void);
 int rcu_read_lock_held(void);
 int rcu_read_lock_bh_held(void);
 int rcu_read_lock_sched_held(void);
+int rcu_read_lock_any_held(void);
 
 #else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
 
@@ -243,6 +244,12 @@  static inline int rcu_read_lock_sched_held(void)
 {
 	return !preemptible();
 }
+
+static inline int rcu_read_lock_any_held(void)
+{
+	return !preemptible();
+}
+
 #endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */
 
 #ifdef CONFIG_PROVE_RCU
diff --git a/kernel/rcu/Kconfig.debug b/kernel/rcu/Kconfig.debug
index 0ec7d1d33a14..b20d0e2903d1 100644
--- a/kernel/rcu/Kconfig.debug
+++ b/kernel/rcu/Kconfig.debug
@@ -7,6 +7,17 @@  menu "RCU Debugging"
 config PROVE_RCU
 	def_bool PROVE_LOCKING
 
+config PROVE_RCU_LIST
+	bool "RCU list lockdep debugging"
+	depends on PROVE_RCU
+	default n
+	help
+	  Enable RCU lockdep checking for list usages. By default it is
+	  turned off since there are several list RCU users that still
+	  need to be converted to pass a lockdep expression. To prevent
+	  false-positive splats, we keep it default disabled but once all
+	  users are converted, we can remove this config option.
+
 config TORTURE_TEST
 	tristate
 	default n
diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index c3bf44ba42e5..9cb30006a5e1 100644
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -298,6 +298,32 @@  int rcu_read_lock_bh_held(void)
 }
 EXPORT_SYMBOL_GPL(rcu_read_lock_bh_held);
 
+int rcu_read_lock_any_held(void)
+{
+	int lockdep_opinion = 0;
+
+	if (!debug_lockdep_rcu_enabled())
+		return 1;
+	if (!rcu_is_watching())
+		return 0;
+	if (!rcu_lockdep_current_cpu_online())
+		return 0;
+
+	/* Preemptible RCU flavor */
+	if (lock_is_held(&rcu_lock_map))
+		return 1;
+
+	/* BH flavor */
+	if (in_softirq() || irqs_disabled())
+		return 1;
+
+	/* Sched flavor */
+	if (debug_locks)
+		lockdep_opinion = lock_is_held(&rcu_sched_lock_map);
+	return lockdep_opinion || !preemptible();
+}
+EXPORT_SYMBOL_GPL(rcu_read_lock_any_held);
+
 #endif /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
 
 /**