
[PATCHv4,2/2] memory barrier: adding smp_mb__after_lock

Message ID 20090702063624.GC3429@jolsa.lab.eng.brq.redhat.com
State Superseded, archived
Delegated to: David Miller

Commit Message

Jiri Olsa July 2, 2009, 6:36 a.m. UTC
Adding an smp_mb__after_lock define to be used as an smp_mb call after
a lock.

Making it a nop for x86, since {read|write|spin}_lock() on x86 are
full memory barriers.

wbr,
jirka


Signed-off-by: Jiri Olsa <jolsa@redhat.com>

---
 arch/x86/include/asm/spinlock.h |    3 +++
 include/linux/spinlock.h        |    5 +++++
 include/net/sock.h              |    2 +-
 3 files changed, 9 insertions(+), 1 deletions(-)

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Comments

Eric Dumazet July 2, 2009, 6:53 a.m. UTC | #1
Jiri Olsa wrote:
> Adding smp_mb__after_lock define to be used as a smp_mb call after
> a lock.  
> 
> Making it nop for x86, since {read|write|spin}_lock() on x86 are 
> full memory barriers.
> 
> wbr,
> jirka
> 
> 
> Signed-off-by: Jiri Olsa <jolsa@redhat.com>


Maybe we should mention that sk_has_sleeper() is always called
right after a call to read_lock(), as in:

	read_lock(&sk->sk_callback_lock);
	if (sk_has_sleeper(sk))
		wake_up_interruptible_all(sk->sk_sleep);

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>

Thanks Jiri

> 
> ---
>  arch/x86/include/asm/spinlock.h |    3 +++
>  include/linux/spinlock.h        |    5 +++++
>  include/net/sock.h              |    2 +-
>  3 files changed, 9 insertions(+), 1 deletions(-)
> 
> diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
> index b7e5db8..39ecc5f 100644
> --- a/arch/x86/include/asm/spinlock.h
> +++ b/arch/x86/include/asm/spinlock.h
> @@ -302,4 +302,7 @@ static inline void __raw_write_unlock(raw_rwlock_t *rw)
>  #define _raw_read_relax(lock)	cpu_relax()
>  #define _raw_write_relax(lock)	cpu_relax()
>  
> +/* The {read|write|spin}_lock() on x86 are full memory barriers. */
> +#define smp_mb__after_lock() do { } while (0)
> +
>  #endif /* _ASM_X86_SPINLOCK_H */
> diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
> index 252b245..ae053bd 100644
> --- a/include/linux/spinlock.h
> +++ b/include/linux/spinlock.h
> @@ -132,6 +132,11 @@ do {								\
>  #endif /*__raw_spin_is_contended*/
>  #endif
>  
> +/* The lock does not imply full memory barrier. */
> +#ifndef smp_mb__after_lock
> +#define smp_mb__after_lock() smp_mb()
> +#endif
> +
>  /**
>   * spin_unlock_wait - wait until the spinlock gets unlocked
>   * @lock: the spinlock in question.
> diff --git a/include/net/sock.h b/include/net/sock.h
> index 4eb8409..b3e96a4 100644
> --- a/include/net/sock.h
> +++ b/include/net/sock.h
> @@ -1280,7 +1280,7 @@ static inline int sk_has_sleeper(struct sock *sk)
>  	 *
>  	 * This memory barrier is paired in the sock_poll_wait.
>  	 */
> -	smp_mb();
> +	smp_mb__after_lock();
>  	return sk->sk_sleep && waitqueue_active(sk->sk_sleep);
>  }
>  
> --

Davide Libenzi July 2, 2009, 2:39 p.m. UTC | #2
On Thu, 2 Jul 2009, Eric Dumazet wrote:

> Jiri Olsa wrote:
> > Adding smp_mb__after_lock define to be used as a smp_mb call after
> > a lock.  
> > 
> > Making it nop for x86, since {read|write|spin}_lock() on x86 are 
> > full memory barriers.
> > 
> > wbr,
> > jirka
> > 
> > 
> > Signed-off-by: Jiri Olsa <jolsa@redhat.com>
> 
> 
> Maybe we should mention that sk_has_sleeper() is always called
> right after a call to read_lock(), as in:
> 
> 	read_lock(&sk->sk_callback_lock);
> 	if (sk_has_sleeper(sk))
> 		wake_up_interruptible_all(sk->sk_sleep);

Agreed, that'd be good to have in the source code comment.


- Davide
Jiri Olsa July 3, 2009, 7:41 a.m. UTC | #3
On Thu, Jul 02, 2009 at 07:39:04AM -0700, Davide Libenzi wrote:
> On Thu, 2 Jul 2009, Eric Dumazet wrote:
> 
> > Jiri Olsa wrote:
> > > Adding smp_mb__after_lock define to be used as a smp_mb call after
> > > a lock.  
> > > 
> > > Making it nop for x86, since {read|write|spin}_lock() on x86 are 
> > > full memory barriers.
> > > 
> > > wbr,
> > > jirka
> > > 
> > > 
> > > Signed-off-by: Jiri Olsa <jolsa@redhat.com>
> > 
> > 
> > Maybe we should mention that sk_has_sleeper() is always called
> > right after a call to read_lock(), as in:
> > 
> > 	read_lock(&sk->sk_callback_lock);
> > 	if (sk_has_sleeper(sk))
> > 		wake_up_interruptible_all(sk->sk_sleep);
> 
> Agreed, that'd be good to have in the source code comment.
> 
> 
> - Davide
> 

ok, I'll add it to the 1/2 part in v5

jirka
Jarek Poplawski July 3, 2009, 7:47 a.m. UTC | #4
On Fri, Jul 03, 2009 at 09:41:26AM +0200, Jiri Olsa wrote:
> On Thu, Jul 02, 2009 at 07:39:04AM -0700, Davide Libenzi wrote:
> > On Thu, 2 Jul 2009, Eric Dumazet wrote:
> > 
> > > Jiri Olsa wrote:
> > > > Adding smp_mb__after_lock define to be used as a smp_mb call after
> > > > a lock.  
> > > > 
> > > > Making it nop for x86, since {read|write|spin}_lock() on x86 are 
> > > > full memory barriers.
> > > > 
> > > > wbr,
> > > > jirka
> > > > 
> > > > 
> > > > Signed-off-by: Jiri Olsa <jolsa@redhat.com>
> > > 
> > > 
> > > Maybe we should mention that sk_has_sleeper() is always called
> > > right after a call to read_lock(), as in:
> > > 
> > > 	read_lock(&sk->sk_callback_lock);
> > > 	if (sk_has_sleeper(sk))
> > > 		wake_up_interruptible_all(sk->sk_sleep);
> > 
> > Agreed, that'd be good to have in the source code comment.
> > 
> > 
> > - Davide
> > 
> 
> ok, I'll add it to the 1/2 part in v5
> 

Btw., there is a tiny typo:

- receive callbacks. Adding fuctions sock_poll_wait and sock_has_sleeper
+ receive callbacks. Adding fuctions sock_poll_wait and sk_has_sleeper

Jarek P.
Jiri Olsa July 3, 2009, 7:50 a.m. UTC | #5
On Fri, Jul 03, 2009 at 09:41:26AM +0200, Jiri Olsa wrote:
> On Thu, Jul 02, 2009 at 07:39:04AM -0700, Davide Libenzi wrote:
> > On Thu, 2 Jul 2009, Eric Dumazet wrote:
> > 
> > > Jiri Olsa wrote:
> > > > Adding smp_mb__after_lock define to be used as a smp_mb call after
> > > > a lock.  
> > > > 
> > > > Making it nop for x86, since {read|write|spin}_lock() on x86 are 
> > > > full memory barriers.
> > > > 
> > > > wbr,
> > > > jirka
> > > > 
> > > > 
> > > > Signed-off-by: Jiri Olsa <jolsa@redhat.com>
> > > 
> > > 
> > > Maybe we should mention that sk_has_sleeper() is always called
> > > right after a call to read_lock(), as in:
> > > 
> > > 	read_lock(&sk->sk_callback_lock);
> > > 	if (sk_has_sleeper(sk))
> > > 		wake_up_interruptible_all(sk->sk_sleep);
> > 
> > Agreed, that'd be good to have in the source code comment.
> > 
> > 
> > - Davide
> > 
> 
> ok, I'll add it to the 1/2 part in v5
> 
> jirka

actually I see the 2/2 would be better :)

jirka
Jiri Olsa July 3, 2009, 7:51 a.m. UTC | #6
On Fri, Jul 03, 2009 at 07:47:31AM +0000, Jarek Poplawski wrote:
> On Fri, Jul 03, 2009 at 09:41:26AM +0200, Jiri Olsa wrote:
> > On Thu, Jul 02, 2009 at 07:39:04AM -0700, Davide Libenzi wrote:
> > > On Thu, 2 Jul 2009, Eric Dumazet wrote:
> > > 
> > > > Jiri Olsa wrote:
> > > > > Adding smp_mb__after_lock define to be used as a smp_mb call after
> > > > > a lock.  
> > > > > 
> > > > > Making it nop for x86, since {read|write|spin}_lock() on x86 are 
> > > > > full memory barriers.
> > > > > 
> > > > > wbr,
> > > > > jirka
> > > > > 
> > > > > 
> > > > > Signed-off-by: Jiri Olsa <jolsa@redhat.com>
> > > > 
> > > > 
> > > > Maybe we should mention that sk_has_sleeper() is always called
> > > > right after a call to read_lock(), as in:
> > > > 
> > > > 	read_lock(&sk->sk_callback_lock);
> > > > 	if (sk_has_sleeper(sk))
> > > > 		wake_up_interruptible_all(sk->sk_sleep);
> > > 
> > > Agreed, that'd be good to have in the source code comment.
> > > 
> > > 
> > > - Davide
> > > 
> > 
> > ok, I'll add it to the 1/2 part in v5
> > 
> 
> Btw., there is a tiny typo:
> 
> - receive callbacks. Adding fuctions sock_poll_wait and sock_has_sleeper
> + receive callbacks. Adding fuctions sock_poll_wait and sk_has_sleeper
> 
> Jarek P.

thanks, jirka

Patch

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index b7e5db8..39ecc5f 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -302,4 +302,7 @@  static inline void __raw_write_unlock(raw_rwlock_t *rw)
 #define _raw_read_relax(lock)	cpu_relax()
 #define _raw_write_relax(lock)	cpu_relax()
 
+/* The {read|write|spin}_lock() on x86 are full memory barriers. */
+#define smp_mb__after_lock() do { } while (0)
+
 #endif /* _ASM_X86_SPINLOCK_H */
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 252b245..ae053bd 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -132,6 +132,11 @@  do {								\
 #endif /*__raw_spin_is_contended*/
 #endif
 
+/* The lock does not imply full memory barrier. */
+#ifndef smp_mb__after_lock
+#define smp_mb__after_lock() smp_mb()
+#endif
+
 /**
  * spin_unlock_wait - wait until the spinlock gets unlocked
  * @lock: the spinlock in question.
diff --git a/include/net/sock.h b/include/net/sock.h
index 4eb8409..b3e96a4 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1280,7 +1280,7 @@  static inline int sk_has_sleeper(struct sock *sk)
 	 *
 	 * This memory barrier is paired in the sock_poll_wait.
 	 */
-	smp_mb();
+	smp_mb__after_lock();
 	return sk->sk_sleep && waitqueue_active(sk->sk_sleep);
 }