From patchwork Tue Jun 30 10:41:06 2009
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 29314
X-Patchwork-Delegate: davem@davemloft.net
Date: Tue, 30 Jun 2009 12:41:06 +0200
From: Jiri Olsa
To: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, fbl@redhat.com, nhorman@redhat.com, davem@redhat.com, htejun@gmail.com, jarkao2@gmail.com, oleg@redhat.com, davidel@xmailserver.org, eric.dumazet@gmail.com
Subject: [PATCHv2 2/2] memory barrier: adding smp_mb__after_lock
Message-ID: <20090630104106.GD9657@jolsa.lab.eng.brq.redhat.com>
In-Reply-To: <20090630103642.GB9657@jolsa.lab.eng.brq.redhat.com>
X-Mailing-List: netdev@vger.kernel.org

Add an smp_mb__after_lock() define, to be used as an smp_mb() call after
taking a lock. Make it a nop on x86, since {read|write|spin}_lock() on
x86 are already full memory barriers.

wbr, jirka

Signed-off-by: Jiri Olsa
---
 arch/x86/include/asm/spinlock.h |    3 +++
 include/linux/spinlock.h        |    5 +++++
 include/net/sock.h              |    2 +-
 3 files changed, 9 insertions(+), 1 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index b7e5db8..39ecc5f 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -302,4 +302,7 @@ static inline void __raw_write_unlock(raw_rwlock_t *rw)
 #define _raw_read_relax(lock)	cpu_relax()
 #define _raw_write_relax(lock)	cpu_relax()
 
+/* The {read|write|spin}_lock() on x86 are full memory barriers. */
+#define smp_mb__after_lock() do { } while (0)
+
 #endif /* _ASM_X86_SPINLOCK_H */
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 252b245..ae053bd 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -132,6 +132,11 @@ do {								\
 #endif /*__raw_spin_is_contended*/
 #endif
 
+/* The lock does not imply full memory barrier. */
+#ifndef smp_mb__after_lock
+#define smp_mb__after_lock() smp_mb()
+#endif
+
 /**
  * spin_unlock_wait - wait until the spinlock gets unlocked
  * @lock: the spinlock in question.
diff --git a/include/net/sock.h b/include/net/sock.h
index a12df10..0d57e83 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1277,7 +1277,7 @@ static inline void sock_poll_wait(struct file *filp,
 	 *
 	 * This memory barrier is paired in the sk_has_sleeper.
 	 */
-	smp_mb();
+	smp_mb__after_lock();
 }
 }