[2/3,net-next] rhashtable: Use spin_lock_bh_nested() consistently

Message ID ad7f4c80fc041ae4aae4e2ed5294ba1f218e6afb.1426257747.git.tgraf@suug.ch
State Changes Requested, archived
Delegated to: David Miller

Commit Message

Thomas Graf March 13, 2015, 2:45 p.m. UTC
No change in behaviour as the outer lock already disables softirq
but it prevents bugs down the line as this lock logically requires
the BH variant.

Signed-off-by: Thomas Graf <tgraf@suug.ch>
---
 lib/rhashtable.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

Comments

David Miller March 13, 2015, 4:54 p.m. UTC | #1
From: Thomas Graf <tgraf@suug.ch>
Date: Fri, 13 Mar 2015 15:45:20 +0100

> No change in behaviour as the outer lock already disables softirq
> but it prevents bugs down the line as this lock logically requires
> the BH variant.
> 
> Signed-off-by: Thomas Graf <tgraf@suug.ch>

I would prefer you don't do this.

x_bh() may be relatively cheap, but it is not zero cost.

If there is an invariant that when we are called here BH
is disabled, make it explicit.
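For illustration, one way to make such an invariant explicit, instead of disabling BH a second time, is a debug assertion next to the nested lock. This is only a sketch using the bucket_lock() context from the patch, not something proposed in the thread; WARN_ON_ONCE() and in_softirq() are existing kernel helpers.

    /*
     * Sketch: assert that the caller already disabled BH via the outer
     * spin_lock_bh() rather than taking the nested lock with _bh again.
     */
    new_bucket_lock = bucket_lock(new_tbl, new_hash);

    WARN_ON_ONCE(!in_softirq());    /* outer bucket lock must have disabled BH */
    spin_lock_nested(new_bucket_lock, SINGLE_DEPTH_NESTING);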
Herbert Xu March 14, 2015, 2:21 a.m. UTC | #2
On Fri, Mar 13, 2015 at 12:54:11PM -0400, David Miller wrote:
> From: Thomas Graf <tgraf@suug.ch>
> Date: Fri, 13 Mar 2015 15:45:20 +0100
> 
> > No change in behaviour as the outer lock already disables softirq
> > but it prevents bugs down the line as this lock logically requires
> > the BH variant.
> > 
> > Signed-off-by: Thomas Graf <tgraf@suug.ch>
> 
> I would prefer you don't do this.
> 
> x_bh() may be relatively cheap, but it is not zero cost.
> 
> If there is an invariant that when we are called here BH
> is disabled, make it explicit.

Agreed.  I dropped the _bh precisely for this reason when I did
the arbitrary rehash.  Please don't add it back because it serves
zero purpose.  Only the outside lock should do _bh while the
nested one should not.

Cheers,
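A minimal sketch of the locking pattern described here, simplified from the rehash path in lib/rhashtable.c rather than quoted from it: the outer bucket lock takes the _bh variant and keeps BH disabled for the whole section, so the nested lock on the new table's bucket can use the plain variant.

    old_bucket_lock = bucket_lock(old_tbl, old_hash);
    spin_lock_bh(old_bucket_lock);                    /* outer lock: disables BH */

    new_bucket_lock = bucket_lock(new_tbl, new_hash);
    spin_lock_nested(new_bucket_lock,                 /* nested lock: BH already off */
                     SINGLE_DEPTH_NESTING);

    /* ... unlink the entry from the old bucket and link it into the new one ... */

    spin_unlock(new_bucket_lock);                     /* plain unlock: BH stays off */
    spin_unlock_bh(old_bucket_lock);                  /* outer unlock re-enables BH */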
Thomas Graf March 16, 2015, 8:44 a.m. UTC | #3
On 03/14/15 at 01:21pm, Herbert Xu wrote:
> On Fri, Mar 13, 2015 at 12:54:11PM -0400, David Miller wrote:
> > From: Thomas Graf <tgraf@suug.ch>
> > Date: Fri, 13 Mar 2015 15:45:20 +0100
> > 
> > > No change in behaviour as the outer lock already disables softirq
> > > but it prevents bugs down the line as this lock logically requires
> > > the BH variant.
> > > 
> > > Signed-off-by: Thomas Graf <tgraf@suug.ch>
> > 
> > I would prefer you don't do this.

OK, I'm dropping this patch.

> > x_bh() may be relatively cheap, but it is not zero cost.
> > 
> > If there is an invariant that when we are called here BH
> > is disabled, make it explicit.

I assume you are referring to the preempt disabled case. Fair enough.

> Agreed.  I dropped the _bh precisely for this reason when I did
> the arbitrary rehash.  Please don't add it back because it serves
> zero purpose.  Only the outside lock should do _bh while the
> nested one should not.

A note in the commit would have helped; it looked like an
accidental change.

Patch

diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 963aa03..4de97e1f 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -233,7 +233,7 @@  static int rhashtable_rehash_one(struct rhashtable *ht, unsigned old_hash)
 
 	new_bucket_lock = bucket_lock(new_tbl, new_hash);
 
-	spin_lock_nested(new_bucket_lock, SINGLE_DEPTH_NESTING);
+	spin_lock_bh_nested(new_bucket_lock, SINGLE_DEPTH_NESTING);
 	head = rht_dereference_bucket(new_tbl->buckets[new_hash],
 				      new_tbl, new_hash);
 
@@ -243,7 +243,7 @@  static int rhashtable_rehash_one(struct rhashtable *ht, unsigned old_hash)
 		RCU_INIT_POINTER(entry->next, head);
 
 	rcu_assign_pointer(new_tbl->buckets[new_hash], entry);
-	spin_unlock(new_bucket_lock);
+	spin_unlock_bh(new_bucket_lock);
 
 	rcu_assign_pointer(*pprev, next);
 
@@ -404,7 +404,7 @@  static bool __rhashtable_insert(struct rhashtable *ht, struct rhash_head *obj,
 	tbl = rht_dereference_rcu(old_tbl->future_tbl, ht) ?: old_tbl;
 	if (tbl != old_tbl) {
 		hash = head_hashfn(ht, tbl, obj);
-		spin_lock_nested(bucket_lock(tbl, hash), SINGLE_DEPTH_NESTING);
+		spin_lock_bh_nested(bucket_lock(tbl, hash), SINGLE_DEPTH_NESTING);
 	}
 
 	if (compare &&
@@ -431,7 +431,7 @@  static bool __rhashtable_insert(struct rhashtable *ht, struct rhash_head *obj,
 
 exit:
 	if (tbl != old_tbl)
-		spin_unlock(bucket_lock(tbl, hash));
+		spin_unlock_bh(bucket_lock(tbl, hash));
 
 	spin_unlock_bh(old_lock);