[net-next] vxlan: release lock after each bucket in vxlan_cleanup

Message ID 1432626124-24676-1-git-send-email-sorin@returnze.ro
State Accepted, archived
Delegated to: David Miller

Commit Message

Sorin Dumitru May 26, 2015, 7:42 a.m. UTC
We're seeing some softlockups from this function when there
are a lot of fdb entries on a vxlan device. Taking the lock for
each bucket instead of the whole table is enough to fix that.

Signed-off-by: Sorin Dumitru <sdumitru@ixiacom.com>
---
 drivers/net/vxlan.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

Comments

David Miller May 27, 2015, 5:33 p.m. UTC | #1
From: Sorin Dumitru <sorin@returnze.ro>
Date: Tue, 26 May 2015 10:42:04 +0300

> We're seeing some softlockups from this function when there
> are a lot of fdb entries on a vxlan device. Taking the lock for
> each bucket instead of the whole table is enough to fix that.
> 
> Signed-off-by: Sorin Dumitru <sdumitru@ixiacom.com>

Applied, thanks.
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Cong Wang May 27, 2015, 5:44 p.m. UTC | #2
On Tue, May 26, 2015 at 12:42 AM, Sorin Dumitru <sorin@returnze.ro> wrote:
> We're seeing some softlockups from this function when there
> are a lot of fdb entries on a vxlan device. Taking the lock for
> each bucket instead of the whole table is enough to fix that.
>

Hmm, then the spinlock could be moved into each bucket, right?
Sorin Dumitru May 27, 2015, 6:09 p.m. UTC | #3
On Wed, May 27, 2015 at 8:44 PM, Cong Wang <cwang@twopensource.com> wrote:
> On Tue, May 26, 2015 at 12:42 AM, Sorin Dumitru <sorin@returnze.ro> wrote:
>> We're seeing some softlockups from this function when there
>> are a lot of fdb entries on a vxlan device. Taking the lock for
>> each bucket instead of the whole table is enough to fix that.
>>
>
> Hmm, then the spinlock could be moved into each bucket, right?

Yes, it could, but I'm not sure it would benefit us much. I didn't see
much contention on this lock while adding and removing fdb entries.
David Miller May 27, 2015, 6:31 p.m. UTC | #4
From: Cong Wang <cwang@twopensource.com>
Date: Wed, 27 May 2015 10:44:26 -0700

> On Tue, May 26, 2015 at 12:42 AM, Sorin Dumitru <sorin@returnze.ro> wrote:
>> We're seeing some softlockups from this function when there
>> are a lot of fdb entries on a vxlan device. Taking the lock for
>> each bucket instead of the whole table is enough to fix that.
>>
> 
> Hmm, then the spinlock could be moved into each bucket, right?

Just because this one big one-time cleanup operation holds the lock
for a long time doesn't justify making it more granular.

Patch

diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
index 5eddbc0..34c519e 100644
--- a/drivers/net/vxlan.c
+++ b/drivers/net/vxlan.c
@@ -2131,9 +2131,10 @@  static void vxlan_cleanup(unsigned long arg)
 	if (!netif_running(vxlan->dev))
 		return;
 
-	spin_lock_bh(&vxlan->hash_lock);
 	for (h = 0; h < FDB_HASH_SIZE; ++h) {
 		struct hlist_node *p, *n;
+
+		spin_lock_bh(&vxlan->hash_lock);
 		hlist_for_each_safe(p, n, &vxlan->fdb_head[h]) {
 			struct vxlan_fdb *f
 				= container_of(p, struct vxlan_fdb, hlist);
@@ -2152,8 +2153,8 @@  static void vxlan_cleanup(unsigned long arg)
 			} else if (time_before(timeout, next_timer))
 				next_timer = timeout;
 		}
+		spin_unlock_bh(&vxlan->hash_lock);
 	}
-	spin_unlock_bh(&vxlan->hash_lock);
 
 	mod_timer(&vxlan->age_timer, next_timer);
 }