
niu: fix SMP race protecting rx_rings and tx_rings

Message ID 1296758717-18406-1-git-send-email-fleitner@redhat.com
State Rejected, archived
Delegated to: David Miller

Commit Message

Flavio Leitner Feb. 3, 2011, 6:45 p.m. UTC
It's possible to trigger a crash if one CPU is opening
the device while another CPU gets its statistics.

It happens because niu_alloc_channels() called from
niu_open() sets num_tx/rx_rings before allocating the
ring, so the other thread crashes when accessing
np->tx_rings[i].tx_packets at niu_get_tx_stats().

Signed-off-by: Flavio Leitner <fleitner@redhat.com>
---
 drivers/net/niu.c |   29 ++++++++++++++++++++++++++---
 drivers/net/niu.h |    1 +
 2 files changed, 27 insertions(+), 3 deletions(-)

Compile tested only because I don't have the hardware.
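
For reference, the bad interleaving looks roughly like this (a simplified
sketch of the two paths, not the exact driver code):

	/* CPU 0: niu_open() -> niu_alloc_channels() */
	np->num_rx_rings = num_rx_rings;	/* counts become visible first */
	np->num_tx_rings = num_tx_rings;
	/* ... ring arrays are allocated and assigned only later ... */
	np->rx_rings = rx_rings;
	np->tx_rings = tx_rings;

	/* CPU 1: dev_get_stats() -> niu_get_tx_stats() */
	for (i = 0; i < np->num_tx_rings; i++)		/* sees a non-zero count */
		pkts += np->tx_rings[i].tx_packets;	/* but tx_rings may still
							   be NULL -> oops */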

Comments

David Miller Feb. 3, 2011, 10:21 p.m. UTC | #1
From: Flavio Leitner <fleitner@redhat.com>
Date: Thu,  3 Feb 2011 16:45:17 -0200

> It's possible to trigger a crash if one CPU is opening
> the device while another CPU gets its statistics.
> 
> It happens because niu_alloc_channels() called from
> niu_open() sets num_tx/rx_rings before allocating the
> ring, so the other thread crashes when accessing
> np->tx_rings[i].tx_packets at niu_get_tx_stats().
> 
> Signed-off-by: Flavio Leitner <fleitner@redhat.com>
 ...
> Compile tested only because I don't have the hardware.

I'll apply this and give it a quick test, thanks.

Can you have the person who reported this crash to you test the patch
out at least?  That's how you learned about this problem, right,
someone else hit the crash?

In such cases I'd really appreciate it if you got positive testing
feedback from the reporter before posting the patch.

Longer term a better way to fix this is to RCU free the ring data
structures, and use a quick NULL test at the top of the get stats
implementation.
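
Roughly, an untested sketch of that idea (in a real patch the ring count
would probably have to be published together with, or derived from, the
RCU-protected pointer):

	/* reader side, at the top of the get stats path: */
	rcu_read_lock();
	tx_rings = rcu_dereference(np->tx_rings);
	if (tx_rings) {
		for (i = 0; i < np->num_tx_rings; i++)
			pkts += tx_rings[i].tx_packets;
	}
	rcu_read_unlock();

	/* writer side, when tearing the channels down: */
	old = np->tx_rings;
	np->num_tx_rings = 0;
	rcu_assign_pointer(np->tx_rings, NULL);
	synchronize_rcu();	/* wait for readers before freeing */
	kfree(old);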

Flavio Leitner Feb. 3, 2011, 11:18 p.m. UTC | #2
On Thu, Feb 03, 2011 at 02:21:45PM -0800, David Miller wrote:
> From: Flavio Leitner <fleitner@redhat.com>
> Date: Thu,  3 Feb 2011 16:45:17 -0200
> 
> > It's possible to trigger a crash if one CPU is opening
> > the device while another CPU gets its statistics.
> > 
> > It happens because niu_alloc_channels() called from
> > niu_open() sets num_tx/rx_rings before allocating the
> > ring, so the other thread crashes when accessing
> > np->tx_rings[i].tx_packets at niu_get_tx_stats().
> > 
> > Signed-off-by: Flavio Leitner <fleitner@redhat.com>
>  ...
> > Compile tested only because I don't have the hardware.
> 
> I'll apply this and give it a quick test, thanks.
> 
> Can you have the person who reported this crash to you test the patch
> out at least?  That's how you learned about this problem, right,
> someone else hit the crash?
> 
> In such cases I'd really appreciate it if you got positive testing
> feedback from the reporter before posting the patch.
> 
> Longer term a better way to fix this is to RCU free the ring data
> structures, and use a quick NULL test at the top of the get stats
> implementation.

Ok, I can write a new version using RCU if you like.
I chose rwsem because I think those operations aren't
frequent. RCU defers an operation to run later in
a safe context, so I didn't think it was worth the cost.

I'm trying to get testing feedback as well, though I'm not
sure yet if that will be possible. Anyway, I'll update here
when I hear something.

thanks!
David Miller Feb. 3, 2011, 11:22 p.m. UTC | #3
From: Flavio Leitner <fleitner@redhat.com>
Date: Thu, 3 Feb 2011 21:18:05 -0200

> Ok, I can write a new version using RCU if you like.

Don't worry, I'll take care of it, I actually have the
hardware to test on :-)

> I chose rwsem because I think those operations aren't
> frequent. RCU defers an operation to run later in
> a safe context, so I didn't think it was worth the cost.

One thing I'm worried about with your patch is that the statistics
method can be called in basically any context, therefore a sleeping
lock like the rw_semaphore is probably not usable.

In fact I'm going to revert your patch for now because of this issue.
Flavio Leitner Feb. 3, 2011, 11:47 p.m. UTC | #4
On Thu, Feb 03, 2011 at 03:22:14PM -0800, David Miller wrote:
> From: Flavio Leitner <fleitner@redhat.com>
> Date: Thu, 3 Feb 2011 21:18:05 -0200
> 
> > Ok, I can write a new version using RCU if you like.
> 
> Don't worry, I'll take care of it, I actually have the
> hardware to test on :-)

Ok. Having the hardware around indeed helps :)

> > I chose rwsem because I think those operations aren't
> > frequent. RCU defers an operation to run later in
> > a safe context, so I didn't think it was worth the cost.
> 
> One thing I'm worried about with your patch is that the statistics
> method can be called in basically any context, therefore a sleeping
> lock like the rw_semaphore is probably not usable.
> 
> In fact I'm going to revert your patch for now because of this issue.

Oops, I didn't see it there.  I thought all calls there were in
process context.  Anyway, RCU then...

thanks for reviewing it,
David Miller Feb. 3, 2011, 11:53 p.m. UTC | #5
From: Flavio Leitner <fleitner@redhat.com>
Date: Thu, 3 Feb 2011 21:47:48 -0200

> On Thu, Feb 03, 2011 at 03:22:14PM -0800, David Miller wrote:
>> One thing I'm worried about with your patch is that the statistics
>> method can be called in basically any context, therefore a sleeping
>> lock like the rw_semaphore is probably not usable.
>> 
>> In fact I'm going to revert your patch for now because of this issue.
> 
> Oops, I didn't see it there.  I thought all calls there were in
> process context.  Anyway, RCU then...

Yes, one example is the dev_get_stats() call made in
drivers/net/bonding/bond_main.c, which occurs with a
rw spinlock held and BH disabled.

> thanks for reviewing it,

No problem.  I think I even have a way to fix this without using RCU
(directly).  Just some memory barriers during allocation of the
channels.
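
Roughly: publish the ring pointers before the counts on the allocation
side, and pair the barrier on the stats side (just a sketch, the actual
patch may end up looking different):

	/* niu_alloc_channels(): make the rings visible before the counts */
	np->rx_rings = rx_rings;
	np->tx_rings = tx_rings;
	smp_wmb();
	np->num_rx_rings = num_rx_rings;
	np->num_tx_rings = num_tx_rings;

	/* niu_get_tx_stats(): read the count, then the rings */
	num = np->num_tx_rings;
	smp_rmb();
	for (i = 0; i < num; i++)
		pkts += np->tx_rings[i].tx_packets;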

I'll post the patch after I'm done with it, thanks again Flavio.

Patch

diff --git a/drivers/net/niu.c b/drivers/net/niu.c
index 2541321..bfa494e 100644
--- a/drivers/net/niu.c
+++ b/drivers/net/niu.c
@@ -4328,7 +4328,7 @@  static void niu_free_tx_ring_info(struct niu *np, struct tx_ring_info *rp)
 	}
 }
 
-static void niu_free_channels(struct niu *np)
+static void __niu_free_channels(struct niu *np)
 {
 	int i;
 
@@ -4355,6 +4355,13 @@  static void niu_free_channels(struct niu *np)
 	}
 }
 
+static void niu_free_channels(struct niu *np)
+{
+	down_write(&np->rings_rwlock);
+	__niu_free_channels(np);
+	up_write(&np->rings_rwlock);
+}
+
 static int niu_alloc_rx_ring_info(struct niu *np,
 				  struct rx_ring_info *rp)
 {
@@ -4485,7 +4492,7 @@  static void niu_size_rbr(struct niu *np, struct rx_ring_info *rp)
 	rp->rbr_sizes[3] = rp->rbr_block_size;
 }
 
-static int niu_alloc_channels(struct niu *np)
+static int __niu_alloc_channels(struct niu *np)
 {
 	struct niu_parent *parent = np->parent;
 	int first_rx_channel, first_tx_channel;
@@ -4558,10 +4565,21 @@  static int niu_alloc_channels(struct niu *np)
 	return 0;
 
 out_err:
-	niu_free_channels(np);
+	__niu_free_channels(np);
 	return err;
 }
 
+static int niu_alloc_channels(struct niu *np)
+{
+	int ret;
+
+	down_write(&np->rings_rwlock);
+	ret = __niu_alloc_channels(np);
+	up_write(&np->rings_rwlock);
+
+	return ret;
+}
+
 static int niu_tx_cs_sng_poll(struct niu *np, int channel)
 {
 	int limit = 1000;
@@ -6249,6 +6267,7 @@  static void niu_get_rx_stats(struct niu *np)
 	int i;
 
 	pkts = dropped = errors = bytes = 0;
+	down_read(&np->rings_rwlock);
 	for (i = 0; i < np->num_rx_rings; i++) {
 		struct rx_ring_info *rp = &np->rx_rings[i];
 
@@ -6259,6 +6278,7 @@  static void niu_get_rx_stats(struct niu *np)
 		dropped += rp->rx_dropped;
 		errors += rp->rx_errors;
 	}
+	up_read(&np->rings_rwlock);
 	np->dev->stats.rx_packets = pkts;
 	np->dev->stats.rx_bytes = bytes;
 	np->dev->stats.rx_dropped = dropped;
@@ -6271,6 +6291,7 @@  static void niu_get_tx_stats(struct niu *np)
 	int i;
 
 	pkts = errors = bytes = 0;
+	down_read(&np->rings_rwlock);
 	for (i = 0; i < np->num_tx_rings; i++) {
 		struct tx_ring_info *rp = &np->tx_rings[i];
 
@@ -6278,6 +6299,7 @@  static void niu_get_tx_stats(struct niu *np)
 		bytes += rp->tx_bytes;
 		errors += rp->tx_errors;
 	}
+	up_read(&np->rings_rwlock);
 	np->dev->stats.tx_packets = pkts;
 	np->dev->stats.tx_bytes = bytes;
 	np->dev->stats.tx_errors = errors;
@@ -9680,6 +9702,7 @@  static struct net_device * __devinit niu_alloc_and_init(
 
 	np->msg_enable = niu_debug;
 
+	init_rwsem(&np->rings_rwlock);
 	spin_lock_init(&np->lock);
 	INIT_WORK(&np->reset_task, niu_reset_task);
 
diff --git a/drivers/net/niu.h b/drivers/net/niu.h
index a41fa8e..555bae1 100644
--- a/drivers/net/niu.h
+++ b/drivers/net/niu.h
@@ -3265,6 +3265,7 @@  struct niu {
 	const struct niu_ops		*ops;
 	union niu_mac_stats		mac_stats;
 
+	struct rw_semaphore		rings_rwlock;
 	struct rx_ring_info		*rx_rings;
 	struct tx_ring_info		*tx_rings;
 	int				num_rx_rings;