diff mbox

virtio_net: set/cancel work on ndo_open/ndo_stop

Message ID 87pqf7fyld.fsf@rustcorp.com.au
State Accepted, archived
Delegated to: David Miller
Headers show

Commit Message

Rusty Russell Dec. 29, 2011, 10:42 a.m. UTC
Michael S. Tsirkin noticed that we could run the refill work after
ndo_close, which can re-enable napi - we don't disable it until
virtnet_remove.  This is clearly wrong, so move the workqueue control
to ndo_open and ndo_stop (aka. virtnet_open and virtnet_close).

One subtle point: virtnet_probe() could simply fail if it couldn't
allocate a receive buffer, but that's less polite in virtnet_open() so
we schedule a refill as we do in the normal receive path if we run out
of memory.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
---
 drivers/net/virtio_net.c |   17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Comments

David Miller Dec. 29, 2011, 7:38 p.m. UTC | #1
Michael will you integrate these patches from Rusty and submit them
to me along with other stuff you might have?

Or would you like me to apply them to net-next directly?

Thanks.
Michael S. Tsirkin Dec. 29, 2011, 8:31 p.m. UTC | #2
On Thu, Dec 29, 2011 at 02:38:06PM -0500, David Miller wrote:
> 
> Michael will you integrate these patches from Rusty and submit them
> to me along with other stuff you might have?
> 
> Or would you like me to apply them to net-next directly?
> 
> Thanks.

For personal reasons, my availability in this merge cycle is limited, so
net-next directly's better.

Thanks,
David Miller Dec. 29, 2011, 9:44 p.m. UTC | #3
From: "Michael S. Tsirkin" <mst@redhat.com>
Date: Thu, 29 Dec 2011 22:31:50 +0200

> On Thu, Dec 29, 2011 at 02:38:06PM -0500, David Miller wrote:
>> 
>> Michael will you integrate these patches from Rusty and submit them
>> to me along with other stuff you might have?
>> 
>> Or would you like me to apply them to net-next directly?
>> 
>> Thanks.
> 
> For personal reasons, my availability in this merge cycle is limited, so
> net-next directly's better.

Ok.
Michael S. Tsirkin April 4, 2012, 9:32 a.m. UTC | #4
On Thu, Dec 29, 2011 at 09:12:38PM +1030, Rusty Russell wrote:
> Michael S. Tsirkin noticed that we could run the refill work after
> ndo_close, which can re-enable napi - we don't disable it until
> virtnet_remove.  This is clearly wrong, so move the workqueue control
> to ndo_open and ndo_stop (aka. virtnet_open and virtnet_close).
> 
> One subtle point: virtnet_probe() could simply fail if it couldn't
> allocate a receive buffer, but that's less polite in virtnet_open() so
> we schedule a refill as we do in the normal receive path if we run out
> of memory.
> 
> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

Doh.
napi_disable does not prevent the following
napi_schedule, does it?

Can someone confirm that I am not seeing things please?

And this means this hack does not work:
try_fill_recv can still run in parallel with
napi, corrupting the vq.

I suspect we need to resurrect a patch that used a
dedicated flag to avoid this race.

Comments?

> ---
>  drivers/net/virtio_net.c |   17 +++++++++++++----
>  1 file changed, 13 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -439,7 +439,13 @@ static int add_recvbuf_mergeable(struct 
>  	return err;
>  }
>  
> -/* Returns false if we couldn't fill entirely (OOM). */
> +/*
> + * Returns false if we couldn't fill entirely (OOM).
> + *
> + * Normally run in the receive path, but can also be run from ndo_open
> + * before we're receiving packets, or from refill_work which is
> + * careful to disable receiving (using napi_disable).
> + */
>  static bool try_fill_recv(struct virtnet_info *vi, gfp_t gfp)
>  {
>  	int err;
> @@ -719,6 +725,10 @@ static int virtnet_open(struct net_devic
>  {
>  	struct virtnet_info *vi = netdev_priv(dev);
>  
> +	/* Make sure we have some buffers: if oom use wq. */
> +	if (!try_fill_recv(vi, GFP_KERNEL))
> +		schedule_delayed_work(&vi->refill, 0);
> +
>  	virtnet_napi_enable(vi);
>  	return 0;
>  }
> @@ -772,6 +782,8 @@ static int virtnet_close(struct net_devi
>  {
>  	struct virtnet_info *vi = netdev_priv(dev);
>  
> +	/* Make sure refill_work doesn't re-enable napi! */
> +	cancel_delayed_work_sync(&vi->refill);
>  	napi_disable(&vi->napi);
>  
>  	return 0;
> @@ -1082,7 +1094,6 @@ static int virtnet_probe(struct virtio_d
>  
>  unregister:
>  	unregister_netdev(dev);
> -	cancel_delayed_work_sync(&vi->refill);
>  free_vqs:
>  	vdev->config->del_vqs(vdev);
>  free_stats:
> @@ -1121,9 +1132,7 @@ static void __devexit virtnet_remove(str
>  	/* Stop all the virtqueues. */
>  	vdev->config->reset(vdev);
>  
> -
>  	unregister_netdev(vi->dev);
> -	cancel_delayed_work_sync(&vi->refill);
>  
>  	/* Free unused buffers in both send and recv, if any. */
>  	free_unused_bufs(vi);
Michael S. Tsirkin April 4, 2012, 9:47 a.m. UTC | #5
On Wed, Apr 04, 2012 at 12:32:29PM +0300, Michael S. Tsirkin wrote:
> On Thu, Dec 29, 2011 at 09:12:38PM +1030, Rusty Russell wrote:
> > Michael S. Tsirkin noticed that we could run the refill work after
> > ndo_close, which can re-enable napi - we don't disable it until
> > virtnet_remove.  This is clearly wrong, so move the workqueue control
> > to ndo_open and ndo_stop (aka. virtnet_open and virtnet_close).
> > 
> > One subtle point: virtnet_probe() could simply fail if it couldn't
> > allocate a receive buffer, but that's less polite in virtnet_open() so
> > we schedule a refill as we do in the normal receive path if we run out
> > of memory.
> > 
> > Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
> 
> Doh.
> napi_disable does not prevent the following
> napi_schedule, does it?
> 
> Can someone confirm that I am not seeing things please?

Yes, I *was* seeing things. After napi_disable, NAPI_STATE_SCHED
is set, so napi_schedule does nothing.
Sorry about the noise.

> And this means this hack does not work:
> try_fill_recv can still run in parallel with
> napi, corrupting the vq.
> 
> I suspect we need to resurrect a patch that used a
> dedicated flag to avoid this race.
> 
> Comments?
> 
> > ---
> >  drivers/net/virtio_net.c |   17 +++++++++++++----
> >  1 file changed, 13 insertions(+), 4 deletions(-)
> > 
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -439,7 +439,13 @@ static int add_recvbuf_mergeable(struct 
> >  	return err;
> >  }
> >  
> > -/* Returns false if we couldn't fill entirely (OOM). */
> > +/*
> > + * Returns false if we couldn't fill entirely (OOM).
> > + *
> > + * Normally run in the receive path, but can also be run from ndo_open
> > + * before we're receiving packets, or from refill_work which is
> > + * careful to disable receiving (using napi_disable).
> > + */
> >  static bool try_fill_recv(struct virtnet_info *vi, gfp_t gfp)
> >  {
> >  	int err;
> > @@ -719,6 +725,10 @@ static int virtnet_open(struct net_devic
> >  {
> >  	struct virtnet_info *vi = netdev_priv(dev);
> >  
> > +	/* Make sure we have some buffers: if oom use wq. */
> > +	if (!try_fill_recv(vi, GFP_KERNEL))
> > +		schedule_delayed_work(&vi->refill, 0);
> > +
> >  	virtnet_napi_enable(vi);
> >  	return 0;
> >  }
> > @@ -772,6 +782,8 @@ static int virtnet_close(struct net_devi
> >  {
> >  	struct virtnet_info *vi = netdev_priv(dev);
> >  
> > +	/* Make sure refill_work doesn't re-enable napi! */
> > +	cancel_delayed_work_sync(&vi->refill);
> >  	napi_disable(&vi->napi);
> >  
> >  	return 0;
> > @@ -1082,7 +1094,6 @@ static int virtnet_probe(struct virtio_d
> >  
> >  unregister:
> >  	unregister_netdev(dev);
> > -	cancel_delayed_work_sync(&vi->refill);
> >  free_vqs:
> >  	vdev->config->del_vqs(vdev);
> >  free_stats:
> > @@ -1121,9 +1132,7 @@ static void __devexit virtnet_remove(str
> >  	/* Stop all the virtqueues. */
> >  	vdev->config->reset(vdev);
> >  
> > -
> >  	unregister_netdev(vi->dev);
> > -	cancel_delayed_work_sync(&vi->refill);
> >  
> >  	/* Free unused buffers in both send and recv, if any. */
> >  	free_unused_bufs(vi);
Jason Wang April 5, 2012, 6:32 a.m. UTC | #6
On 04/04/2012 05:32 PM, Michael S. Tsirkin wrote:
> On Thu, Dec 29, 2011 at 09:12:38PM +1030, Rusty Russell wrote:
>> Michael S. Tsirkin noticed that we could run the refill work after
>> ndo_close, which can re-enable napi - we don't disable it until
>> virtnet_remove.  This is clearly wrong, so move the workqueue control
>> to ndo_open and ndo_stop (aka. virtnet_open and virtnet_close).
>>
>> One subtle point: virtnet_probe() could simply fail if it couldn't
>> allocate a receive buffer, but that's less polite in virtnet_open() so
>> we schedule a refill as we do in the normal receive path if we run out
>> of memory.
>>
>> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
> Doh.
> napi_disable does not prevent the following
> napi_schedule, does it?
>
> Can someone confirm that I am not seeing things please?
Looks like napi_disable() does prevent subsequent scheduling, as
napi_schedule_prep() returns true only when there's a 0 -> 1 transition
of the NAPI_STATE_SCHED bit.

Patch

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -439,7 +439,13 @@  static int add_recvbuf_mergeable(struct 
 	return err;
 }
 
-/* Returns false if we couldn't fill entirely (OOM). */
+/*
+ * Returns false if we couldn't fill entirely (OOM).
+ *
+ * Normally run in the receive path, but can also be run from ndo_open
+ * before we're receiving packets, or from refill_work which is
+ * careful to disable receiving (using napi_disable).
+ */
 static bool try_fill_recv(struct virtnet_info *vi, gfp_t gfp)
 {
 	int err;
@@ -719,6 +725,10 @@  static int virtnet_open(struct net_devic
 {
 	struct virtnet_info *vi = netdev_priv(dev);
 
+	/* Make sure we have some buffers: if oom use wq. */
+	if (!try_fill_recv(vi, GFP_KERNEL))
+		schedule_delayed_work(&vi->refill, 0);
+
 	virtnet_napi_enable(vi);
 	return 0;
 }
@@ -772,6 +782,8 @@  static int virtnet_close(struct net_devi
 {
 	struct virtnet_info *vi = netdev_priv(dev);
 
+	/* Make sure refill_work doesn't re-enable napi! */
+	cancel_delayed_work_sync(&vi->refill);
 	napi_disable(&vi->napi);
 
 	return 0;
@@ -1082,7 +1094,6 @@  static int virtnet_probe(struct virtio_d
 
 unregister:
 	unregister_netdev(dev);
-	cancel_delayed_work_sync(&vi->refill);
 free_vqs:
 	vdev->config->del_vqs(vdev);
 free_stats:
@@ -1121,9 +1132,7 @@  static void __devexit virtnet_remove(str
 	/* Stop all the virtqueues. */
 	vdev->config->reset(vdev);
 
-
 	unregister_netdev(vi->dev);
-	cancel_delayed_work_sync(&vi->refill);
 
 	/* Free unused buffers in both send and recv, if any. */
 	free_unused_bufs(vi);