
SRU: [Lucid] KVM: add schedule check to napi_enable call

Message ID 135186.75396.qm@web110310.mail.gq1.yahoo.com
State Accepted

Commit Message

Ken Stailey Feb. 8, 2011, 9:43 p.m. UTC
--- On Tue, 2/8/11, Bruce Rogers <brogers@novell.com> wrote:

> From: Bruce Rogers <brogers@novell.com>
> Subject: Re: SRU: [Lucid] KVM: add schedule check to napi_enable call
> To: "Ken Stailey" <kstailey@yahoo.com>
> Cc: "Stefan Bader" <stefan.bader@canonical.com>, kernel-team@lists.ubuntu.com
> Date: Tuesday, February 8, 2011, 1:24 PM
> >>> On 2/7/2011 at 09:23 AM, Ken Stailey <kstailey@yahoo.com> wrote:
> > Hi Bruce,
> > 
> > I would like to thank you for your contribution to virtio-net,
> > specifically the "[PATCH] KVM: add schedule check to napi_enable call"
> > as it appears to stabilize virtio-net on Ubuntu Lucid 10.04 LTS.
> >
> > Stefan Bader is curious to know why that patch is not appearing in
> > upstream linux kernels.  Can you offer any explanation?
> > 
> > Thank you,
> > Ken Stailey
> > 
> 
> I thought it had gone upstream, but apparently not. I was
> working with Greg K.H. on this and it must have fallen
> through the cracks between the two of us.
> 
> I apologize for not following through with that better.
> 
> Bruce
> 

I touched up the patch for 2.6.38.

Patch

--- drivers/net/virtio_net.c.orig	2011-02-08 14:34:51.444099190 -0500
+++ drivers/net/virtio_net.c	2011-02-08 14:18:00.484400134 -0500
@@ -446,6 +446,20 @@ 
 	}
 }
 
+static void virtnet_napi_enable(struct virtnet_info *vi)
+{
+	napi_enable(&vi->napi);
+
+	/* If all buffers were filled by other side before we napi_enabled, we
+	 * won't get another interrupt, so process any outstanding packets
+	 * now.  virtnet_poll wants to re-enable the queue, so we disable here.
+	 * We synchronize against interrupts via NAPI_STATE_SCHED */
+	if (napi_schedule_prep(&vi->napi)) {
+		virtqueue_disable_cb(vi->rvq);
+		__napi_schedule(&vi->napi);
+	}
+}
+
 static void refill_work(struct work_struct *work)
 {
 	struct virtnet_info *vi;
@@ -454,7 +468,7 @@ 
 	vi = container_of(work, struct virtnet_info, refill.work);
 	napi_disable(&vi->napi);
 	still_empty = !try_fill_recv(vi, GFP_KERNEL);
-	napi_enable(&vi->napi);
+	virtnet_napi_enable(vi);
 
 	/* In theory, this can happen: if we don't get any buffers in
 	 * we will *never* try to fill again. */
@@ -638,16 +652,7 @@ 
 {
 	struct virtnet_info *vi = netdev_priv(dev);
 
-	napi_enable(&vi->napi);
-
-	/* If all buffers were filled by other side before we napi_enabled, we
-	 * won't get another interrupt, so process any outstanding packets
-	 * now.  virtnet_poll wants re-enable the queue, so we disable here.
-	 * We synchronize against interrupts via NAPI_STATE_SCHED */
-	if (napi_schedule_prep(&vi->napi)) {
-		virtqueue_disable_cb(vi->rvq);
-		__napi_schedule(&vi->napi);
-	}
+	virtnet_napi_enable(vi);
 	return 0;
 }
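
The race this closes is easy to restate: if the host fills every receive buffer in the window between the driver's last poll and its re-enabling of NAPI, the notification for that work can already be spent, so no further interrupt arrives and the queue stalls. The patch therefore re-checks for work after napi_enable(); napi_schedule_prep() atomically claims NAPI_STATE_SCHED, so the manual schedule and the interrupt path cannot both run the poll loop. Below is a minimal userspace sketch of the same enable-then-recheck discipline. Everything in it (producer/consumer, pending, notify_enabled, pthreads standing in for interrupts) is an illustrative stand-in, not kernel API.

/* Illustrative analogue of the race virtnet_napi_enable() closes.
 * The "device" (producer) only raises a notification if notifications
 * were enabled at the instant it enqueued work, like a virtqueue with
 * callbacks suppressed. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NITEMS 100000

static atomic_int pending;          /* buffers the "device" has filled   */
static atomic_bool notify_enabled;  /* analogue of enabled callbacks/IRQ */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t kick = PTHREAD_COND_INITIALIZER;

/* Device side: enqueue work; notify only if notifications are on. */
static void *producer(void *unused)
{
	(void)unused;
	for (int i = 0; i < NITEMS; i++) {
		atomic_fetch_add(&pending, 1);
		if (atomic_load(&notify_enabled)) {
			pthread_mutex_lock(&lock);
			pthread_cond_signal(&kick);
			pthread_mutex_unlock(&lock);
		}
	}
	return NULL;
}

/* Driver side: drain with notifications off, re-enable, then re-check
 * for work that slipped in before the enable -- the step the patch's
 * napi_schedule_prep() call performs. */
static void *consumer(void *unused)
{
	int drained = 0;
	(void)unused;
	while (drained < NITEMS) {
		atomic_store(&notify_enabled, false);  /* virtqueue_disable_cb() */
		int n;
		while ((n = atomic_exchange(&pending, 0)) > 0)
			drained += n;                  /* the poll loop */
		pthread_mutex_lock(&lock);
		atomic_store(&notify_enabled, true);   /* napi_enable() */
		/* Without this re-check, a buffer filled between the drain
		 * and the enable produces no signal and we sleep forever. */
		while (atomic_load(&pending) == 0 && drained < NITEMS)
			pthread_cond_wait(&kick, &lock);
		pthread_mutex_unlock(&lock);
	}
	printf("drained %d buffers without stalling\n", drained);
	return NULL;
}

int main(void)
{
	pthread_t p, c;
	pthread_create(&c, NULL, consumer, NULL);
	pthread_create(&p, NULL, producer, NULL);
	pthread_join(p, NULL);
	pthread_join(c, NULL);
	return 0;
}

If the consumer slept unconditionally after re-enabling notifications, an item enqueued between the drain and the enable would never be signalled and the thread would hang, which is exactly the stall seen with the plain napi_enable() calls this patch replaces.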