
[net-next,8/8] vhost-net: reduce vq polling on tx zerocopy

Message ID 3e57e2cde71a4270a8fef395c31f5f0e8a130c28.1351524502.git.mst@redhat.com
State Superseded, archived
Delegated to: David Miller

Commit Message

Michael S. Tsirkin Oct. 29, 2012, 3:49 p.m. UTC
It seems that to avoid deadlocks it is enough to poll vq before
we are going to use the last buffer.  This should be faster than
c70aa540c7a9f67add11ad3161096fb95233aa2e.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 drivers/vhost/net.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

Comments

Vlad Yasevich Oct. 30, 2012, 3:47 p.m. UTC | #1
On 10/29/2012 11:49 AM, Michael S. Tsirkin wrote:
> It seems that to avoid deadlocks it is enough to poll vq before
>   we are going to use the last buffer.  This should be faster than
> c70aa540c7a9f67add11ad3161096fb95233aa2e.
>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> ---
>   drivers/vhost/net.c | 12 ++++++++++--
>   1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index 8e9de79..3967f82 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -197,8 +197,16 @@ static void vhost_zerocopy_callback(struct ubuf_info *ubuf, int status)
>   {
>   	struct vhost_ubuf_ref *ubufs = ubuf->ctx;
>   	struct vhost_virtqueue *vq = ubufs->vq;
> -
> -	vhost_poll_queue(&vq->poll);
> +	int cnt = atomic_read(&ubufs->kref.refcount);
> +
> +	/*
> +	 * Trigger polling thread if guest stopped submitting new buffers:
> +	 * in this case, the refcount after decrement will eventually reach 1
> +	 * so here it is 2.
> +	 * We also trigger polling periodically after each 16 packets.
> +	 */
> +	if (cnt <= 2 || !(cnt % 16))

Why 16?  Does it make sense to make it configurable?

-vlad

> +		vhost_poll_queue(&vq->poll);
>   	/* set len to mark this desc buffers done DMA */
>   	vq->heads[ubuf->desc].len = status ?
>   		VHOST_DMA_FAILED_LEN : VHOST_DMA_DONE_LEN;
>

Michael S. Tsirkin Oct. 30, 2012, 3:54 p.m. UTC | #2
On Tue, Oct 30, 2012 at 11:47:45AM -0400, Vlad Yasevich wrote:
> On 10/29/2012 11:49 AM, Michael S. Tsirkin wrote:
> >It seems that to avoid deadlocks it is enough to poll vq before
> >  we are going to use the last buffer.  This should be faster than
> >c70aa540c7a9f67add11ad3161096fb95233aa2e.
> >
> >Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> >---
> >  drivers/vhost/net.c | 12 ++++++++++--
> >  1 file changed, 10 insertions(+), 2 deletions(-)
> >
> >diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> >index 8e9de79..3967f82 100644
> >--- a/drivers/vhost/net.c
> >+++ b/drivers/vhost/net.c
> >@@ -197,8 +197,16 @@ static void vhost_zerocopy_callback(struct ubuf_info *ubuf, int status)
> >  {
> >  	struct vhost_ubuf_ref *ubufs = ubuf->ctx;
> >  	struct vhost_virtqueue *vq = ubufs->vq;
> >-
> >-	vhost_poll_queue(&vq->poll);
> >+	int cnt = atomic_read(&ubufs->kref.refcount);
> >+
> >+	/*
> >+	 * Trigger polling thread if guest stopped submitting new buffers:
> >+	 * in this case, the refcount after decrement will eventually reach 1
> >+	 * so here it is 2.
> >+	 * We also trigger polling periodically after each 16 packets.
> >+	 */
> >+	if (cnt <= 2 || !(cnt % 16))
> 
> Why 16?  Does it make sense to make it configurable?
> 
> -vlad

Probably not, but I'll add a comment explaining why.

> >+		vhost_poll_queue(&vq->poll);
> >  	/* set len to mark this desc buffers done DMA */
> >  	vq->heads[ubuf->desc].len = status ?
> >  		VHOST_DMA_FAILED_LEN : VHOST_DMA_DONE_LEN;
> >

Patch

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 8e9de79..3967f82 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -197,8 +197,16 @@ static void vhost_zerocopy_callback(struct ubuf_info *ubuf, int status)
 {
 	struct vhost_ubuf_ref *ubufs = ubuf->ctx;
 	struct vhost_virtqueue *vq = ubufs->vq;
-
-	vhost_poll_queue(&vq->poll);
+	int cnt = atomic_read(&ubufs->kref.refcount);
+
+	/*
+	 * Trigger polling thread if guest stopped submitting new buffers:
+	 * in this case, the refcount after decrement will eventually reach 1
+	 * so here it is 2.
+	 * We also trigger polling periodically after each 16 packets.
+	 */
+	if (cnt <= 2 || !(cnt % 16))
+		vhost_poll_queue(&vq->poll);
 	/* set len to mark this desc buffers done DMA */
 	vq->heads[ubuf->desc].len = status ?
 		VHOST_DMA_FAILED_LEN : VHOST_DMA_DONE_LEN;
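
As a rough illustration of the wakeup pattern the new condition produces (and of the trade-off behind the fixed divisor of 16 questioned above), the small userspace C program below simulates a draining refcount the way vhost_zerocopy_callback() would observe it. The starting refcount of 64 and the would_poll() helper are invented for this sketch and are not part of the driver.

/*
 * Standalone simulation only -- not kernel code.  would_poll() mirrors
 * the condition added by the patch: wake the vhost thread when the
 * refcount is about to drain (cnt <= 2) or on every 16th completion.
 */
#include <stdio.h>

static int would_poll(int cnt)
{
	return cnt <= 2 || !(cnt % 16);
}

int main(void)
{
	int start = 64;		/* assumed initial refcount: in-flight buffers plus base ref */
	int wakeups = 0;

	/* Completions arrive while the guest submits nothing new, so the
	 * callback sees the refcount drain: start, start - 1, ..., 2. */
	for (int cnt = start; cnt >= 2; cnt--) {
		if (would_poll(cnt)) {
			printf("cnt=%d -> vhost_poll_queue()\n", cnt);
			wakeups++;
		}
	}
	printf("%d wakeups for %d completions\n", wakeups, start - 1);
	return 0;
}

Under these assumptions the condition fires at cnt = 64, 48, 32, 16 and 2: roughly one wakeup per 16 completed buffers, plus a guaranteed wakeup as the last in-flight buffer completes, instead of one wakeup per completion as before the patch.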