[1/3] vhost: fix skb leak in handle_rx()

Message ID 1512107669-27572-2-git-send-email-wexu@redhat.com
State Changes Requested, archived
Delegated to: David Miller
Series [1/3] vhost: fix skb leak in handle_rx()

Commit Message

Wei Xu Dec. 1, 2017, 5:54 a.m. UTC
From: Wei Xu <wexu@redhat.com>

Matthew found a roughly 40% TCP throughput regression with commit
c67df11f ("vhost_net: try batch dequing from skb array") as discussed
in the following thread:
https://www.mail-archive.com/netdev@vger.kernel.org/msg187936.html

Eventually we figured out that it was an skb leak in handle_rx()
when sending packets to the VM. This usually happens when the guest
cannot drain the vq as fast as vhost fills it; once the traffic jams
up, vhost keeps consuming skbs from the batched rx array even though
there is no headcount left on the vq to send them with, and those
skbs leak.

This can be avoided by making sure we have enough headcount before
actually consuming an skb from the batched rx array, which is done
simply by moving the zero-headcount check ahead of the consume.

Signed-off-by: Wei Xu <wexu@redhat.com>
Reported-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
---
 drivers/vhost/net.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)
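
For readers skimming the diff, the leak the commit message describes
reduces to an ordering problem in the rx loop. The following is a
minimal userspace C model of it (hypothetical helper names such as
consume_batched_skb(); not the actual kernel code), contrasting the
buggy ordering with the fixed one:

#include <stdio.h>
#include <stdlib.h>

/* Toy stand-in for a kernel skb. */
struct sk_buff { int len; };

/* Models vhost_net_buf_consume(): ownership passes to the caller. */
static struct sk_buff *consume_batched_skb(void)
{
	struct sk_buff *skb = malloc(sizeof(*skb));
	if (skb)
		skb->len = 1500;
	return skb;
}

/* Buggy ordering: consume the skb first, check headcount afterwards. */
static void handle_rx_buggy(int headcount)
{
	struct sk_buff *skb = consume_batched_skb();

	if (headcount == 0)
		return;		/* bail out: skb already consumed -> leaked */

	printf("delivered skb, len %d\n", skb->len);
	free(skb);
}

/* Fixed ordering: check headcount before taking ownership of the skb. */
static void handle_rx_fixed(int headcount)
{
	struct sk_buff *skb;

	if (headcount == 0)
		return;		/* nothing consumed yet, so nothing can leak */

	skb = consume_batched_skb();
	if (!skb)
		return;
	printf("delivered skb, len %d\n", skb->len);
	free(skb);
}

int main(void)
{
	handle_rx_buggy(0);	/* leaks one allocation (visible under valgrind) */
	handle_rx_fixed(0);	/* leaks nothing */
	return 0;
}

The fixed variant mirrors what the patch does: the two hunks in the
diff at the end move the consume-and-truncate block from before the
zero-headcount check to after it, so an skb is only taken off the
batched array once vhost knows it has descriptors to send it with.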

Comments

Jason Wang Dec. 1, 2017, 7:11 a.m. UTC | #1
On 2017年12月01日 13:54, wexu@redhat.com wrote:
> From: Wei Xu <wexu@redhat.com>
>
> Matthew found a roughly 40% TCP throughput regression with commit
> c67df11f ("vhost_net: try batch dequing from skb array") as discussed
> in the following thread:
> https://www.mail-archive.com/netdev@vger.kernel.org/msg187936.html
>
> [...]

I suggest reordering this patch to 3/3.

Thanks
Michael S. Tsirkin Dec. 1, 2017, 2:37 p.m. UTC | #2
On Fri, Dec 01, 2017 at 03:11:05PM +0800, Jason Wang wrote:
> On 2017年12月01日 13:54, wexu@redhat.com wrote:
> > [...]
> 
> I suggest reordering this patch to 3/3.
> 
> Thanks

Why? This doesn't cause any new leaks, does it?
Jason Wang Dec. 4, 2017, 7:18 a.m. UTC | #3
On 2017年12月01日 22:37, Michael S. Tsirkin wrote:
> On Fri, Dec 01, 2017 at 03:11:05PM +0800, Jason Wang wrote:
>>
>> On 2017年12月01日 13:54, wexu@redhat.com wrote:
>>> From: Wei Xu <wexu@redhat.com>
>>>
>>> Matthew found a roughly 40% tcp throughput regression with commit
>>> c67df11f(vhost_net: try batch dequing from skb array) as discussed
>>> in the following thread:
>>> https://www.mail-archive.com/netdev@vger.kernel.org/msg187936.html
>>>
>>> Eventually we figured out that it was a skb leak in handle_rx()
>>> when sending packets to the VM. This usually happens when a guest
>>> can not drain out vq as fast as vhost fills in, afterwards it sets
>>> off the traffic jam and leaks skb(s) which occurs as no headcount
>>> to send on the vq from vhost side.
>>>
>>> This can be avoided by making sure we have got enough headcount
>>> before actually consuming a skb from the batched rx array while
>>> transmitting, which is simply done by moving checking the zero
>>> headcount a bit ahead.
>>>
>>> Signed-off-by: Wei Xu <wexu@redhat.com>
>>> Reported-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
>>> ---
>>>    drivers/vhost/net.c | 20 ++++++++++----------
>>>    1 file changed, 10 insertions(+), 10 deletions(-)
>>>
>>> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
>>> index 8d626d7..c7bdeb6 100644
>>> --- a/drivers/vhost/net.c
>>> +++ b/drivers/vhost/net.c
>>> @@ -778,16 +778,6 @@ static void handle_rx(struct vhost_net *net)
>>>    		/* On error, stop handling until the next kick. */
>>>    		if (unlikely(headcount < 0))
>>>    			goto out;
>>> -		if (nvq->rx_array)
>>> -			msg.msg_control = vhost_net_buf_consume(&nvq->rxq);
>>> -		/* On overrun, truncate and discard */
>>> -		if (unlikely(headcount > UIO_MAXIOV)) {
>>> -			iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1);
>>> -			err = sock->ops->recvmsg(sock, &msg,
>>> -						 1, MSG_DONTWAIT | MSG_TRUNC);
>>> -			pr_debug("Discarded rx packet: len %zd\n", sock_len);
>>> -			continue;
>>> -		}
>>>    		/* OK, now we need to know about added descriptors. */
>>>    		if (!headcount) {
>>>    			if (unlikely(vhost_enable_notify(&net->dev, vq))) {
>>> @@ -800,6 +790,16 @@ static void handle_rx(struct vhost_net *net)
>>>    			 * they refilled. */
>>>    			goto out;
>>>    		}
>>> +		if (nvq->rx_array)
>>> +			msg.msg_control = vhost_net_buf_consume(&nvq->rxq);
>>> +		/* On overrun, truncate and discard */
>>> +		if (unlikely(headcount > UIO_MAXIOV)) {
>>> +			iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1);
>>> +			err = sock->ops->recvmsg(sock, &msg,
>>> +						 1, MSG_DONTWAIT | MSG_TRUNC);
>>> +			pr_debug("Discarded rx packet: len %zd\n", sock_len);
>>> +			continue;
>>> +		}
>>>    		/* We don't need to be notified again. */
>>>    		iov_iter_init(&msg.msg_iter, READ, vq->iov, in, vhost_len);
>>>    		fixup = msg.msg_iter;
>> I suggest to reorder this patch to 3/3.
>>
>> Thanks
> Why? This doesn't cause any new leaks, does it?
>

It doesn't, I just think it can ease downstream backporting: patches
2-3 could be missed if somebody bisects to this commit and backports
only patch 1.

Thanks

Patch

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 8d626d7..c7bdeb6 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -778,16 +778,6 @@ static void handle_rx(struct vhost_net *net)
 		/* On error, stop handling until the next kick. */
 		if (unlikely(headcount < 0))
 			goto out;
-		if (nvq->rx_array)
-			msg.msg_control = vhost_net_buf_consume(&nvq->rxq);
-		/* On overrun, truncate and discard */
-		if (unlikely(headcount > UIO_MAXIOV)) {
-			iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1);
-			err = sock->ops->recvmsg(sock, &msg,
-						 1, MSG_DONTWAIT | MSG_TRUNC);
-			pr_debug("Discarded rx packet: len %zd\n", sock_len);
-			continue;
-		}
 		/* OK, now we need to know about added descriptors. */
 		if (!headcount) {
 			if (unlikely(vhost_enable_notify(&net->dev, vq))) {
@@ -800,6 +790,16 @@ static void handle_rx(struct vhost_net *net)
 			 * they refilled. */
 			goto out;
 		}
+		if (nvq->rx_array)
+			msg.msg_control = vhost_net_buf_consume(&nvq->rxq);
+		/* On overrun, truncate and discard */
+		if (unlikely(headcount > UIO_MAXIOV)) {
+			iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1);
+			err = sock->ops->recvmsg(sock, &msg,
+						 1, MSG_DONTWAIT | MSG_TRUNC);
+			pr_debug("Discarded rx packet: len %zd\n", sock_len);
+			continue;
+		}
 		/* We don't need to be notified again. */
 		iov_iter_init(&msg.msg_iter, READ, vq->iov, in, vhost_len);
 		fixup = msg.msg_iter;