
[net-next] virtio_net: Disable interrupts if napi_complete_done rescheduled napi

Message ID 1512620115-7569-1-git-send-email-makita.toshiaki@lab.ntt.co.jp
State Accepted, archived
Delegated to: David Miller
Series: [net-next] virtio_net: Disable interrupts if napi_complete_done rescheduled napi

Commit Message

Toshiaki Makita Dec. 7, 2017, 4:15 a.m. UTC
Since commit 39e6c8208d7b ("net: solve a NAPI race"), napi can be
rescheduled within napi_complete_done() even in the non-busypoll case.
However, virtnet_poll() always enables interrupts before completing, and
when napi is rescheduled within napi_complete_done() it does not disable
interrupts again. This causes extra interrupts when the event idx
feature is disabled.

According to commit cbdadbbf0c79 ("virtio_net: fix race in RX VQ
processing"), we cannot move virtqueue_enable_cb_prepare() to after
NAPI_STATE_SCHED is cleared, so disable interrupts again if
napi_complete_done() returns false.
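
For clarity, the resulting virtqueue_napi_complete() can be sketched as
follows. This mirrors the hunk below; the explanatory comments are
editorial, and the parameter list is inferred from the surrounding
context lines.

static void virtqueue_napi_complete(struct napi_struct *napi,
				    struct virtqueue *vq, int processed)
{
	int opaque;

	/* Re-enable the callback before NAPI_STATE_SCHED can be
	 * cleared (see commit cbdadbbf0c79); opaque records the ring
	 * state so a racing buffer can be detected below.
	 */
	opaque = virtqueue_enable_cb_prepare(vq);
	if (napi_complete_done(napi, processed)) {
		/* napi really completed; if buffers slipped in while
		 * re-enabling, poll again rather than wait for an
		 * interrupt.
		 */
		if (unlikely(virtqueue_poll(vq, opaque)))
			virtqueue_napi_schedule(napi, vq);
	} else {
		/* napi_complete_done() rescheduled napi, so polling
		 * continues; disable the callback again to avoid
		 * spurious interrupts in the meantime.
		 */
		virtqueue_disable_cb(vq);
	}
}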

Tested with the vhost-user backend of OVS 2.7 on the host, which does
not have the event idx feature.
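
(The 1472-byte UDP payload fills a standard 1500-byte MTU exactly:
1472 + 8 bytes of UDP header + 20 bytes of IP header = 1500, so no IP
fragmentation occurs.)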

* Before patch:

$ netperf -t UDP_STREAM -H 192.168.150.253 -l 60 -- -m 1472
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.150.253 () port 0 AF_INET
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

212992    1472   60.00     32763206      0    6430.32
212992           60.00     23384299           4589.56

Interrupts on guest: 9872369
Packets/interrupt:   2.37
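
(Packets/interrupt is the received-message count divided by the guest
interrupt count: 23384299 / 9872369 ~= 2.37.)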

* After patch:

$ netperf -t UDP_STREAM -H 192.168.150.253 -l 60 -- -m 1472
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.150.253 () port 0 AF_INET
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

212992    1472   60.00     32794646      0    6436.49
212992           60.00     32793501           6436.27

Interrupts on guest: 4941299
Packets/interrupt:   6.64
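
(Again, 32793501 received messages / 4941299 interrupts ~= 6.64; note
that the receive side now keeps up with the send side.)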

Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
---
 drivers/net/virtio_net.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

Comments

Michael S. Tsirkin Dec. 7, 2017, 5:09 a.m. UTC | #1
On Thu, Dec 07, 2017 at 01:15:15PM +0900, Toshiaki Makita wrote:
> Since commit 39e6c8208d7b ("net: solve a NAPI race"), napi can be
> rescheduled within napi_complete_done() even in the non-busypoll case.
...
> Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>

Acked-by: Michael S. Tsirkin <mst@redhat.com>

It might make sense in net rather than net-next: since tx napi
regressed performance in some configs, this might bring it back at
least partially.
Jason, what do you think?

Jason Wang Dec. 7, 2017, 7:08 a.m. UTC | #2
On Dec. 7, 2017 13:09, Michael S. Tsirkin wrote:
> On Thu, Dec 07, 2017 at 01:15:15PM +0900, Toshiaki Makita wrote:
>> Since commit 39e6c8208d7b ("net: solve a NAPI race"), napi can be
>> rescheduled within napi_complete_done() even in the non-busypoll case.
...
>> Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
> Acked-by: Michael S. Tsirkin <mst@redhat.com>
>
> It might make sense in net rather than net-next: since tx napi
> regressed performance in some configs, this might bring it back at
> least partially.
> Jason, what do you think?

Not sure; the regression I saw was tested with event idx on, and
virtqueue_disable_cb() does almost nothing for event idx (or is even a
little bit slower).

The patch looks good.

Acked-by: Jason Wang <jasowang@redhat.com>


David Miller Dec. 8, 2017, 6:19 p.m. UTC | #3
From: Jason Wang <jasowang@redhat.com>
Date: Thu, 7 Dec 2017 15:08:45 +0800

> 
> 
> On Dec. 7, 2017 13:09, Michael S. Tsirkin wrote:
>> On Thu, Dec 07, 2017 at 01:15:15PM +0900, Toshiaki Makita wrote:
>>> Since commit 39e6c8208d7b ("net: solve a NAPI race"), napi can be
>>> rescheduled within napi_complete_done() even in the non-busypoll case.
...
>>> Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
>> Acked-by: Michael S. Tsirkin <mst@redhat.com>
>>
>> It might make sense in net rather than net-next: since tx napi
>> regressed performance in some configs, this might bring it back at
>> least partially.
>> Jason, what do you think?
> 
> Not sure; the regression I saw was tested with event idx on, and
> virtqueue_disable_cb() does almost nothing for event idx (or is even
> a little bit slower).
> 
> The patch looks good.
> 
> Acked-by: Jason Wang <jasowang@redhat.com>

I'm going to put this into net-next for now.

Patch

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 19a985e..c0db48d 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -261,9 +261,12 @@ static void virtqueue_napi_complete(struct napi_struct *napi,
 	int opaque;
 
 	opaque = virtqueue_enable_cb_prepare(vq);
-	if (napi_complete_done(napi, processed) &&
-	    unlikely(virtqueue_poll(vq, opaque)))
-		virtqueue_napi_schedule(napi, vq);
+	if (napi_complete_done(napi, processed)) {
+		if (unlikely(virtqueue_poll(vq, opaque)))
+			virtqueue_napi_schedule(napi, vq);
+	} else {
+		virtqueue_disable_cb(vq);
+	}
 }
 
 static void skb_xmit_done(struct virtqueue *vq)