[for-2.4] virtio-net: Flush incoming queues when DRIVER_OK is being set

Message ID 1436929347-11991-1-git-send-email-famz@redhat.com
State New

Commit Message

Fam Zheng July 15, 2015, 3:02 a.m. UTC
This patch fixes a network hang after "stop" then "cont" while network
packets keep arriving.

Tested both manually (tap, host pinging guest) and with Jason's qtest
series (plus his "[PATCH 2.4] socket: pass correct size in
net_socket_send()" fix).

As virtio_net_set_status is called both when the guest driver sets the
status byte and when the VM state changes, it is a good opportunity to
flush queued packets.

This is necessary because during a VM stop the backend (e.g. tap) stops
rx processing once .can_receive returns false, and does not resume until
the queue is explicitly flushed or purged.
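
For reference, the net-layer send path being described looks roughly
like this (a simplified sketch of net/queue.c of that era; the names
are real but details and error handling are elided):

    ssize_t qemu_net_queue_send(NetQueue *queue, NetClientState *sender,
                                unsigned flags, const uint8_t *data,
                                size_t size, NetPacketSent *sent_cb)
    {
        ssize_t ret;

        /* qemu_can_send_packet() asks the peer's .can_receive; if it
         * says no, the packet is parked on the queue and nothing
         * retries delivery until qemu_flush_queued_packets() or a
         * purge. */
        if (queue->delivering || !qemu_can_send_packet(sender)) {
            qemu_net_queue_append(queue, sender, flags, data, size, sent_cb);
            return 0;
        }

        ret = qemu_net_queue_deliver(queue, sender, flags, data, size);
        if (ret == 0) {
            qemu_net_queue_append(queue, sender, flags, data, size, NULL);
            return 0;
        }

        /* A successful delivery also retries anything queued earlier. */
        qemu_net_queue_flush(queue);
        return ret;
    }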

The other interesting condition in .can_receive, virtio_queue_ready(),
is handled by virtio_net_handle_rx() when the guest kicks; the third
condition, an invalid queue index, doesn't need flushing.
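
For reference, .can_receive of that era looks roughly like this
(abbreviated from hw/net/virtio-net.c; comments added to map each check
to the mechanism that unblocks it):

    static int virtio_net_can_receive(NetClientState *nc)
    {
        VirtIONet *n = qemu_get_nic_opaque(nc);
        VirtIODevice *vdev = VIRTIO_DEVICE(n);
        VirtIONetQueue *q = virtio_net_get_subqueue(nc);

        /* Cleared on "stop"; packets queued meanwhile are what the
         * flush in virtio_net_set_status() releases on "cont". */
        if (!vdev->vm_running) {
            return 0;
        }

        /* Invalid queue index: nothing to flush. */
        if (nc->queue_index >= n->curr_queues) {
            return 0;
        }

        /* Re-evaluated when the guest kicks (virtio_net_handle_rx)
         * or writes the status byte (virtio_net_set_status). */
        if (!virtio_queue_ready(q->rx_vq) ||
            !(vdev->status & VIRTIO_CONFIG_S_DRIVER_OK)) {
            return 0;
        }

        return 1;
    }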

Signed-off-by: Fam Zheng <famz@redhat.com>

---

v2: Limit to "virtio_net_started(n, queue_status) &&
    !n->vhost_started". [MST]
---
 hw/net/virtio-net.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

Comments

Wen Congyang July 15, 2015, 6:52 a.m. UTC | #1
On 07/15/2015 11:02 AM, Fam Zheng wrote:
> This patch fixes a network hang after "stop" then "cont" while network
> packets keep arriving.

I think it also fixes a network hang when the guest boots while network
packets keep arriving.

Thanks
Wen Congyang

> [...]

Jason Wang July 15, 2015, 7:24 a.m. UTC | #2
On 07/15/2015 11:02 AM, Fam Zheng wrote:
> This patch fixes a network hang after "stop" then "cont" while network
> packets keep arriving.
>
> Tested both manually (tap, host pinging guest) and with Jason's qtest
> series (plus his "[PATCH 2.4] socket: pass correct size in
> net_socket_send()" fix).
>
> As virtio_net_set_status is called both when the guest driver sets the
> status byte and when the VM state changes, it is a good opportunity to
> flush queued packets.
>
> This is necessary because during a VM stop the backend (e.g. tap) stops
> rx processing once .can_receive returns false, and does not resume until
> the queue is explicitly flushed or purged.
>
> The other interesting condition in .can_receive, virtio_queue_ready(),
> is handled by virtio_net_handle_rx() when the guest kicks; the third
> condition, an invalid queue index, doesn't need flushing.
>
> Signed-off-by: Fam Zheng <famz@redhat.com>
>
> ---

Reviewed-by: Jason Wang <jasowang@redhat.com>

btw, there's another condition in can_receive() which is suspicious:

...
    if (nc->queue_index >= n->curr_queues) {
        return 0;
    }
...

This requires the queue to be flushed when the number of queues is
increased. But it looks unnecessary, since neither the guest nor vhost
cares about this, so I think we could safely remove this condition.
Maybe a patch on top.
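
Such a follow-up could be as small as this (a sketch only, untested;
hunk offsets omitted):

    --- a/hw/net/virtio-net.c
    +++ b/hw/net/virtio-net.c
    @@ static int virtio_net_can_receive(NetClientState *nc)
    -    if (nc->queue_index >= n->curr_queues) {
    -        return 0;
    -    }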

Thanks

> [...]

Patch

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index d728233..24c7be1 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -162,6 +162,8 @@ static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
     virtio_net_vhost_status(n, status);
 
     for (i = 0; i < n->max_queues; i++) {
+        NetClientState *ncs = qemu_get_subqueue(n->nic, i);
+        bool queue_started;
         q = &n->vqs[i];
 
         if ((!n->multiqueue && i != 0) || i >= n->curr_queues) {
@@ -169,12 +171,18 @@ static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
         } else {
             queue_status = status;
         }
+        queue_started =
+            virtio_net_started(n, queue_status) && !n->vhost_started;
+
+        if (queue_started) {
+            qemu_flush_queued_packets(ncs);
+        }
 
         if (!q->tx_waiting) {
             continue;
         }
 
-        if (virtio_net_started(n, queue_status) && !n->vhost_started) {
+        if (queue_started) {
             if (q->tx_timer) {
                 timer_mod(q->tx_timer,
                                qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + n->tx_timeout);