[ovs-dev,v3,1/3] dpif-netdev: Only poll enabled vhost queues.

Message ID: 1556205730-10204-1-git-send-email-david.marchand@redhat.com
State: Accepted
Series: [ovs-dev,v3,1/3] dpif-netdev: Only poll enabled vhost queues.

Commit Message

David Marchand April 25, 2019, 3:22 p.m. UTC
We currently poll all available queues based on the max queue count
exchanged with the vhost peer and rely on the vhost library in DPDK to
check the vring status beneath.
This can lead to some overhead when we have a lot of unused queues.

To enhance the situation, we can skip the disabled queues.
On rxq notifications, we make use of the netdev's change_seq number so
that the pmd thread main loop can cache the queue state periodically.
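
Condensed, the pmd main loop change boils down to the following (a simplified
excerpt of the dpif-netdev hunks below; emc handling and packet accounting
are omitted):

    for (i = 0; i < poll_cnt; i++) {
        if (!poll_list[i].rxq_enabled) {
            /* Queue cached as disabled: skip it entirely. */
            continue;
        }
        dp_netdev_process_rxq_port(pmd, poll_list[i].rxq,
                                   poll_list[i].port_no);
    }

    /* In the same periodic block that checks for reloads: re-read the
     * cached queue state only when the netdev signalled a change. */
    for (i = 0; i < poll_cnt; i++) {
        uint64_t current_seq =
                 netdev_get_change_seq(poll_list[i].rxq->port->netdev);

        if (poll_list[i].change_seq != current_seq) {
            poll_list[i].change_seq = current_seq;
            poll_list[i].rxq_enabled =
                                 netdev_rxq_enabled(poll_list[i].rxq->rx);
        }
    }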

$ ovs-appctl dpif-netdev/pmd-rxq-show
pmd thread numa_id 0 core_id 1:
  isolated : true
  port: dpdk0             queue-id:  0 (enabled)   pmd usage:  0 %
pmd thread numa_id 0 core_id 2:
  isolated : true
  port: vhost1            queue-id:  0 (enabled)   pmd usage:  0 %
  port: vhost3            queue-id:  0 (enabled)   pmd usage:  0 %
pmd thread numa_id 0 core_id 15:
  isolated : true
  port: dpdk1             queue-id:  0 (enabled)   pmd usage:  0 %
pmd thread numa_id 0 core_id 16:
  isolated : true
  port: vhost0            queue-id:  0 (enabled)   pmd usage:  0 %
  port: vhost2            queue-id:  0 (enabled)   pmd usage:  0 %

$ while true; do
  ovs-appctl dpif-netdev/pmd-rxq-show |awk '
  /port: / {
    tot++;
    if ($5 == "(enabled)") {
      en++;
    }
  }
  END {
    print "total: " tot ", enabled: " en
  }'
  sleep 1
done

total: 6, enabled: 2
total: 6, enabled: 2
...

 # Started vm, virtio devices are bound to kernel driver which enables
 # F_MQ + all queue pairs
total: 6, enabled: 2
total: 66, enabled: 66
...

 # Unbound vhost0 and vhost1 from the kernel driver
total: 66, enabled: 66
total: 66, enabled: 34
...

 # Configured kernel bound devices to use only 1 queue pair
total: 66, enabled: 34
total: 66, enabled: 19
total: 66, enabled: 4
...

 # While rebooting the vm
total: 66, enabled: 4
total: 66, enabled: 2
...
total: 66, enabled: 66
...

 # After shutting down the vm
total: 66, enabled: 66
total: 66, enabled: 2

Signed-off-by: David Marchand <david.marchand@redhat.com>
---

Changes since v2:
- Ilya comments
- Kevin comments on "dpif-netdev/pmd-rxq-show" output
- Updated unit tests accordingly

Changes since v1:
- only indicate disabled queues in dpif-netdev/pmd-rxq-show output
- Ilya comments
  - no need for a struct as we only need a boolean per rxq
  - "rx_q" is generic, while we only care for this in vhost case,
    renamed as "vhost_rxq_enabled",
  - add missing rte_free on allocation error,
  - vhost_rxq_enabled is freed in vhost destruct only,
  - rxq0 is enabled at the virtio device activation to accommodate
    legacy implementations which would not report per queue states
    later,
  - do not mix boolean with integer,
  - do not use bit operand on boolean,

---
 lib/dpif-netdev.c     | 26 +++++++++++++++++++++++
 lib/netdev-dpdk.c     | 58 +++++++++++++++++++++++++++++++++++++++------------
 lib/netdev-provider.h |  7 +++++++
 lib/netdev.c          | 10 +++++++++
 lib/netdev.h          |  1 +
 tests/pmd.at          | 52 ++++++++++++++++++++++-----------------------
 6 files changed, 115 insertions(+), 39 deletions(-)

Comments

Ilya Maximets May 15, 2019, 10:04 a.m. UTC | #1
On 25.04.2019 18:22, David Marchand wrote:
> We currently poll all available queues based on the max queue count
> exchanged with the vhost peer and rely on the vhost library in DPDK to
> check the vring status beneath.
> This can lead to some overhead when we have a lot of unused queues.
> 
> To enhance the situation, we can skip the disabled queues.
> On rxq notifications, we make use of the netdev's change_seq number so
> that the pmd thread main loop can cache the queue state periodically.
> 
> $ ovs-appctl dpif-netdev/pmd-rxq-show
> pmd thread numa_id 0 core_id 1:
>   isolated : true
>   port: dpdk0             queue-id:  0 (enabled)   pmd usage:  0 %
> pmd thread numa_id 0 core_id 2:
>   isolated : true
>   port: vhost1            queue-id:  0 (enabled)   pmd usage:  0 %
>   port: vhost3            queue-id:  0 (enabled)   pmd usage:  0 %
> pmd thread numa_id 0 core_id 15:
>   isolated : true
>   port: dpdk1             queue-id:  0 (enabled)   pmd usage:  0 %
> pmd thread numa_id 0 core_id 16:
>   isolated : true
>   port: vhost0            queue-id:  0 (enabled)   pmd usage:  0 %
>   port: vhost2            queue-id:  0 (enabled)   pmd usage:  0 %
> 
> $ while true; do
>   ovs-appctl dpif-netdev/pmd-rxq-show |awk '
>   /port: / {
>     tot++;
>     if ($5 == "(enabled)") {
>       en++;
>     }
>   }
>   END {
>     print "total: " tot ", enabled: " en
>   }'
>   sleep 1
> done
> 
> total: 6, enabled: 2
> total: 6, enabled: 2
> ...
> 
>  # Started vm, virtio devices are bound to kernel driver which enables
>  # F_MQ + all queue pairs
> total: 6, enabled: 2
> total: 66, enabled: 66
> ...
> 
>  # Unbound vhost0 and vhost1 from the kernel driver
> total: 66, enabled: 66
> total: 66, enabled: 34
> ...
> 
>  # Configured kernel bound devices to use only 1 queue pair
> total: 66, enabled: 34
> total: 66, enabled: 19
> total: 66, enabled: 4
> ...
> 
>  # While rebooting the vm
> total: 66, enabled: 4
> total: 66, enabled: 2
> ...
> total: 66, enabled: 66
> ...
> 
>  # After shutting down the vm
> total: 66, enabled: 66
> total: 66, enabled: 2
> 
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
> 
> Changes since v2:
> - Ilya comments
> - Kevin comments on "dpif-netdev/pmd-rxq-show" output
> - Updated unit tests accordingly
> 
> Changes since v1:
> - only indicate disabled queues in dpif-netdev/pmd-rxq-show output
> - Ilya comments
>   - no need for a struct as we only need a boolean per rxq
>   - "rx_q" is generic, while we only care for this in vhost case,
>     renamed as "vhost_rxq_enabled",
>   - add missing rte_free on allocation error,
>   - vhost_rxq_enabled is freed in vhost destruct only,
>   - rxq0 is enabled at the virtio device activation to accommodate
>     legacy implementations which would not report per queue states
>     later,
>   - do not mix boolean with integer,
>   - do not use bit operand on boolean,
> 
> ---

Hi.

I performed some tests on my usual setup (PVP with bonded phy) without
any disabled queues and saw no noticeable performance difference. So, it's
OK for me.

I have one style comment inline (which probably could be fixed while
applying the patch).

Besides that:

Acked-by: Ilya Maximets <i.maximets@samsung.com>


>  lib/dpif-netdev.c     | 26 +++++++++++++++++++++++
>  lib/netdev-dpdk.c     | 58 +++++++++++++++++++++++++++++++++++++++------------
>  lib/netdev-provider.h |  7 +++++++
>  lib/netdev.c          | 10 +++++++++
>  lib/netdev.h          |  1 +
>  tests/pmd.at          | 52 ++++++++++++++++++++++-----------------------
>  6 files changed, 115 insertions(+), 39 deletions(-)
> 
> diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
> index f1422b2..6149044 100644
> --- a/lib/dpif-netdev.c
> +++ b/lib/dpif-netdev.c
> @@ -591,6 +591,8 @@ struct polled_queue {
>      struct dp_netdev_rxq *rxq;
>      odp_port_t port_no;
>      bool emc_enabled;
> +    bool rxq_enabled;
> +    uint64_t change_seq;
>  };
>  
>  /* Contained by struct dp_netdev_pmd_thread's 'poll_list' member. */
> @@ -1163,6 +1165,8 @@ pmd_info_show_rxq(struct ds *reply, struct dp_netdev_pmd_thread *pmd)
>              }
>              ds_put_format(reply, "  port: %-16s  queue-id: %2d", name,
>                            netdev_rxq_get_queue_id(list[i].rxq->rx));
> +            ds_put_format(reply, " %s", netdev_rxq_enabled(list[i].rxq->rx)
> +                          ? "(enabled) " : "(disabled)");

The second line should be shifted to keep the ternary operator visually consistent:

            ds_put_format(reply, " %s", netdev_rxq_enabled(list[i].rxq->rx)
                                        ? "(enabled) " : "(disabled)");

Best regards, Ilya Maximets.
David Marchand May 17, 2019, 9:30 a.m. UTC | #2
Hello,

On Wed, May 15, 2019 at 12:04 PM Ilya Maximets <i.maximets@samsung.com>
wrote:

> On 25.04.2019 18:22, David Marchand wrote:
> > We currently poll all available queues based on the max queue count
> > exchanged with the vhost peer and rely on the vhost library in DPDK to
> > check the vring status beneath.
> > This can lead to some overhead when we have a lot of unused queues.
> >
> > To enhance the situation, we can skip the disabled queues.
> > On rxq notifications, we make use of the netdev's change_seq number so
> > that the pmd thread main loop can cache the queue state periodically.
> >
> > $ ovs-appctl dpif-netdev/pmd-rxq-show
> > pmd thread numa_id 0 core_id 1:
> >   isolated : true
> >   port: dpdk0             queue-id:  0 (enabled)   pmd usage:  0 %
> > pmd thread numa_id 0 core_id 2:
> >   isolated : true
> >   port: vhost1            queue-id:  0 (enabled)   pmd usage:  0 %
> >   port: vhost3            queue-id:  0 (enabled)   pmd usage:  0 %
> > pmd thread numa_id 0 core_id 15:
> >   isolated : true
> >   port: dpdk1             queue-id:  0 (enabled)   pmd usage:  0 %
> > pmd thread numa_id 0 core_id 16:
> >   isolated : true
> >   port: vhost0            queue-id:  0 (enabled)   pmd usage:  0 %
> >   port: vhost2            queue-id:  0 (enabled)   pmd usage:  0 %
> >
> > $ while true; do
> >   ovs-appctl dpif-netdev/pmd-rxq-show |awk '
> >   /port: / {
> >     tot++;
> >     if ($5 == "(enabled)") {
> >       en++;
> >     }
> >   }
> >   END {
> >     print "total: " tot ", enabled: " en
> >   }'
> >   sleep 1
> > done
> >
> > total: 6, enabled: 2
> > total: 6, enabled: 2
> > ...
> >
> >  # Started vm, virtio devices are bound to kernel driver which enables
> >  # F_MQ + all queue pairs
> > total: 6, enabled: 2
> > total: 66, enabled: 66
> > ...
> >
> >  # Unbound vhost0 and vhost1 from the kernel driver
> > total: 66, enabled: 66
> > total: 66, enabled: 34
> > ...
> >
> >  # Configured kernel bound devices to use only 1 queue pair
> > total: 66, enabled: 34
> > total: 66, enabled: 19
> > total: 66, enabled: 4
> > ...
> >
> >  # While rebooting the vm
> > total: 66, enabled: 4
> > total: 66, enabled: 2
> > ...
> > total: 66, enabled: 66
> > ...
> >
> >  # After shutting down the vm
> > total: 66, enabled: 66
> > total: 66, enabled: 2
> >
> > Signed-off-by: David Marchand <david.marchand@redhat.com>
> > ---
> >
> > Changes since v2:
> > - Ilya comments
> > - Kevin comments on "dpif-netdev/pmd-rxq-show" output
> > - Updated unit tests accordingly
> >
> > Changes since v1:
> > - only indicate disabled queues in dpif-netdev/pmd-rxq-show output
> > - Ilya comments
> >   - no need for a struct as we only need a boolean per rxq
> >   - "rx_q" is generic, while we only care for this in vhost case,
> >     renamed as "vhost_rxq_enabled",
> >   - add missing rte_free on allocation error,
> >   - vhost_rxq_enabled is freed in vhost destruct only,
> >   - rxq0 is enabled at the virtio device activation to accommodate
> >     legacy implementations which would not report per queue states
> >     later,
> >   - do not mix boolean with integer,
> >   - do not use bit operand on boolean,
> >
> > ---
>
> Hi.
>
> I performed some tests on my usual setup (PVP with bonded phy) without
> any disabled queues and saw no noticeable performance difference. So, it's
> OK for me.
>
> I have one style comment inline (which probably could be fixed while
> applying the patch).
>
> Besides that:
>
> Acked-by: Ilya Maximets <i.maximets@samsung.com>
>

Thanks Ilya.

What is the next step?
Are we taking the 3 patches once 18.11.2 is out?

I can send an n+1 patchset with the style fix.
Stokes, Ian June 26, 2019, 6:15 p.m. UTC | #3
On 4/25/2019 4:22 PM, David Marchand wrote:
> We currently poll all available queues based on the max queue count
> exchanged with the vhost peer and rely on the vhost library in DPDK to
> check the vring status beneath.
> This can lead to some overhead when we have a lot of unused queues.
> 
> To enhance the situation, we can skip the disabled queues.
> On rxq notifications, we make use of the netdev's change_seq number so
> that the pmd thread main loop can cache the queue state periodically.
> 
> $ ovs-appctl dpif-netdev/pmd-rxq-show
> pmd thread numa_id 0 core_id 1:
>    isolated : true
>    port: dpdk0             queue-id:  0 (enabled)   pmd usage:  0 %
> pmd thread numa_id 0 core_id 2:
>    isolated : true
>    port: vhost1            queue-id:  0 (enabled)   pmd usage:  0 %
>    port: vhost3            queue-id:  0 (enabled)   pmd usage:  0 %
> pmd thread numa_id 0 core_id 15:
>    isolated : true
>    port: dpdk1             queue-id:  0 (enabled)   pmd usage:  0 %
> pmd thread numa_id 0 core_id 16:
>    isolated : true
>    port: vhost0            queue-id:  0 (enabled)   pmd usage:  0 %
>    port: vhost2            queue-id:  0 (enabled)   pmd usage:  0 %
> 
> $ while true; do
>    ovs-appctl dpif-netdev/pmd-rxq-show |awk '
>    /port: / {
>      tot++;
>      if ($5 == "(enabled)") {
>        en++;
>      }
>    }
>    END {
>      print "total: " tot ", enabled: " en
>    }'
>    sleep 1
> done
> 
> total: 6, enabled: 2
> total: 6, enabled: 2
> ...
> 
>   # Started vm, virtio devices are bound to kernel driver which enables
>   # F_MQ + all queue pairs
> total: 6, enabled: 2
> total: 66, enabled: 66
> ...
> 
>   # Unbound vhost0 and vhost1 from the kernel driver
> total: 66, enabled: 66
> total: 66, enabled: 34
> ...
> 
>   # Configured kernel bound devices to use only 1 queue pair
> total: 66, enabled: 34
> total: 66, enabled: 19
> total: 66, enabled: 4
> ...
> 
>   # While rebooting the vm
> total: 66, enabled: 4
> total: 66, enabled: 2
> ...
> total: 66, enabled: 66
> ...
> 
>   # After shutting down the vm
> total: 66, enabled: 66
> total: 66, enabled: 2
> 
> Signed-off-by: David Marchand <david.marchand@redhat.com>

Thanks to all for working on this. I didn't come across any issues during
testing over the last 2 days and the code LGTM.

I've fixed the alignment issue spotted by Ilya below and pushed to master.

 > +     ds_put_format(reply, " %s", netdev_rxq_enabled(list[i].rxq->rx)
 > +                   ? "(enabled) " : "(disabled)");

The second line should be shifted to keep the ternary operator visually 
consistent:

ds_put_format(reply, " %s", netdev_rxq_enabled(list[i].rxq->rx)
                             ? "(enabled) " : "(disabled)");


Regards
Ian

Patch

diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index f1422b2..6149044 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -591,6 +591,8 @@  struct polled_queue {
     struct dp_netdev_rxq *rxq;
     odp_port_t port_no;
     bool emc_enabled;
+    bool rxq_enabled;
+    uint64_t change_seq;
 };
 
 /* Contained by struct dp_netdev_pmd_thread's 'poll_list' member. */
@@ -1163,6 +1165,8 @@  pmd_info_show_rxq(struct ds *reply, struct dp_netdev_pmd_thread *pmd)
             }
             ds_put_format(reply, "  port: %-16s  queue-id: %2d", name,
                           netdev_rxq_get_queue_id(list[i].rxq->rx));
+            ds_put_format(reply, " %s", netdev_rxq_enabled(list[i].rxq->rx)
+                          ? "(enabled) " : "(disabled)");
             ds_put_format(reply, "  pmd usage: ");
             if (total_cycles) {
                 ds_put_format(reply, "%2"PRIu64"",
@@ -5198,6 +5202,11 @@  dpif_netdev_run(struct dpif *dpif)
                 }
 
                 for (i = 0; i < port->n_rxq; i++) {
+
+                    if (!netdev_rxq_enabled(port->rxqs[i].rx)) {
+                        continue;
+                    }
+
                     if (dp_netdev_process_rxq_port(non_pmd,
                                                    &port->rxqs[i],
                                                    port->port_no)) {
@@ -5371,6 +5380,9 @@  pmd_load_queues_and_ports(struct dp_netdev_pmd_thread *pmd,
         poll_list[i].rxq = poll->rxq;
         poll_list[i].port_no = poll->rxq->port->port_no;
         poll_list[i].emc_enabled = poll->rxq->port->emc_enabled;
+        poll_list[i].rxq_enabled = netdev_rxq_enabled(poll->rxq->rx);
+        poll_list[i].change_seq =
+                     netdev_get_change_seq(poll->rxq->port->netdev);
         i++;
     }
 
@@ -5436,6 +5448,10 @@  reload:
 
         for (i = 0; i < poll_cnt; i++) {
 
+            if (!poll_list[i].rxq_enabled) {
+                continue;
+            }
+
             if (poll_list[i].emc_enabled) {
                 atomic_read_relaxed(&pmd->dp->emc_insert_min,
                                     &pmd->ctx.emc_insert_min);
@@ -5472,6 +5488,16 @@  reload:
             if (reload) {
                 break;
             }
+
+            for (i = 0; i < poll_cnt; i++) {
+                uint64_t current_seq =
+                         netdev_get_change_seq(poll_list[i].rxq->port->netdev);
+                if (poll_list[i].change_seq != current_seq) {
+                    poll_list[i].change_seq = current_seq;
+                    poll_list[i].rxq_enabled =
+                                 netdev_rxq_enabled(poll_list[i].rxq->rx);
+                }
+            }
         }
         pmd_perf_end_iteration(s, rx_packets, tx_packets,
                                pmd_perf_metrics_enabled(pmd));
diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index 47153dc..ec2251e 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -424,6 +424,9 @@  struct netdev_dpdk {
         OVSRCU_TYPE(struct ingress_policer *) ingress_policer;
         uint32_t policer_rate;
         uint32_t policer_burst;
+
+        /* Array of vhost rxq states, see vring_state_changed. */
+        bool *vhost_rxq_enabled;
     );
 
     PADDED_MEMBERS(CACHE_LINE_SIZE,
@@ -1235,8 +1238,14 @@  vhost_common_construct(struct netdev *netdev)
     int socket_id = rte_lcore_to_socket_id(rte_get_master_lcore());
     struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
 
+    dev->vhost_rxq_enabled = dpdk_rte_mzalloc(OVS_VHOST_MAX_QUEUE_NUM *
+                                              sizeof *dev->vhost_rxq_enabled);
+    if (!dev->vhost_rxq_enabled) {
+        return ENOMEM;
+    }
     dev->tx_q = netdev_dpdk_alloc_txq(OVS_VHOST_MAX_QUEUE_NUM);
     if (!dev->tx_q) {
+        rte_free(dev->vhost_rxq_enabled);
         return ENOMEM;
     }
 
@@ -1446,6 +1455,7 @@  netdev_dpdk_vhost_destruct(struct netdev *netdev)
 
     vhost_id = dev->vhost_id;
     dev->vhost_id = NULL;
+    rte_free(dev->vhost_rxq_enabled);
 
     common_destruct(dev);
 
@@ -2200,6 +2210,14 @@  netdev_dpdk_vhost_rxq_recv(struct netdev_rxq *rxq,
     return 0;
 }
 
+static bool
+netdev_dpdk_vhost_rxq_enabled(struct netdev_rxq *rxq)
+{
+    struct netdev_dpdk *dev = netdev_dpdk_cast(rxq->netdev);
+
+    return dev->vhost_rxq_enabled[rxq->queue_id];
+}
+
 static int
 netdev_dpdk_rxq_recv(struct netdev_rxq *rxq, struct dp_packet_batch *batch,
                      int *qfill)
@@ -3563,6 +3581,8 @@  destroy_device(int vid)
             ovs_mutex_lock(&dev->mutex);
             dev->vhost_reconfigured = false;
             ovsrcu_index_set(&dev->vid, -1);
+            memset(dev->vhost_rxq_enabled, 0,
+                   dev->up.n_rxq * sizeof *dev->vhost_rxq_enabled);
             netdev_dpdk_txq_map_clear(dev);
 
             netdev_change_seq_changed(&dev->up);
@@ -3597,24 +3617,30 @@  vring_state_changed(int vid, uint16_t queue_id, int enable)
     struct netdev_dpdk *dev;
     bool exists = false;
     int qid = queue_id / VIRTIO_QNUM;
+    bool is_rx = (queue_id % VIRTIO_QNUM) == VIRTIO_TXQ;
     char ifname[IF_NAME_SZ];
 
     rte_vhost_get_ifname(vid, ifname, sizeof ifname);
 
-    if (queue_id % VIRTIO_QNUM == VIRTIO_TXQ) {
-        return 0;
-    }
-
     ovs_mutex_lock(&dpdk_mutex);
     LIST_FOR_EACH (dev, list_node, &dpdk_list) {
         ovs_mutex_lock(&dev->mutex);
         if (nullable_string_is_equal(ifname, dev->vhost_id)) {
-            if (enable) {
-                dev->tx_q[qid].map = qid;
+            if (is_rx) {
+                bool old_state = dev->vhost_rxq_enabled[qid];
+
+                dev->vhost_rxq_enabled[qid] = enable != 0;
+                if (old_state != dev->vhost_rxq_enabled[qid]) {
+                    netdev_change_seq_changed(&dev->up);
+                }
             } else {
-                dev->tx_q[qid].map = OVS_VHOST_QUEUE_DISABLED;
+                if (enable) {
+                    dev->tx_q[qid].map = qid;
+                } else {
+                    dev->tx_q[qid].map = OVS_VHOST_QUEUE_DISABLED;
+                }
+                netdev_dpdk_remap_txqs(dev);
             }
-            netdev_dpdk_remap_txqs(dev);
             exists = true;
             ovs_mutex_unlock(&dev->mutex);
             break;
@@ -3624,9 +3650,9 @@  vring_state_changed(int vid, uint16_t queue_id, int enable)
     ovs_mutex_unlock(&dpdk_mutex);
 
     if (exists) {
-        VLOG_INFO("State of queue %d ( tx_qid %d ) of vhost device '%s' "
-                  "changed to \'%s\'", queue_id, qid, ifname,
-                  (enable == 1) ? "enabled" : "disabled");
+        VLOG_INFO("State of queue %d ( %s_qid %d ) of vhost device '%s' "
+                  "changed to \'%s\'", queue_id, is_rx == true ? "rx" : "tx",
+                  qid, ifname, (enable == 1) ? "enabled" : "disabled");
     } else {
         VLOG_INFO("vHost Device '%s' not found", ifname);
         return -1;
@@ -4085,6 +4111,10 @@  dpdk_vhost_reconfigure_helper(struct netdev_dpdk *dev)
     dev->up.n_rxq = dev->requested_n_rxq;
     int err;
 
+    /* Always keep RX queue 0 enabled for implementations that won't
+     * report vring states. */
+    dev->vhost_rxq_enabled[0] = true;
+
     /* Enable TX queue 0 by default if it wasn't disabled. */
     if (dev->tx_q[0].map == OVS_VHOST_QUEUE_MAP_UNKNOWN) {
         dev->tx_q[0].map = 0;
@@ -4297,7 +4327,8 @@  static const struct netdev_class dpdk_vhost_class = {
     .get_stats = netdev_dpdk_vhost_get_stats,
     .get_status = netdev_dpdk_vhost_user_get_status,
     .reconfigure = netdev_dpdk_vhost_reconfigure,
-    .rxq_recv = netdev_dpdk_vhost_rxq_recv
+    .rxq_recv = netdev_dpdk_vhost_rxq_recv,
+    .rxq_enabled = netdev_dpdk_vhost_rxq_enabled,
 };
 
 static const struct netdev_class dpdk_vhost_client_class = {
@@ -4311,7 +4342,8 @@  static const struct netdev_class dpdk_vhost_client_class = {
     .get_stats = netdev_dpdk_vhost_get_stats,
     .get_status = netdev_dpdk_vhost_user_get_status,
     .reconfigure = netdev_dpdk_vhost_client_reconfigure,
-    .rxq_recv = netdev_dpdk_vhost_rxq_recv
+    .rxq_recv = netdev_dpdk_vhost_rxq_recv,
+    .rxq_enabled = netdev_dpdk_vhost_rxq_enabled,
 };
 
 void
diff --git a/lib/netdev-provider.h b/lib/netdev-provider.h
index fb0c27e..7993261 100644
--- a/lib/netdev-provider.h
+++ b/lib/netdev-provider.h
@@ -789,6 +789,13 @@  struct netdev_class {
     void (*rxq_destruct)(struct netdev_rxq *);
     void (*rxq_dealloc)(struct netdev_rxq *);
 
+    /* Retrieves the current state of rx queue.  'false' means that queue won't
+     * get traffic in a short term and could be not polled.
+     *
+     * This function may be set to null if it would always return 'true'
+     * anyhow. */
+    bool (*rxq_enabled)(struct netdev_rxq *);
+
     /* Attempts to receive a batch of packets from 'rx'.  In 'batch', the
      * caller supplies 'packets' as the pointer to the beginning of an array
      * of NETDEV_MAX_BURST pointers to dp_packet.  If successful, the
diff --git a/lib/netdev.c b/lib/netdev.c
index 7d7ecf6..03a1b24 100644
--- a/lib/netdev.c
+++ b/lib/netdev.c
@@ -682,6 +682,16 @@  netdev_rxq_close(struct netdev_rxq *rx)
     }
 }
 
+bool netdev_rxq_enabled(struct netdev_rxq *rx)
+{
+    bool enabled = true;
+
+    if (rx->netdev->netdev_class->rxq_enabled) {
+        enabled = rx->netdev->netdev_class->rxq_enabled(rx);
+    }
+    return enabled;
+}
+
 /* Attempts to receive a batch of packets from 'rx'.  'batch' should point to
  * the beginning of an array of NETDEV_MAX_BURST pointers to dp_packet.  If
  * successful, this function stores pointers to up to NETDEV_MAX_BURST
diff --git a/lib/netdev.h b/lib/netdev.h
index d94817f..bfcdf39 100644
--- a/lib/netdev.h
+++ b/lib/netdev.h
@@ -183,6 +183,7 @@  enum netdev_pt_mode netdev_get_pt_mode(const struct netdev *);
 /* Packet reception. */
 int netdev_rxq_open(struct netdev *, struct netdev_rxq **, int id);
 void netdev_rxq_close(struct netdev_rxq *);
+bool netdev_rxq_enabled(struct netdev_rxq *);
 
 const char *netdev_rxq_get_name(const struct netdev_rxq *);
 int netdev_rxq_get_queue_id(const struct netdev_rxq *);
diff --git a/tests/pmd.at b/tests/pmd.at
index aac91a8..96ae959 100644
--- a/tests/pmd.at
+++ b/tests/pmd.at
@@ -14,7 +14,7 @@  parse_pmd_rxq_show () {
 # of the core on one line
 # 'port:' port_name 'queue_id:' rxq_id rxq_id rxq_id rxq_id
 parse_pmd_rxq_show_group () {
-   awk '/port:/ {print  $1, $2, $3, $4, $12, $20, $28}'
+   awk '/port:/ {print  $1, $2, $3, $4, $13, $22, $31}'
 }
 
 # Given the output of `ovs-appctl dpctl/dump-flows`, prints a list of flows
@@ -72,7 +72,7 @@  CHECK_PMD_THREADS_CREATED()
 AT_CHECK([ovs-appctl dpif-netdev/pmd-rxq-show | sed SED_NUMA_CORE_PATTERN], [0], [dnl
 pmd thread numa_id <cleared> core_id <cleared>:
   isolated : false
-  port: p0                queue-id:  0  pmd usage: NOT AVAIL
+  port: p0                queue-id:  0 (enabled)   pmd usage: NOT AVAIL
 ])
 
 AT_CHECK([ovs-appctl dpif/show | sed 's/\(tx_queues=\)[[0-9]]*/\1<cleared>/g'], [0], [dnl
@@ -103,14 +103,14 @@  dummy@ovs-dummy: hit:0 missed:0
 AT_CHECK([ovs-appctl dpif-netdev/pmd-rxq-show | sed SED_NUMA_CORE_PATTERN], [0], [dnl
 pmd thread numa_id <cleared> core_id <cleared>:
   isolated : false
-  port: p0                queue-id:  0  pmd usage: NOT AVAIL
-  port: p0                queue-id:  1  pmd usage: NOT AVAIL
-  port: p0                queue-id:  2  pmd usage: NOT AVAIL
-  port: p0                queue-id:  3  pmd usage: NOT AVAIL
-  port: p0                queue-id:  4  pmd usage: NOT AVAIL
-  port: p0                queue-id:  5  pmd usage: NOT AVAIL
-  port: p0                queue-id:  6  pmd usage: NOT AVAIL
-  port: p0                queue-id:  7  pmd usage: NOT AVAIL
+  port: p0                queue-id:  0 (enabled)   pmd usage: NOT AVAIL
+  port: p0                queue-id:  1 (enabled)   pmd usage: NOT AVAIL
+  port: p0                queue-id:  2 (enabled)   pmd usage: NOT AVAIL
+  port: p0                queue-id:  3 (enabled)   pmd usage: NOT AVAIL
+  port: p0                queue-id:  4 (enabled)   pmd usage: NOT AVAIL
+  port: p0                queue-id:  5 (enabled)   pmd usage: NOT AVAIL
+  port: p0                queue-id:  6 (enabled)   pmd usage: NOT AVAIL
+  port: p0                queue-id:  7 (enabled)   pmd usage: NOT AVAIL
 ])
 
 OVS_VSWITCHD_STOP
@@ -134,14 +134,14 @@  dummy@ovs-dummy: hit:0 missed:0
 AT_CHECK([ovs-appctl dpif-netdev/pmd-rxq-show | sed SED_NUMA_CORE_PATTERN], [0], [dnl
 pmd thread numa_id <cleared> core_id <cleared>:
   isolated : false
-  port: p0                queue-id:  0  pmd usage: NOT AVAIL
-  port: p0                queue-id:  1  pmd usage: NOT AVAIL
-  port: p0                queue-id:  2  pmd usage: NOT AVAIL
-  port: p0                queue-id:  3  pmd usage: NOT AVAIL
-  port: p0                queue-id:  4  pmd usage: NOT AVAIL
-  port: p0                queue-id:  5  pmd usage: NOT AVAIL
-  port: p0                queue-id:  6  pmd usage: NOT AVAIL
-  port: p0                queue-id:  7  pmd usage: NOT AVAIL
+  port: p0                queue-id:  0 (enabled)   pmd usage: NOT AVAIL
+  port: p0                queue-id:  1 (enabled)   pmd usage: NOT AVAIL
+  port: p0                queue-id:  2 (enabled)   pmd usage: NOT AVAIL
+  port: p0                queue-id:  3 (enabled)   pmd usage: NOT AVAIL
+  port: p0                queue-id:  4 (enabled)   pmd usage: NOT AVAIL
+  port: p0                queue-id:  5 (enabled)   pmd usage: NOT AVAIL
+  port: p0                queue-id:  6 (enabled)   pmd usage: NOT AVAIL
+  port: p0                queue-id:  7 (enabled)   pmd usage: NOT AVAIL
 ])
 
 AT_CHECK([ovs-vsctl set Open_vSwitch . other_config:pmd-rxq-assign=cycles])
@@ -167,14 +167,14 @@  CHECK_PMD_THREADS_CREATED([1], [], [+$TMP])
 AT_CHECK([ovs-appctl dpif-netdev/pmd-rxq-show | sed SED_NUMA_CORE_PATTERN], [0], [dnl
 pmd thread numa_id <cleared> core_id <cleared>:
   isolated : false
-  port: p0                queue-id:  0  pmd usage: NOT AVAIL
-  port: p0                queue-id:  1  pmd usage: NOT AVAIL
-  port: p0                queue-id:  2  pmd usage: NOT AVAIL
-  port: p0                queue-id:  3  pmd usage: NOT AVAIL
-  port: p0                queue-id:  4  pmd usage: NOT AVAIL
-  port: p0                queue-id:  5  pmd usage: NOT AVAIL
-  port: p0                queue-id:  6  pmd usage: NOT AVAIL
-  port: p0                queue-id:  7  pmd usage: NOT AVAIL
+  port: p0                queue-id:  0 (enabled)   pmd usage: NOT AVAIL
+  port: p0                queue-id:  1 (enabled)   pmd usage: NOT AVAIL
+  port: p0                queue-id:  2 (enabled)   pmd usage: NOT AVAIL
+  port: p0                queue-id:  3 (enabled)   pmd usage: NOT AVAIL
+  port: p0                queue-id:  4 (enabled)   pmd usage: NOT AVAIL
+  port: p0                queue-id:  5 (enabled)   pmd usage: NOT AVAIL
+  port: p0                queue-id:  6 (enabled)   pmd usage: NOT AVAIL
+  port: p0                queue-id:  7 (enabled)   pmd usage: NOT AVAIL
 ])
 
 OVS_VSWITCHD_STOP