
[ovs-dev] vhost: Expose virtio interrupt requirement on rte_vhost API

Message ID CFF8EF42F1132E4CBE2BF0AB6C21C58D78817630@ESESSMB107.ericsson.se
State Not Applicable
Delegated to: Ian Stokes
Series [ovs-dev] vhost: Expose virtio interrupt requirement on rte_vhost API

Commit Message

Jan Scheurich Sept. 23, 2017, 7:16 p.m. UTC
Performance tests with the OVS DPDK datapath have shown that the tx throughput over a vhostuser port into a VM with an interrupt-based virtio driver is limited by the overhead incurred by virtio interrupts. The OVS PMD spends up to 30% of its cycles in system calls kicking the eventfd. Also the core running the vCPU is heavily loaded with generating the virtio interrupts in KVM on the host and handling these interrupts in the virtio-net driver in the guest. This limits the throughput to about 500-700 Kpps with a single vCPU.

OVS is trying to address this issue by batching packets to a vhostuser port for some time to limit the virtio interrupt frequency. With a 50 us batching period we have measured an iperf3 throughput increase of 15% and a PMD utilization decrease from 45% to 30%.

On the other hand, guests using virtio PMDs do not profit from time-based tx batching. Instead they experience a 2-3% performance penalty and an average latency increase of 30-40 us. OVS therefore intends to apply time-based tx batching only for vhostuser tx queues that need to trigger virtio interrupts.

Today this information is hidden inside the rte_vhost library and not accessible to users of the API. This patch adds a function to the API to query it.

Signed-off-by: Jan Scheurich <jan.scheurich@ericsson.com>

---

 lib/librte_vhost/rte_vhost.h | 12 ++++++++++++
 lib/librte_vhost/vhost.c     | 19 +++++++++++++++++++
 2 files changed, 31 insertions(+)
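
For context, a minimal sketch (not part of the patch; the helper name and the
surrounding logic are illustrative assumptions) of how a datapath such as OVS
could consume the proposed call to decide, per vhost tx queue, whether
time-based batching is worth applying:

#include <stdbool.h>
#include <stdint.h>
#include <rte_vhost.h>

/*
 * Illustrative helper: apply time-based tx batching only when the guest
 * driver has requested interrupts for this queue, i.e. when every flush
 * would otherwise cost an eventfd kick and a virtio interrupt in KVM.
 */
static bool
vhost_txq_wants_batching(int vid, uint16_t virtio_qid)
{
    /* virtio_qid is the same queue index that is passed to
     * rte_vhost_enqueue_burst() for this tx queue. */
    return rte_vhost_tx_interrupt_requested(vid, virtio_qid) == 1;
}

A guest running a virtio PMD keeps interrupts suppressed, so its queues would
keep being flushed immediately and avoid the added latency.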

Comments

Stephen Hemminger Sept. 24, 2017, 2:02 p.m. UTC | #1
I heard Fd.io has a faster vhost driver. Has anyone investigated?

Yang, Zhiyong Sept. 25, 2017, 6:49 a.m. UTC | #2
> I heard Fd.io has a faster vhost driver. Has anyone investigated?
> 

A performance comparison between DPDK vhost and VPP vhost needs a fair
apples-to-apples test setup. If the DPDK vhost driver runs as a plugin under
the VPP framework, converting the packet header or adding a VPP header seems
to add overhead. I have not seen apples-to-apples test numbers so far.

Thanks
Zhiyong
Chandran, Sugesh Oct. 13, 2017, 3:34 p.m. UTC | #3
Hi Jan,
The DPDK changes look OK to me and will be useful. I am interested in testing this patch to see the impact on performance. Are you planning to share the corresponding OVS changes that use these APIs?




Jan Scheurich Oct. 13, 2017, 10:57 p.m. UTC | #4
Hi Sugesh,

Actually the new API call in DPDK is not needed. A reply by Zhiyong Yang (http://dpdk.org/ml/archives/dev/2017-September/076504.html) pointed out that an existing API call provides access to the vring data structure that contains the interrupt flag. So I will abandon the DPDK patch.
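
For illustration, a sketch of that approach using the existing API (the
wrapper name is an assumption, not the actual OVS change):
rte_vhost_get_vhost_vring() fills a struct rte_vhost_vring whose avail ring
carries the VRING_AVAIL_F_NO_INTERRUPT flag.

#include <stdbool.h>
#include <stdint.h>
#include <linux/virtio_ring.h>   /* VRING_AVAIL_F_NO_INTERRUPT */
#include <rte_vhost.h>           /* rte_vhost_get_vhost_vring() */

static bool
vhost_txq_interrupts_requested(int vid, uint16_t vring_idx)
{
    struct rte_vhost_vring vring;

    /* A non-zero return means the device or vring is not (yet) available. */
    if (rte_vhost_get_vhost_vring(vid, vring_idx, &vring) != 0) {
        return false;
    }
    /* The guest sets VRING_AVAIL_F_NO_INTERRUPT to suppress interrupts;
     * when the flag is clear it expects to be kicked. */
    return vring.avail != NULL
           && !(vring.avail->flags & VRING_AVAIL_F_NO_INTERRUPT);
}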

Using the existing API I have created a patch on top of Ilya's output batching v4 series that automatically enables time-based batching on the ports that should benefit from it most: vhostuser(client) ports using virtio interrupts as well as internal ports on the Linux (or BSD) host.

I still need to do careful testing to confirm that the interrupt detection works reliably. The performance should match the baseline performance of Ilya's patch.

BR, Jan

Chandran, Sugesh Oct. 15, 2017, 2:58 p.m. UTC | #5
Regards
_Sugesh

> Hi Sugesh,
> 
> Actually the new API call in DPDK is not needed. A reply by Zhiyong Yang
> (http://dpdk.org/ml/archives/dev/2017-September/076504.html) pointed out
> that an existing API call provides access to the vring data structure that contains
> the interrupt flag. So I will abandon the DPDK patch.
[Sugesh] Sure. That makes sense.
> 
> Using the existing API I have created a patch on top of Ilya's output batching v4
> series that automatically enables time-based batching on ports that should
> benefit from it most: vhostuser(client) using virtio interrupts as well as internal
> ports on the Linux (or BSD) host.
> 
> I still need to do careful testing that the interrupt detection works reliable. The
> performance should be the baseline performance of Ilya's patch.
[Sugesh] Makes sense. I would like to see the performance improvement this offers.
I will surely have a look at the patch when you post it to the ML.

> 
> BR, Jan
> 
Ilya Maximets Oct. 17, 2017, 1:08 p.m. UTC | #6
Sorry, CC: list.

On 14.10.2017 01:57, Jan Scheurich wrote:
>> Hi Sugesh,
>>
>> Actually the new API call in DPDK is not needed. A reply by Zhiyong Yang (http://dpdk.org/ml/archives/dev/2017-September/076504.html) pointed out that an existing API call provides access to the vring data structure that contains the interrupt flag. So I will abandon the DPDK patch.
>>
>> Using the existing API I have created a patch on top of Ilya's output batching v4 series that automatically enables time-based batching on ports that should benefit from it most: vhostuser(client) using virtio interrupts as well as internal ports on the Linux (or BSD) host.
> 


I'm not sure about enabling time-based batching by default for guests with
interrupts enabled. I see a performance drop for iperf in a VM-VM scenario
on my ARMv8 system:
1.33 Gbps with 50 ms vs. 1.42 Gbps with 0 ms and 1.53 Gbps with 500 ms.

I'll share more detailed test results, but it's clear that the best time
interval is highly system dependent. This means that we should not make
assumptions about it.


Best regards, Ilya Maximets.


Patch

diff --git a/lib/librte_vhost/rte_vhost.h b/lib/librte_vhost/rte_vhost.h
index 8c974eb..d62338b 100644
--- a/lib/librte_vhost/rte_vhost.h
+++ b/lib/librte_vhost/rte_vhost.h
@@ -444,6 +444,18 @@  int rte_vhost_get_vhost_vring(int vid, uint16_t vring_idx,
  */
 uint32_t rte_vhost_rx_queue_count(int vid, uint16_t qid);

+/**
+ * Does the virtio driver request interrupts for a vhost tx queue?
+ *
+ * @param vid
+ *  vhost device ID
+ * @param qid
+ *  virtio queue index in mq case
+ * @return
+ *  1 if true, 0 if false
+ */
+int rte_vhost_tx_interrupt_requested(int vid, uint16_t qid);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
index 0b6aa1c..bd1ebf9 100644
--- a/lib/librte_vhost/vhost.c
+++ b/lib/librte_vhost/vhost.c
@@ -503,3 +503,22 @@  struct virtio_net *

        return *((volatile uint16_t *)&vq->avail->idx) - vq->last_avail_idx;
 }
+
+int rte_vhost_tx_interrupt_requested(int vid, uint16_t qid)
+{
+    struct virtio_net *dev;
+    struct vhost_virtqueue *vq;
+
+    dev = get_device(vid);
+    if (dev == NULL)
+        return 0;
+
+    vq = dev->virtqueue[qid];
+    if (vq == NULL)
+        return 0;
+
+    if (unlikely(vq->enabled == 0 || vq->avail == NULL))
+        return 0;
+
+    return !(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT);
+}