[ovs-dev,RFC,dpdk-latest,0/1] netdev-dpdk: Enable DPDK vHost async API's

Message ID 20201023094845.35652-1-sunil.pai.g@intel.com

Message

Pai G, Sunil Oct. 23, 2020, 9:48 a.m. UTC
This series brings DPDK's new asynchronous vHost APIs to OVS.
With the asynchronous framework, vHost-user can offload memory copy operations
to hardware such as Intel® QuickData Technology without blocking the CPU.

This series also attempts to highlight notable issues
associated with enabling an asynchronous data path in OVS.

The OVS packet data path is currently quite synchronous in nature.
This poses a problem for implementing an async data path, as there doesn't seem to be a clean way to
free the packets at a later point in time without breaking abstractions.
This is why freeing of asynchronously sent packets is currently done at the dpif level, per PMD.
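
For illustration, a minimal sketch of this deferred-free pattern against the
experimental DPDK 20.11 async vHost API (the submit/poll calls are the 20.11
function names; vid, queue_id, pkts and count are assumed to be in scope, and
NETDEV_MAX_BURST is the usual OVS burst-size constant):

    /* Submit the burst for asynchronous enqueue; the mbufs must stay
     * alive until their copies are reported as completed.  'sent' is
     * the number of packets accepted for async enqueue. */
    uint16_t sent = rte_vhost_submit_enqueue_burst(vid, queue_id,
                                                   pkts, count);

    /* Later, e.g. once per PMD iteration at the dpif level: reap the
     * completed copies and only then free the underlying buffers. */
    struct rte_mbuf *done[NETDEV_MAX_BURST];
    uint16_t n = rte_vhost_poll_enqueue_completed(vid, queue_id, done,
                                                  NETDEV_MAX_BURST);
    for (uint16_t i = 0; i < n; i++) {
        rte_pktmbuf_free(done[i]);
    }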


It should also be noted that the DPDK libraries used in this series (raw/ioat and vHost async)
are currently experimental.


Usage:
One can enable OVS to use the vHost async data path via:
ovs-vsctl --no-wait set Open_vSwitch . other_config:vhost-async-copy-support=true
Note: This attribute must be set before adding the vhost ports.

Then set the vhost async attributes:
ovs-vsctl set Interface <vhost_interface_name> \
options:vhost-async-attr="(txq#,hardware channel DBDF,async threshold in bytes),..."
e.g.: ovs-vsctl set Interface vhostuserclient0 options:vhost-async-attr="(txq0,00:04.0,256)"
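
The hardware channel given in vhost-async-attr must already be bound to a
userspace driver before the port is configured (see note 1 below). As an
illustration, binding with DPDK's dpdk-devbind.py tool (the device address is
just an example matching the channel above):
dpdk-devbind.py --bind=vfio-pci 0000:00:04.0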

Note:
1. The hardware channel must be bound to a userspace driver like VFIO/IGB UIO,
   and will be statically mapped to the provided TXQ.
2. The async threshold provided in vhost-async-attr is a hint to the DPDK vHost library
   for switching between the CPU and hardware copy pipelines.
   For example:
   Assume the async threshold is set to 256 bytes for txq0.
   The vHost library will then pick the CPU pipeline for packets smaller than 256 bytes
   and the hardware pipeline otherwise.
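
   To make the threshold semantics concrete, here is an illustrative sketch in C
   of the per-packet selection; this is not the actual DPDK vHost code, and
   copy_with_cpu()/offload_to_dma() are hypothetical helper names:

       /* async_threshold is the per-TXQ value configured in
        * vhost-async-attr, e.g. 256 bytes for txq0 above. */
       if (rte_pktmbuf_pkt_len(pkt) < async_threshold) {
           /* Small packet: a CPU copy is cheaper than the DMA setup cost. */
           copy_with_cpu(pkt);
       } else {
           /* Large packet: offload the copy to the hardware channel. */
           offload_to_dma(pkt);
       }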

This patch was tested on:
OVS branch  : dpdk-latest
DPDK version: 20.11-rc1

TODO:
1. Update documentation.
2. Introduce debuggability.
3. Further investigation into where and when to free packets sent via the async data path.


Sunil Pai G (1):
  netdev-dpdk: Enable DPDK vHost async API's.

 lib/dpdk-stub.c   |   6 +
 lib/dpdk.c        |  13 ++
 lib/dpdk.h        |   1 +
 lib/dpif-netdev.c |  19 +-
 lib/netdev-dpdk.c | 548 +++++++++++++++++++++++++++++++++++++++++++++-
 lib/netdev-dpdk.h |   3 +
 6 files changed, 579 insertions(+), 11 deletions(-)

Comments

Ilya Maximets Sept. 15, 2021, 3:21 p.m. UTC | #1
On 10/23/20 11:48, Sunil Pai G wrote:
> This series brings DPDK's new asynchronous vHost APIs to OVS.
> With the asynchronous framework, vHost-user can offload memory copy operations
> to hardware such as Intel® QuickData Technology without blocking the CPU.
> 
> This series also attempts to highlight notable issues
> associated with enabling an asynchronous data path in OVS.
> 
> The OVS packet data path is currently quite synchronous in nature.
> This poses a problem for implementing an async data path, as there doesn't seem to be a clean way to
> free the packets at a later point in time without breaking abstractions.
> This is why freeing of asynchronously sent packets is currently done at the dpif level, per PMD.

As said in reply to the 'deferral of work' patch-set, OVS is synchronous
and that is fine, because network devices are asynchronous by their nature.
OVS is not blocked by memory copies, because these are handled by DMA
configured and managed by device drivers.  This patch adds DMA handling
to vhost, making it essentially a physical device to some extent, but
for some reason the driver for that is implemented inside OVS.  A high-level
application should not care about memory copies inside the physical device
or DMA configuration, but the code in this patch looks very much like parts
of a specific device driver.

Implementation of this feature belongs in the vhost library, which is the
driver for this (now) physical device.  This way it can be consumed by OVS
or any other DPDK application without major code changes.

Best regards, Ilya Maximets.
Pai G, Sunil Sept. 15, 2021, 4:48 p.m. UTC | #2
Hi Ilya, 

> -----Original Message-----
> From: Ilya Maximets <i.maximets@ovn.org>
> Sent: Wednesday, September 15, 2021 8:52 PM
> To: Pai G, Sunil <sunil.pai.g@intel.com>; dev@openvswitch.org
> Cc: i.maximets@ovn.org; Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: Re: [ovs-dev] [PATCH RFC dpdk-latest 0/1] netdev-dpdk: Enable
> DPDK vHost async API's
> 
> On 10/23/20 11:48, Sunil Pai G wrote:
> > This series brings DPDK's new asynchronous vHost APIs to OVS.
> > With the asynchronous framework, vHost-user can offload memory copy
> > operations to hardware such as Intel® QuickData Technology without
> > blocking the CPU.
> >
> > This series also attempts to highlight notable issues
> > associated with enabling an asynchronous data path in OVS.
> >
> > The OVS packet data path is currently quite synchronous in nature.
> > This poses a problem for implementing an async data path, as there
> > doesn't seem to be a clean way to free the packets at a later point in
> > time without breaking abstractions.
> > This is why freeing of asynchronously sent packets is currently done
> > at the dpif level, per PMD.
> 
> As said in reply to the 'deferral of work' patch-set, OVS is synchronous
> and that is fine, because network devices are asynchronous by their nature.
> OVS is not blocked by memory copies, because these are handled by DMA
> configured and managed by device drivers.  This patch adds DMA handling
> to vhost, making it essentially a physical device to some extent, but
> for some reason the driver for that is implemented inside OVS.  A high-level
> application should not care about memory copies inside the physical device
> or DMA configuration, but the code in this patch looks very much like parts
> of a specific device driver.
>
> Implementation of this feature belongs in the vhost library, which is the
> driver for this (now) physical device.  This way it can be consumed by OVS
> or any other DPDK application without major code changes.
>
> Best regards, Ilya Maximets.

Thanks for the review.
A recent version of the patch with a different architecture was published here: http://patchwork.ozlabs.org/project/openvswitch/list/?series=261277
It would be of great help if we could get your input/feedback on that one too.

Thanks and regards,
Sunil
Ilya Maximets Sept. 15, 2021, 5:23 p.m. UTC | #3
On 9/15/21 18:48, Pai G, Sunil wrote:
> Hi Ilya, 
> 
>> [...]
> 
> Thanks for the review.
> A recent version of the patch with a different architecture was published here: http://patchwork.ozlabs.org/project/openvswitch/list/?series=261277
> It would be of great help if we could get your input/feedback on that one too.

Oops.  Sorry, I meant to reply to the latest patch, but it seems that I don't
have it in my inbox, so I replied to the old one without looking at the date.
I'll send a copy of my reply to v2, as that is what I wanted to do.  Just
to be consistent.

Best regards, Ilya Maximets.