[ovs-dev,0/6] Accelerate peer port forwarding by bypassing DP processing

Message ID 20190402145712.7290-1-elibr@mellanox.com

Message

Eli Britstein April 2, 2019, 2:57 p.m. UTC
Current HW offloading solutions use embedded switches to offload OVS DP
rules while using SR-IOV pass-through interfaces to the guest VMs. This
architecture requires the guest VMs to install vendor-specific drivers
and complicates live migration. Such caveats may force some users to
fall back to virtio interfaces, at software-level performance.

VirtIO performance can be improved by using the HW offloading model
while bridging packets from VF ports to virtio ports.
The bridging logic can be implemented using simple OVS OF rules defining
port-to-port forwarding between VF and virtio interfaces such as:
ovs-ofctl add-flow br-fwd in_port=<VIRTIO>,actions=output:<VF>
ovs-ofctl add-flow br-fwd in_port=<VF>,actions=output:<VIRTIO>
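For illustration, a complete bridging setup along these lines might look as
follows. The bridge and port names here are placeholders, not names used by
the series:

```shell
# Hypothetical setup: vf0 is an SR-IOV VF port, vhost0 a virtio/vhost port.
ovs-vsctl add-br br-fwd
ovs-vsctl add-port br-fwd vf0
ovs-vsctl add-port br-fwd vhost0

# The two port-to-port forwarding rules from the cover letter.
ovs-ofctl add-flow br-fwd "in_port=vhost0,actions=output:vf0"
ovs-ofctl add-flow br-fwd "in_port=vf0,actions=output:vhost0"
```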

This patch set accelerates port-to-port forwarding by generalizing the
existing patch port peer option to all port types. Once a peer is
defined, the OVS DP forwards every packet received on the port directly
to that peer.
This optimization effectively bypasses the packet parsing and rule
matching logic, thus increasing the forwarding performance by more than
2x.


Eli Britstein (6):
  netdev: Introduce peer port name as a netdevice class property
  netdev-vport: Use generic peer port name API for patch ports
  ofproto-dpif: Use peer port as a generic netdev property
  netdev-dpdk: Introduce peer name property for dpdk ports
  ofproto-dpif: Introduce peer port netdev as a netdev property
  dpif-netdev: Accelerate peer port forwarding by bypassing DP
    processing

 lib/dpif-netdev.c      | 17 ++++++++++---
 lib/netdev-dpdk.c      | 55 +++++++++++++++++++++++++++++++++++++++---
 lib/netdev-provider.h  |  7 ++++++
 lib/netdev-vport.c     | 40 ++++++++++++------------------
 lib/netdev-vport.h     |  2 --
 lib/netdev.c           | 18 ++++++++++++++
 lib/netdev.h           |  2 ++
 ofproto/ofproto-dpif.c | 14 +++++------
 8 files changed, 114 insertions(+), 41 deletions(-)

Comments

Eli Britstein April 2, 2019, 3:03 p.m. UTC | #1
Adding the elaboration on each patch. I'll add it in V2 cover-letter if needed.

Patches 1-3 generalize the peer port option as a netdevice class property.

Patch 4 introduces the peer name property for DPDK ports.

Patch 5 keeps the peer netdev as a netdev property, as a preparatory step
towards accelerating the port-to-port forwarding scenario.

Patch 6 bypasses the datapath processing to accelerate the port-to-port
forwarding scenario.

Ilya Maximets April 8, 2019, 10:20 a.m. UTC | #2
On 02.04.2019 17:57, Eli Britstein wrote:
> Current HW offloading solutions use embedded switches to offload OVS DP
> rules while using SRIOV pass-through interfaces to the guest VMs. This
> architecture requires the VM guests to install vendor specific drivers
> and is challenging for live-migration. Such caveats may force some users
> to fall back to SW performance while using virtio interfaces.
> 
> VirtIO performance can be improved by using the HW offloading model
> while bridging packets from VF ports to virtio ports.
> The bridging logic can be implemented using simple OVS OF rules defining
> port-to-port forwarding between VF and virtio interfaces such as:
> ovs-ofctl add-flow br-fwd in_port=<VIRTIO>,actions=output:<VF>
> ovs-ofctl add-flow br-fwd in_port=<VF>,actions=output:<VIRTIO>
> 
> This patch-set accelerates the port-to-port forwarding performance by
> generalizing the current patch port peer option to all port types. Once
> defined, the OVS DP will forward all the packets that are received by
> the port to its specified peer.
> This optimization effectively bypasses the packet parsing and rule
> matching logic, thus increasing the forwarding performance by more than
> 2X.

This looks very strange to me. Why do you need OVS just to forward frames
between 2 ports without even parsing them? Why not just run testpmd/l2fwd or
some other application for that purpose?

There are a few major issues with the concept:

1. Configuring the 'peer' leads to a situation where the PMD thread completely
   ignores all the OF rules. It will forward packets even if you configure
   OVS to drop all incoming traffic. This seems wrong. OVS is, first of all,
   an OF switch and it should not ignore OF rules. Otherwise, why do you need
   OVS at all in your setup?

2. For patch ports, every packet that OVS sends to a patch port appears on its
   peer. However, this patch set implements the opposite behaviour for DPDK
   ports: every packet received from a DPDK port is sent by OVS to its peer.
   Mixing these two behaviours is really confusing and should be avoided.

The implementation is completely unsafe, and I'm surprised that you didn't
hit any races. Probably you didn't run any realistic tests.

I'll reply in more detail about the implementation if needed. However, I'd like
to see a better rationale for this series first, because it makes no sense to me.

Best regards, Ilya Maximets.