[ovs-dev,v3] Documentation: add notes for TSO & i40e

Message ID 20200120130936.21050-1-ciara.loftus@intel.com
State New

Commit Message

Ciara Loftus Jan. 20, 2020, 1:09 p.m. UTC
When using TSO in OVS-DPDK with an i40e device, the following
patch is required for DPDK, which fixes an issue on the TSO path:
https://patches.dpdk.org/patch/64136/
Document this as a limitation until a DPDK release with the fix
included is supported by OVS.

Also, document best known methods for performance tuning when
testing TSO with the tool iperf.

Fixes: 29cf9c1b3b9c ("userspace: Add TCP Segmentation Offload support")
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Acked-by: Flavio Leitner <fbl@sysclose.org>
---
v3:
- Added Fixes tag
- Removed unwanted manpages.mk change

v2:
- rebased to master
- changed patch links from net-next tree to patchwork
---
 Documentation/topics/userspace-tso.rst | 27 ++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

Comments

Flavio Leitner Jan. 20, 2020, 2:45 p.m. UTC | #1
ACK, thanks!
fbl

Stokes, Ian Jan. 21, 2020, 4:30 p.m. UTC | #2
Thanks All,

pushed to master.

Regards
Ian


Patch

diff --git a/Documentation/topics/userspace-tso.rst b/Documentation/topics/userspace-tso.rst
index 893c64839..94eddc0b2 100644
--- a/Documentation/topics/userspace-tso.rst
+++ b/Documentation/topics/userspace-tso.rst
@@ -96,3 +96,30 @@  datapath must support TSO or packets using that feature will be dropped
 on ports without TSO support.  That also means guests using vhost-user
 in client mode will receive TSO packet regardless of TSO being enabled
 or disabled within the guest.
+
+When the NIC performing the segmentation uses the i40e DPDK PMD, a fix
+must be included in the DPDK build; otherwise TSO will not work. The fix can
+be found on `DPDK patchwork`__.
+
+__ https://patches.dpdk.org/patch/64136/
+
+This fix is expected to be included in the 19.11.1 release. When OVS migrates
+to this DPDK release, this limitation can be removed.
+
+~~~~~~~~~~~~~~~~~~
+Performance Tuning
+~~~~~~~~~~~~~~~~~~
+
+iperf is often used to test TSO performance. Care must be taken when
+configuring the environment in which the iperf server process runs.
+Since the iperf server uses the NIC's kernel driver, IRQs will be generated.
+By default with some NICs, e.g. i40e, the IRQs will land on the same core as
+the one used by the server process, provided the number of NIC queues is
+greater than or equal to that lcore id. This causes contention between the
+iperf server process and the IRQs. For optimal performance, it is suggested
+to pin the IRQs to their own core. To change the affinity associated with a
+given IRQ number, 'echo' the desired coremask to the file
+``/proc/irq/<number>/smp_affinity``.
+For more on SMP affinity, refer to the `Linux kernel documentation`__.
+
+__ https://www.kernel.org/doc/Documentation/IRQ-affinity.txt
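The IRQ-pinning step described in the patch can be sketched in shell as follows. The IRQ number (128) and target core (2) are placeholders, not values from the patch; the real IRQ numbers for a given NIC are listed in /proc/interrupts, and the actual write requires root.

```shell
#!/bin/sh
# Sketch of pinning a NIC IRQ to a dedicated core, per the doc above.
# Placeholder values -- adjust after inspecting /proc/interrupts.
core=2
irq=128

# The smp_affinity file takes a hex CPU bitmask: bit N set selects CPU N,
# so core 2 corresponds to mask 0x4.
mask=$(printf '%x' $((1 << core)))
echo "coremask for core $core is 0x$mask"

# Apply the mask (needs root); left commented so the sketch is safe to run.
# echo "$mask" > "/proc/irq/$irq/smp_affinity"
```

With the IRQs moved off the iperf server's core, the server process no longer shares cycles with interrupt handling, which is the contention the patch describes.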