[bpf-next,V2,7/8] bpf/xdp: non-map redirect can avoid calling ndo_xdp_flush

Message ID 152775721817.24817.11576562399044807823.stgit@firesoul
State Accepted, archived
Delegated to: BPF Maintainers
Series bpf/xdp: add flags argument to ndo_xdp_xmit and flag flush operation

Commit Message

Jesper Dangaard Brouer May 31, 2018, 9 a.m. UTC
This is the first real user of the XDP_XMIT_FLUSH flag.

As pointed out many times, XDP_REDIRECT without using BPF maps is
significantly slower than the map variant.  This is primarily due to the
lack of bulking, as the ndo_xdp_flush operation is required after each
frame (to avoid frames hanging on the egress device).

It is still possible to optimize this case.  Instead of invoking two
NDO indirect calls, which are very expensive with CONFIG_RETPOLINE,
instruct ndo_xdp_xmit to flush via the XDP_XMIT_FLUSH flag.

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
 net/core/filter.c |    3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
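
For illustration only (this is not part of the patch): a driver that
implements the flags argument might honor XDP_XMIT_FLUSH along the lines
of the sketch below.  All mydrv_* identifiers are hypothetical; real
drivers also handle per-queue selection and freeing of frames that could
not be enqueued.  The point is that the doorbell/flush work moves behind
the same indirect call that transmits the frame, so callers such as
__bpf_tx_xdp() no longer need a second ndo_xdp_flush() NDO call.

	/* Hypothetical driver-side sketch of honoring XDP_XMIT_FLUSH. */
	static int mydrv_xdp_xmit(struct net_device *dev, int n,
				  struct xdp_frame **frames, u32 flags)
	{
		struct mydrv_tx_ring *ring = mydrv_pick_xdp_ring(dev);
		int i, sent = 0;

		if (unlikely(flags & ~XDP_XMIT_FLUSH))
			return -EINVAL;

		for (i = 0; i < n; i++) {
			if (mydrv_ring_enqueue(ring, frames[i]))
				break;		/* ring full: report partial count */
			sent++;
		}

		/* Kick the hardware tail/doorbell only when the caller asked
		 * for it, so a single NDO call both transmits and flushes. */
		if (flags & XDP_XMIT_FLUSH)
			mydrv_ring_kick(ring);

		return sent;
	}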

Comments

Song Liu May 31, 2018, 4:16 p.m. UTC | #1
On Thu, May 31, 2018 at 2:00 AM, Jesper Dangaard Brouer
<brouer@redhat.com> wrote:
> This is the first real user of the XDP_XMIT_FLUSH flag.
>
> As pointed out many times, XDP_REDIRECT without using BPF maps is
> significantly slower than the map variant.  This is primarily due to the
> lack of bulking, as the ndo_xdp_flush operation is required after each
> frame (to avoid frames hanging on the egress device).
>
> It is still possible to optimize this case.  Instead of invoking two
> NDO indirect calls, which are very expensive with CONFIG_RETPOLINE,
> instruct ndo_xdp_xmit to flush via the XDP_XMIT_FLUSH flag.
>
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>

Acked-by: Song Liu <songliubraving@fb.com>

> ---
>  net/core/filter.c |    3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/net/core/filter.c b/net/core/filter.c
> index 6a21dbcad350..6981b4608979 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -3056,10 +3056,9 @@ static int __bpf_tx_xdp(struct net_device *dev,
>         if (unlikely(!xdpf))
>                 return -EOVERFLOW;
>
> -       sent = dev->netdev_ops->ndo_xdp_xmit(dev, 1, &xdpf, 0);
> +       sent = dev->netdev_ops->ndo_xdp_xmit(dev, 1, &xdpf, XDP_XMIT_FLUSH);
>         if (sent <= 0)
>                 return sent;
> -       dev->netdev_ops->ndo_xdp_flush(dev);
>         return 0;
>  }
>
>

Patch

diff --git a/net/core/filter.c b/net/core/filter.c
index 6a21dbcad350..6981b4608979 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3056,10 +3056,9 @@ static int __bpf_tx_xdp(struct net_device *dev,
 	if (unlikely(!xdpf))
 		return -EOVERFLOW;
 
-	sent = dev->netdev_ops->ndo_xdp_xmit(dev, 1, &xdpf, 0);
+	sent = dev->netdev_ops->ndo_xdp_xmit(dev, 1, &xdpf, XDP_XMIT_FLUSH);
 	if (sent <= 0)
 		return sent;
-	dev->netdev_ops->ndo_xdp_flush(dev);
 	return 0;
 }