Message ID: 158634669527.707275.1340397871511076658.stgit@firesoul
State: RFC
Delegated to: BPF Maintainers
Series: [RFC,v2,01/33] xdp: add frame size to xdp_buff
> -----Original Message-----
> From: Jesper Dangaard Brouer <brouer@redhat.com>
> Sent: Wednesday, April 8, 2020 7:52 AM
> To: sameehj@amazon.com
> Cc: Wei Liu <wei.liu@kernel.org>; KY Srinivasan <kys@microsoft.com>;
> Haiyang Zhang <haiyangz@microsoft.com>; Stephen Hemminger
> <sthemmin@microsoft.com>; Jesper Dangaard Brouer <brouer@redhat.com>;
> netdev@vger.kernel.org; bpf@vger.kernel.org; zorik@amazon.com;
> akiyano@amazon.com; gtzalik@amazon.com; Toke Høiland-Jørgensen
> <toke@redhat.com>; Daniel Borkmann <borkmann@iogearbox.net>;
> Alexei Starovoitov <alexei.starovoitov@gmail.com>; John Fastabend
> <john.fastabend@gmail.com>; Alexander Duyck <alexander.duyck@gmail.com>;
> Jeff Kirsher <jeffrey.t.kirsher@intel.com>; David Ahern <dsahern@gmail.com>;
> Willem de Bruijn <willemdebruijn.kernel@gmail.com>; Ilias Apalodimas
> <ilias.apalodimas@linaro.org>; Lorenzo Bianconi <lorenzo@kernel.org>;
> Saeed Mahameed <saeedm@mellanox.com>
> Subject: [PATCH RFC v2 12/33] hv_netvsc: add XDP frame size to driver
>
> The hyperv NIC drivers XDP implementation is rather disappointing as it
> will be a slowdown to enable XDP on this driver, given it will allocate a
> new page for each packet and copy over the payload, before invoking the
> XDP BPF-prog.

As explained when I submitted the XDP support for hv_netvsc -- without XDP,
this driver already allocates memory and does a copy for every packet. So the
page allocation for the XDP data buffer is not slower than the existing code
path. Also, an optimization that allocates a page only once, and re-uses it
within a NAPI cycle, will be done.

And, my XDP implementation for hv_netvsc transparently passes the xdp_prog
to the associated VF NIC. Many of the Azure VMs are using SR-IOV, so the
majority of the data is actually processed directly on the VF driver's XDP
path. So the overhead of the synthetic data path (hv_netvsc) is minimal.

Thanks,
- Haiyang
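The page-reuse optimization described above can be sketched in userspace C. This is a hypothetical illustration, not hv_netvsc or kernel code: the names (napi_ctx, recv_pkt, get_xdp_page) and the single-buffer model are assumptions of mine. The idea is to reuse one copy buffer across packets in a NAPI cycle and allocate a fresh one only when the buffer is actually handed up the stack as an SKB.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SZ 4096

/* Per-channel state; illustrative only, not a kernel structure. */
struct napi_ctx {
	void *xdp_page;  /* cached copy buffer, reused while XDP drops */
	int allocs;      /* allocation count, for illustration only */
};

/* Lazily allocate the copy buffer for the current NAPI cycle. */
static void *get_xdp_page(struct napi_ctx *ctx)
{
	if (!ctx->xdp_page) {
		ctx->xdp_page = malloc(PAGE_SZ);
		ctx->allocs++;
	}
	return ctx->xdp_page;
}

/* Receive one packet: copy the payload into the cached buffer and act
 * on the (already decided) XDP verdict. When the buffer is consumed,
 * i.e. turned into an SKB and passed up, drop our reference so the
 * next packet allocates a new one; on XDP_DROP it stays cached. */
static void recv_pkt(struct napi_ctx *ctx, const void *data, size_t len,
		     int consumed)
{
	void *page = get_xdp_page(ctx);

	memcpy(page, data, len > PAGE_SZ ? PAGE_SZ : len);
	if (consumed)
		ctx->xdp_page = NULL;  /* ownership moved to the SKB */
}
```

With this scheme, a burst of XDP_DROP packets costs one allocation instead of one per packet, which is why the per-packet page allocation need not be slower than the existing copy path.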
diff --git a/drivers/net/hyperv/netvsc_bpf.c b/drivers/net/hyperv/netvsc_bpf.c
index b86611041db6..1e0c024b0a93 100644
--- a/drivers/net/hyperv/netvsc_bpf.c
+++ b/drivers/net/hyperv/netvsc_bpf.c
@@ -49,6 +49,7 @@ u32 netvsc_run_xdp(struct net_device *ndev, struct netvsc_channel *nvchan,
 	xdp_set_data_meta_invalid(xdp);
 	xdp->data_end = xdp->data + len;
 	xdp->rxq = &nvchan->xdp_rxq;
+	xdp->frame_sz = PAGE_SIZE;
 	xdp->handle = 0;
 
 	memcpy(xdp->data, data, len);
diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
index d8e86bdbfba1..651344fea0a5 100644
--- a/drivers/net/hyperv/netvsc_drv.c
+++ b/drivers/net/hyperv/netvsc_drv.c
@@ -794,7 +794,7 @@ static struct sk_buff *netvsc_alloc_recv_skb(struct net_device *net,
 	if (xbuf) {
 		unsigned int hdroom = xdp->data - xdp->data_hard_start;
 		unsigned int xlen = xdp->data_end - xdp->data;
-		unsigned int frag_size = netvsc_xdp_fraglen(hdroom + xlen);
+		unsigned int frag_size = xdp->frame_sz;
 
 		skb = build_skb(xbuf, frag_size);
The hyperv NIC drivers XDP implementation is rather disappointing, as
enabling XDP on this driver will be a slowdown: it allocates a new page
for each packet and copies over the payload, before invoking the XDP
BPF-prog. The only positive thing is that it is easy to determine
xdp.frame_sz.

When XDP is enabled on this driver, XDP_PASS and XDP_TX will create the
SKB via build_skb (based on the newly allocated page). Now, using the XDP
frame_sz, this will provide more skb_tailroom, which the netstack can use
for SKB coalescing (e.g. tcp_try_coalesce -> skb_try_coalesce).

Cc: Wei Liu <wei.liu@kernel.org>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
 drivers/net/hyperv/netvsc_bpf.c | 1 +
 drivers/net/hyperv/netvsc_drv.c | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
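The tailroom gain that motivates the frag_size change can be sketched in userspace C. This is an illustration under assumed constants (4K pages, 64-byte cache lines, an approximate skb_shared_info size); the real values are arch- and config-dependent, and skb_data_align here merely mimics the kernel's SKB_DATA_ALIGN macro.

```c
#include <assert.h>

#define CACHELINE    64u
#define SHINFO_SIZE  320u  /* approx sizeof(struct skb_shared_info) */

/* Mimics the kernel's SKB_DATA_ALIGN(): round up to a cache line. */
static unsigned int skb_data_align(unsigned int len)
{
	return (len + CACHELINE - 1) & ~(CACHELINE - 1);
}

/* Tailroom left in a build_skb() buffer of frag_size bytes holding
 * hdroom bytes of headroom and len bytes of packet data: build_skb()
 * reserves the (aligned) shared info at the end of the buffer, and
 * whatever remains after headroom and data is tailroom. */
static unsigned int tailroom(unsigned int frag_size, unsigned int hdroom,
			     unsigned int len)
{
	return frag_size - skb_data_align(SHINFO_SIZE) - hdroom - len;
}
```

Passing frag_size = xdp->frame_sz (a full page) instead of a buffer sized tightly to headroom + data leaves the netstack roughly 2KB of tailroom for an MTU-sized packet, which is what skb_try_coalesce can exploit.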