Message ID: 1482503852-12438-7-git-send-email-jasowang@redhat.com
State: Accepted, archived
Delegated to: David Miller
On 16-12-23 06:37 AM, Jason Wang wrote:
> We don't update the ewma rx buf size in the XDP case. This leads to
> underestimation of the rx buf size, which causes the host to produce
> more than one buffer per packet. This greatly increases the likelihood
> of XDP page linearization.
>
> Cc: John Fastabend <john.r.fastabend@intel.com>
> Signed-off-by: Jason Wang <jasowang@redhat.com>
> ---
>  drivers/net/virtio_net.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 0778dc8..77ae358 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -584,10 +584,12 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>  			put_page(page);
>  			head_skb = page_to_skb(vi, rq, xdp_page,
>  					       0, len, PAGE_SIZE);
> +			ewma_pkt_len_add(&rq->mrg_avg_pkt_len, len);
>  			return head_skb;
>  		}
>  		break;
>  	case XDP_TX:
> +		ewma_pkt_len_add(&rq->mrg_avg_pkt_len, len);
>  		if (unlikely(xdp_page != page))
>  			goto err_xdp;
>  		rcu_read_unlock();
> @@ -596,6 +598,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>  	default:
>  		if (unlikely(xdp_page != page))
>  			__free_pages(xdp_page, 0);
> +		ewma_pkt_len_add(&rq->mrg_avg_pkt_len, len);
>  		goto err_xdp;
>  	}
>  }

Looks needed, although I guess it will only matter with MTU > ETH_DATA_LEN because of the clamp in get_mergeable_buf_len(). The XDP setup does allow MTU up to page_size - hdr, though, so it will certainly happen with MTU > 1500. I need to add some tests at various MTUs to my setup.

Acked-by: John Fastabend <john.r.fastabend@intel.com>
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 0778dc8..77ae358 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -584,10 +584,12 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 			put_page(page);
 			head_skb = page_to_skb(vi, rq, xdp_page,
 					       0, len, PAGE_SIZE);
+			ewma_pkt_len_add(&rq->mrg_avg_pkt_len, len);
 			return head_skb;
 		}
 		break;
 	case XDP_TX:
+		ewma_pkt_len_add(&rq->mrg_avg_pkt_len, len);
 		if (unlikely(xdp_page != page))
 			goto err_xdp;
 		rcu_read_unlock();
@@ -596,6 +598,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 	default:
 		if (unlikely(xdp_page != page))
 			__free_pages(xdp_page, 0);
+		ewma_pkt_len_add(&rq->mrg_avg_pkt_len, len);
 		goto err_xdp;
 	}
 }
We don't update the ewma rx buf size in the XDP case. This leads to underestimation of the rx buf size, which causes the host to produce more than one buffer per packet. This greatly increases the likelihood of XDP page linearization.

Cc: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/net/virtio_net.c | 3 +++
 1 file changed, 3 insertions(+)