| Message ID | 42ca2494c92f572388e3ab4c6f613dd4f038361b.1305846412.git.mst@redhat.com |
|---|---|
| State | Not Applicable, archived |
| Delegated to: | David Miller |
On Fri, 20 May 2011 02:11:47 +0300, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> virtio net uses the number of sg entries to
> check for TX ring capacity freed. But this
> gives incorrect results when indirect buffers
> are used. Use the new capacity API instead.

OK, but this explanation needs enhancement, such as noting the actual
results of that miscalculation.  Something like:

	virtio_net uses the number of sg entries in the skb it frees to
	calculate how many descriptors in the ring have just been made
	available.  But this value is an overestimate: with indirect buffers
	each skb only uses one descriptor entry, meaning we may wake the
	queue only to find we still can't transmit anything.

	Using the new virtqueue_get_capacity() call, we can exactly
	determine the remaining capacity, so we should use that instead.

But, here's the side effect:

> 			/* More just got used, free them then recheck. */
> -			capacity += free_old_xmit_skbs(vi);
> +			free_old_xmit_skbs(vi);
> +			capacity = virtqueue_get_capacity(vi->svq);
> 			if (capacity >= 2+MAX_SKB_FRAGS) {

That capacity >= 2+MAX_SKB_FRAGS is too much for indirect buffers.  This
means we waste 20 entries in the ring, but OTOH if we hit OOM we fall
back to direct buffers and we *will* need this.  Which means this
comment in the driver is now wrong:

	/* This can happen with OOM and indirect buffers. */
	if (unlikely(capacity < 0)) {
		if (net_ratelimit()) {
			if (likely(capacity == -ENOMEM)) {
				dev_warn(&dev->dev,
					 "TX queue failure: out of memory\n");
			} else {
				dev->stats.tx_fifo_errors++;
				dev_warn(&dev->dev,
					 "Unexpected TX queue failure: %d\n",
					 capacity);
			}
		}
		dev->stats.tx_dropped++;
		kfree_skb(skb);
		return NETDEV_TX_OK;
	}
	virtqueue_kick(vi->svq);

So I'm not applying this patch (nor the virtqueue_get_capacity
predecessor) for the moment.

Thanks,
Rusty.
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index f685324..f33c92b 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -509,19 +509,17 @@ again:
 	return received;
 }
 
-static unsigned int free_old_xmit_skbs(struct virtnet_info *vi)
+static void free_old_xmit_skbs(struct virtnet_info *vi)
 {
 	struct sk_buff *skb;
-	unsigned int len, tot_sgs = 0;
+	unsigned int len;
 
 	while ((skb = virtqueue_get_buf(vi->svq, &len)) != NULL) {
 		pr_debug("Sent skb %p\n", skb);
 		vi->dev->stats.tx_bytes += skb->len;
 		vi->dev->stats.tx_packets++;
-		tot_sgs += skb_vnet_hdr(skb)->num_sg;
 		dev_kfree_skb_any(skb);
 	}
-	return tot_sgs;
 }
 
 static int xmit_skb(struct virtnet_info *vi, struct sk_buff *skb)
@@ -611,7 +609,8 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
 		netif_stop_queue(dev);
 		if (unlikely(!virtqueue_enable_cb_delayed(vi->svq))) {
 			/* More just got used, free them then recheck. */
-			capacity += free_old_xmit_skbs(vi);
+			free_old_xmit_skbs(vi);
+			capacity = virtqueue_get_capacity(vi->svq);
 			if (capacity >= 2+MAX_SKB_FRAGS) {
 				netif_start_queue(dev);
 				virtqueue_disable_cb(vi->svq);
virtio net uses the number of sg entries to
check for TX ring capacity freed. But this
gives incorrect results when indirect buffers
are used. Use the new capacity API instead.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 drivers/net/virtio_net.c |    9 ++++-----
 1 files changed, 4 insertions(+), 5 deletions(-)