From patchwork Mon Aug 8 04:51:27 2011
X-Patchwork-Submitter: Tom Herbert
X-Patchwork-Id: 108858
X-Patchwork-Delegate: davem@davemloft.net
Date: Sun, 7 Aug 2011 21:51:27 -0700 (PDT)
From: Tom Herbert
To: davem@davemloft.net, netdev@vger.kernel.org
Subject: [RFC PATCH v2 6/9] forcedeth: Support for byte queue limits

Changes to forcedeth to use byte queue limits.
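For reference, the byte queue limits (BQL) interface touches a driver at
three points: bytes are credited to the queue when a packet is posted to
the hardware ring, debited when completions are reclaimed, and the
accounting is reset when the ring is drained. A minimal sketch of that
pattern follows (not part of this patch; the foo_* names are placeholders,
and netdev_sent_queue() is shown in the three-argument form this RFC
series uses):

#include <linux/netdevice.h>

static netdev_tx_t foo_start_xmit(struct sk_buff *skb,
				  struct net_device *dev)
{
	/* ... fill descriptors and hand the skb to the NIC ... */

	/* Credit one packet and its bytes to the BQL accounting. */
	netdev_sent_queue(dev, 1, skb->len);
	return NETDEV_TX_OK;
}

static void foo_tx_clean(struct net_device *dev)
{
	unsigned int pkts = 0, bytes = 0;

	/* For each descriptor the hardware has completed:
	 *	pkts++;
	 *	bytes += skb->len;
	 *	dev_kfree_skb_any(skb);
	 */

	/* Report the whole batch once per cleanup pass. */
	netdev_completed_queue(dev, pkts, bytes);
}

static void foo_drain_tx(struct net_device *dev)
{
	/* ... free any skbs still sitting on the ring ... */

	/* Discard the in-flight BQL accounting. */
	netdev_reset_queue(dev);
}

In the patch below, nv_start_xmit()/nv_start_xmit_optimized(),
nv_tx_done()/nv_tx_done_optimized() and nv_drain_tx() fill these three
roles respectively.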
Signed-off-by: Tom Herbert
---
 drivers/net/forcedeth.c |   18 ++++++++++++++++++
 1 files changed, 18 insertions(+), 0 deletions(-)

diff --git a/drivers/net/forcedeth.c b/drivers/net/forcedeth.c
index e55df30..fcd664a 100644
--- a/drivers/net/forcedeth.c
+++ b/drivers/net/forcedeth.c
@@ -1924,6 +1924,7 @@ static void nv_drain_tx(struct net_device *dev)
 		np->tx_skb[i].first_tx_desc = NULL;
 		np->tx_skb[i].next_tx_ctx = NULL;
 	}
+	netdev_reset_queue(np->dev);
 	np->tx_pkts_in_progress = 0;
 	np->tx_change_owner = NULL;
 	np->tx_end_flip = NULL;
@@ -2178,6 +2179,9 @@ static netdev_tx_t nv_start_xmit(struct sk_buff *skb, struct net_device *dev)
 
 	/* set tx flags */
 	start_tx->flaglen |= cpu_to_le32(tx_flags | tx_flags_extra);
+
+	netdev_sent_queue(np->dev, 1, skb->len);
+
 	np->put_tx.orig = put_tx;
 
 	spin_unlock_irqrestore(&np->lock, flags);
@@ -2317,6 +2321,9 @@ static netdev_tx_t nv_start_xmit_optimized(struct sk_buff *skb,
 
 	/* set tx flags */
 	start_tx->flaglen |= cpu_to_le32(tx_flags | tx_flags_extra);
+
+	netdev_sent_queue(np->dev, 1, skb->len);
+
 	np->put_tx.ex = put_tx;
 
 	spin_unlock_irqrestore(&np->lock, flags);
@@ -2354,6 +2361,7 @@ static int nv_tx_done(struct net_device *dev, int limit)
 	u32 flags;
 	int tx_work = 0;
 	struct ring_desc *orig_get_tx = np->get_tx.orig;
+	unsigned int bytes_compl = 0;
 
 	while ((np->get_tx.orig != np->put_tx.orig) &&
 	       !((flags = le32_to_cpu(np->get_tx.orig->flaglen)) & NV_TX_VALID) &&
@@ -2375,6 +2383,7 @@ static int nv_tx_done(struct net_device *dev, int limit)
 				dev->stats.tx_packets++;
 				dev->stats.tx_bytes += np->get_tx_ctx->skb->len;
 			}
+			bytes_compl += np->get_tx_ctx->skb->len;
 			dev_kfree_skb_any(np->get_tx_ctx->skb);
 			np->get_tx_ctx->skb = NULL;
 			tx_work++;
@@ -2393,6 +2402,7 @@ static int nv_tx_done(struct net_device *dev, int limit)
 				dev->stats.tx_packets++;
 				dev->stats.tx_bytes += np->get_tx_ctx->skb->len;
 			}
+			bytes_compl += np->get_tx_ctx->skb->len;
 			dev_kfree_skb_any(np->get_tx_ctx->skb);
 			np->get_tx_ctx->skb = NULL;
 			tx_work++;
@@ -2403,6 +2413,9 @@ static int nv_tx_done(struct net_device *dev, int limit)
 		if (unlikely(np->get_tx_ctx++ == np->last_tx_ctx))
 			np->get_tx_ctx = np->first_tx_ctx;
 	}
+
+	netdev_completed_queue(np->dev, tx_work, bytes_compl);
+
 	if (unlikely((np->tx_stop == 1) && (np->get_tx.orig != orig_get_tx))) {
 		np->tx_stop = 0;
 		netif_wake_queue(dev);
@@ -2416,6 +2429,7 @@ static int nv_tx_done_optimized(struct net_device *dev, int limit)
 	u32 flags;
 	int tx_work = 0;
 	struct ring_desc_ex *orig_get_tx = np->get_tx.ex;
+	unsigned long bytes_cleaned = 0;
 
 	while ((np->get_tx.ex != np->put_tx.ex) &&
 	       !((flags = le32_to_cpu(np->get_tx.ex->flaglen)) & NV_TX2_VALID) &&
@@ -2435,6 +2449,7 @@ static int nv_tx_done_optimized(struct net_device *dev, int limit)
 			}
 		}
 
+		bytes_cleaned += np->get_tx_ctx->skb->len;
 		dev_kfree_skb_any(np->get_tx_ctx->skb);
 		np->get_tx_ctx->skb = NULL;
 		tx_work++;
@@ -2447,6 +2462,9 @@ static int nv_tx_done_optimized(struct net_device *dev, int limit)
 		if (unlikely(np->get_tx_ctx++ == np->last_tx_ctx))
 			np->get_tx_ctx = np->first_tx_ctx;
 	}
+
+	netdev_completed_queue(np->dev, tx_work, bytes_cleaned);
+
 	if (unlikely((np->tx_stop == 1) && (np->get_tx.ex != orig_get_tx))) {
 		np->tx_stop = 0;
 		netif_wake_queue(dev);
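Note that both completion paths accumulate the freed bytes inside the
reclaim loop (bytes_compl, bytes_cleaned) and report them with a single
netdev_completed_queue() call per nv_tx_done()/nv_tx_done_optimized()
invocation, rather than updating the queue limit once per packet.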