From patchwork Thu Nov 22 13:16:58 2012
Message-ID: <1353590218.26346.214.camel@shinybook.infradead.org>
Subject: [PATCH] 8139cp: enable bql
From: David Woodhouse
To: netdev@vger.kernel.org, codel@lists.bufferbloat.net
Cc: Dave Taht
Date: Thu, 22 Nov 2012 13:16:58 +0000

This adds support for byte queue limits on the RTL8139C+.

Tested on real hardware.

Signed-off-by: David Woodhouse
Acked-by: Dave Täht
---
dtaht, looking over my shoulder, says it seems to be doing the right
thing...

--- drivers/net/ethernet/realtek/8139cp.c~	2012-11-21 20:49:47.000000000 +0000
+++ drivers/net/ethernet/realtek/8139cp.c	2012-11-22 13:07:26.119076315 +0000
@@ -648,6 +648,7 @@ static void cp_tx (struct cp_private *cp
 {
 	unsigned tx_head = cp->tx_head;
 	unsigned tx_tail = cp->tx_tail;
+	unsigned bytes_compl = 0, pkts_compl = 0;
 
 	while (tx_tail != tx_head) {
 		struct cp_desc *txd = cp->tx_ring + tx_tail;
@@ -666,6 +667,9 @@ static void cp_tx (struct cp_private *cp
 				 le32_to_cpu(txd->opts1) & 0xffff,
 				 PCI_DMA_TODEVICE);
 
+		bytes_compl += skb->len;
+		pkts_compl++;
+
 		if (status & LastFrag) {
 			if (status & (TxError | TxFIFOUnder)) {
 				netif_dbg(cp, tx_err, cp->dev,
@@ -697,6 +701,7 @@ static void cp_tx (struct cp_private *cp
 
 	cp->tx_tail = tx_tail;
 
+	netdev_completed_queue(cp->dev, pkts_compl, bytes_compl);
 	if (TX_BUFFS_AVAIL(cp) > (MAX_SKB_FRAGS + 1))
 		netif_wake_queue(cp->dev);
 }
@@ -843,6 +848,8 @@ static netdev_tx_t cp_start_xmit (struct
 		wmb();
 	}
 	cp->tx_head = entry;
+
+	netdev_sent_queue(dev, skb->len);
 	netif_dbg(cp, tx_queued, cp->dev, "tx queued, slot %d, skblen %d\n",
 		  entry, skb->len);
 	if (TX_BUFFS_AVAIL(cp) <= (MAX_SKB_FRAGS + 1))
@@ -937,6 +944,8 @@ static void cp_stop_hw (struct cp_privat
 
 	cp->rx_tail = 0;
 	cp->tx_head = cp->tx_tail = 0;
+
+	netdev_reset_queue(cp->dev);
 }
 
 static void cp_reset_hw (struct cp_private *cp)
@@ -981,6 +990,8 @@ static inline void cp_start_hw (struct c
 	cpw32_f(TxRingAddr + 4, (ring_dma >> 16) >> 16);
 
 	cpw8(Cmd, RxOn | TxOn);
+
+	netdev_reset_queue(cp->dev);
 }
 
 static void cp_enable_irq(struct cp_private *cp)
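
For reference, the three netdev_*_queue() calls above are the standard
BQL hook pattern, and the sketch below shows roughly how they fit
together in a generic driver. It is only an illustration: the my_*
names are hypothetical placeholders, not 8139cp code; only the
netdev_sent_queue(), netdev_completed_queue() and netdev_reset_queue()
helpers are the real API from <linux/netdevice.h>.

/*
 * Sketch of the BQL accounting pattern applied by the patch above.
 * The my_* functions are hypothetical placeholders; only the
 * netdev_*_queue() helpers (from <linux/netdevice.h>) are real.
 */
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Transmit path: tell BQL how many bytes were just queued. */
static netdev_tx_t my_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	/* ... map the skb and write its TX descriptors ... */

	netdev_sent_queue(dev, skb->len);
	return NETDEV_TX_OK;
}

/* TX completion: total up the finished work, then report it once. */
static void my_tx_complete(struct net_device *dev)
{
	unsigned int bytes_compl = 0, pkts_compl = 0;

	/* ... for each descriptor the NIC has finished:
	 *	bytes_compl += skb->len;
	 *	pkts_compl++;
	 * ... */

	netdev_completed_queue(dev, pkts_compl, bytes_compl);
}

/* Ring teardown/re-init: the BQL counters must be zeroed as well,
 * otherwise BQL still believes bytes are in flight and can keep
 * the queue stopped indefinitely. */
static void my_reset_rings(struct net_device *dev)
{
	/* ... reset hardware TX/RX ring state ... */

	netdev_reset_queue(dev);
}

Batching pkts_compl/bytes_compl and reporting them with a single
netdev_completed_queue() call per completion pass, as the patch does,
keeps the accounting cheap; calling netdev_reset_queue() from both the
stop and start paths ensures the sent/completed counters never survive
a ring reset out of sync.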