From patchwork Tue Sep 20 07:11:22 2011
X-Patchwork-Submitter: "Kirsher, Jeffrey T"
X-Patchwork-Id: 115456
X-Patchwork-Delegate: davem@davemloft.net
From: Jeff Kirsher
To: davem@davemloft.net
Cc: Alexander Duyck, netdev@vger.kernel.org, gospo@redhat.com,
 Jeff Kirsher
Subject: [net-next 1/9] igb: Update RXDCTL/TXDCTL configurations
Date: Tue, 20 Sep 2011 00:11:22 -0700
Message-Id: <1316502690-25488-2-git-send-email-jeffrey.t.kirsher@intel.com>
X-Mailer: git-send-email 1.7.6.2
In-Reply-To: <1316502690-25488-1-git-send-email-jeffrey.t.kirsher@intel.com>
References: <1316502690-25488-1-git-send-email-jeffrey.t.kirsher@intel.com>
X-Mailing-List: netdev@vger.kernel.org

From: Alexander Duyck

This change cleans up the RXDCTL and TXDCTL configurations and optimizes
RX performance by batching descriptor write-backs on all hardware other
than the 82576.
Signed-off-by: Alexander Duyck
Tested-by: Aaron Brown
Signed-off-by: Jeff Kirsher
---
 drivers/net/ethernet/intel/igb/igb.h      |  5 +++--
 drivers/net/ethernet/intel/igb/igb_main.c | 23 +++++++++--------------
 2 files changed, 12 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ethernet/intel/igb/igb.h b/drivers/net/ethernet/intel/igb/igb.h
index 265e151..577fd3e 100644
--- a/drivers/net/ethernet/intel/igb/igb.h
+++ b/drivers/net/ethernet/intel/igb/igb.h
@@ -100,11 +100,12 @@ struct vf_data_storage {
  */
 #define IGB_RX_PTHRESH                     8
 #define IGB_RX_HTHRESH                     8
-#define IGB_RX_WTHRESH                     1
 #define IGB_TX_PTHRESH                     8
 #define IGB_TX_HTHRESH                     1
+#define IGB_RX_WTHRESH                     ((hw->mac.type == e1000_82576 && \
+                                             adapter->msix_entries) ? 1 : 4)
 #define IGB_TX_WTHRESH                     ((hw->mac.type == e1000_82576 && \
-                                            adapter->msix_entries) ? 1 : 16)
+                                             adapter->msix_entries) ? 1 : 16)
 
 /* this is the size past which hardware will drop packets when setting LPE=0 */
 #define MAXIMUM_ETHERNET_VLAN_SIZE 1522
diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index 3cb1bc9..aa78c10 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -2666,14 +2666,12 @@ void igb_configure_tx_ring(struct igb_adapter *adapter,
 			   struct igb_ring *ring)
 {
 	struct e1000_hw *hw = &adapter->hw;
-	u32 txdctl;
+	u32 txdctl = 0;
 	u64 tdba = ring->dma;
 	int reg_idx = ring->reg_idx;
 
 	/* disable the queue */
-	txdctl = rd32(E1000_TXDCTL(reg_idx));
-	wr32(E1000_TXDCTL(reg_idx),
-	     txdctl & ~E1000_TXDCTL_QUEUE_ENABLE);
+	wr32(E1000_TXDCTL(reg_idx), 0);
 	wrfl();
 	mdelay(10);
 
@@ -2685,7 +2683,7 @@ void igb_configure_tx_ring(struct igb_adapter *adapter,
 
 	ring->head = hw->hw_addr + E1000_TDH(reg_idx);
 	ring->tail = hw->hw_addr + E1000_TDT(reg_idx);
-	writel(0, ring->head);
+	wr32(E1000_TDH(reg_idx), 0);
 	writel(0, ring->tail);
 
 	txdctl |= IGB_TX_PTHRESH;
@@ -3028,12 +3026,10 @@ void igb_configure_rx_ring(struct igb_adapter *adapter,
 	struct e1000_hw *hw = &adapter->hw;
 	u64 rdba = ring->dma;
 	int reg_idx = ring->reg_idx;
-	u32 srrctl, rxdctl;
+	u32 srrctl = 0, rxdctl = 0;
 
 	/* disable the queue */
-	rxdctl = rd32(E1000_RXDCTL(reg_idx));
-	wr32(E1000_RXDCTL(reg_idx),
-	     rxdctl & ~E1000_RXDCTL_QUEUE_ENABLE);
+	wr32(E1000_RXDCTL(reg_idx), 0);
 
 	/* Set DMA base address registers */
 	wr32(E1000_RDBAL(reg_idx),
@@ -3045,7 +3041,7 @@ void igb_configure_rx_ring(struct igb_adapter *adapter,
 	/* initialize head and tail */
 	ring->head = hw->hw_addr + E1000_RDH(reg_idx);
 	ring->tail = hw->hw_addr + E1000_RDT(reg_idx);
-	writel(0, ring->head);
+	wr32(E1000_RDH(reg_idx), 0);
 	writel(0, ring->tail);
 
 	/* set descriptor configuration */
@@ -3076,13 +3072,12 @@ void igb_configure_rx_ring(struct igb_adapter *adapter,
 	/* set filtering for VMDQ pools */
 	igb_set_vmolr(adapter, reg_idx & 0x7, true);
 
-	/* enable receive descriptor fetching */
-	rxdctl = rd32(E1000_RXDCTL(reg_idx));
-	rxdctl |= E1000_RXDCTL_QUEUE_ENABLE;
-	rxdctl &= 0xFFF00000;
 	rxdctl |= IGB_RX_PTHRESH;
 	rxdctl |= IGB_RX_HTHRESH << 8;
 	rxdctl |= IGB_RX_WTHRESH << 16;
+
+	/* enable receive descriptor fetching */
+	rxdctl |= E1000_RXDCTL_QUEUE_ENABLE;
 	wr32(E1000_RXDCTL(reg_idx), rxdctl);
 }