From patchwork Mon Jul 13 13:22:03 2015
X-Patchwork-Submitter: Claudiu Manoil
X-Patchwork-Id: 494574
X-Patchwork-Delegate: davem@davemloft.net
From: Claudiu Manoil
CC: "David S. Miller"
Subject: [PATCH net-next 1/4 v2] gianfar: Bundle Rx allocation, cleanup
Date: Mon, 13 Jul 2015 16:22:03 +0300
Message-ID: <1436793726-13310-2-git-send-email-claudiu.manoil@freescale.com>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1436793726-13310-1-git-send-email-claudiu.manoil@freescale.com>
References: <1436793726-13310-1-git-send-email-claudiu.manoil@freescale.com>
X-Mailing-List: netdev@vger.kernel.org

Use a more common consumer/producer index design to
improve rx buffer allocation. Instead of allocating a single
new buffer (skb) on each iteration, bundle the allocation of
several rx buffers at a time. This also opens the path for
further memory optimizations.

Remove useless check of rxq->rfbptr, since this patch touches
rx pause frame handling code as well. rxq->rfbptr is always
initialized as part of Rx BD ring init.
Remove redundant (and misleading) 'amount_pull' parameter.

Signed-off-by: Claudiu Manoil
---
v2: none

 drivers/net/ethernet/freescale/gianfar.c         | 201 ++++++++++++-----------
 drivers/net/ethernet/freescale/gianfar.h         |  39 +++--
 drivers/net/ethernet/freescale/gianfar_ethtool.c |   3 +
 3 files changed, 136 insertions(+), 107 deletions(-)

diff --git a/drivers/net/ethernet/freescale/gianfar.c b/drivers/net/ethernet/freescale/gianfar.c
index ff87502..b35bf3d 100644
--- a/drivers/net/ethernet/freescale/gianfar.c
+++ b/drivers/net/ethernet/freescale/gianfar.c
@@ -116,8 +116,8 @@ static int gfar_start_xmit(struct sk_buff *skb, struct net_device *dev);
 static void gfar_reset_task(struct work_struct *work);
 static void gfar_timeout(struct net_device *dev);
 static int gfar_close(struct net_device *dev);
-static struct sk_buff *gfar_new_skb(struct net_device *dev,
-				    dma_addr_t *bufaddr);
+static void gfar_alloc_rx_buffs(struct gfar_priv_rx_q *rx_queue,
+				int alloc_cnt);
 static int gfar_set_mac_address(struct net_device *dev);
 static int gfar_change_mtu(struct net_device *dev, int new_mtu);
 static irqreturn_t gfar_error(int irq, void *dev_id);
@@ -142,7 +142,7 @@ static void gfar_netpoll(struct net_device *dev);
 int gfar_clean_rx_ring(struct gfar_priv_rx_q *rx_queue, int rx_work_limit);
 static void gfar_clean_tx_ring(struct gfar_priv_tx_q *tx_queue);
 static void gfar_process_frame(struct net_device *dev, struct sk_buff *skb,
-			       int amount_pull, struct napi_struct *napi);
+			       struct napi_struct *napi);
 static void gfar_halt_nodisable(struct gfar_private *priv);
 static void gfar_clear_exact_match(struct net_device *dev);
 static void gfar_set_mac_for_addr(struct net_device *dev, int num,
@@ -169,17 +169,15 @@ static void gfar_init_rxbdp(struct gfar_priv_rx_q *rx_queue, struct rxbd8 *bdp,
 	bdp->lstatus = cpu_to_be32(lstatus);
 }
 
-static int gfar_init_bds(struct net_device *ndev)
+static void gfar_init_bds(struct net_device *ndev)
 {
 	struct gfar_private *priv = netdev_priv(ndev);
 	struct gfar __iomem *regs = priv->gfargrp[0].regs;
 	struct gfar_priv_tx_q *tx_queue = NULL;
 	struct gfar_priv_rx_q *rx_queue = NULL;
 	struct txbd8 *txbdp;
-	struct rxbd8 *rxbdp;
 	u32 __iomem *rfbptr;
 	int i, j;
-	dma_addr_t bufaddr;
 
 	for (i = 0; i < priv->num_tx_queues; i++) {
 		tx_queue = priv->tx_queue[i];
@@ -207,33 +205,18 @@ static int gfar_init_bds(struct net_device *ndev)
 	rfbptr = &regs->rfbptr0;
 	for (i = 0; i < priv->num_rx_queues; i++) {
 		rx_queue = priv->rx_queue[i];
-		rx_queue->cur_rx = rx_queue->rx_bd_base;
-		rx_queue->skb_currx = 0;
-		rxbdp = rx_queue->rx_bd_base;
 
-		for (j = 0; j < rx_queue->rx_ring_size; j++) {
-			struct sk_buff *skb = rx_queue->rx_skbuff[j];
-
-			if (skb) {
-				bufaddr = be32_to_cpu(rxbdp->bufPtr);
-			} else {
-				skb = gfar_new_skb(ndev, &bufaddr);
-				if (!skb) {
-					netdev_err(ndev, "Can't allocate RX buffers\n");
-					return -ENOMEM;
-				}
-				rx_queue->rx_skbuff[j] = skb;
-			}
+		rx_queue->next_to_clean = 0;
+		rx_queue->next_to_use = 0;
 
-			gfar_init_rxbdp(rx_queue, rxbdp, bufaddr);
-			rxbdp++;
-		}
+		/* make sure next_to_clean != next_to_use after this
+		 * by leaving at least 1 unused descriptor
+		 */
+		gfar_alloc_rx_buffs(rx_queue, gfar_rxbd_unused(rx_queue));
 
 		rx_queue->rfbptr = rfbptr;
 		rfbptr += 2;
 	}
-
-	return 0;
 }
 
 static int gfar_alloc_skb_resources(struct net_device *ndev)
@@ -311,8 +294,7 @@ static int gfar_alloc_skb_resources(struct net_device *ndev)
 			rx_queue->rx_skbuff[j] = NULL;
 	}
 
-	if (gfar_init_bds(ndev))
-		goto cleanup;
+	gfar_init_bds(ndev);
 
 	return 0;
 
@@ -1639,10 +1621,7 @@ static int gfar_restore(struct device *dev)
 		return 0;
 	}
 
-	if (gfar_init_bds(ndev)) {
-		free_skb_resources(priv);
-		return -ENOMEM;
-	}
+	gfar_init_bds(ndev);
 
 	gfar_mac_reset(priv);
 
@@ -2704,30 +2683,19 @@ static void gfar_clean_tx_ring(struct gfar_priv_tx_q *tx_queue)
 	netdev_tx_completed_queue(txq, howmany, bytes_sent);
 }
 
-static struct sk_buff *gfar_alloc_skb(struct net_device *dev)
+static struct sk_buff *gfar_new_skb(struct net_device *ndev,
+				    dma_addr_t *bufaddr)
 {
-	struct gfar_private *priv = netdev_priv(dev);
+	struct gfar_private *priv = netdev_priv(ndev);
 	struct sk_buff *skb;
+	dma_addr_t addr;
 
-	skb = netdev_alloc_skb(dev, priv->rx_buffer_size + RXBUF_ALIGNMENT);
+	skb = netdev_alloc_skb(ndev, priv->rx_buffer_size + RXBUF_ALIGNMENT);
 	if (!skb)
 		return NULL;
 
 	gfar_align_skb(skb);
 
-	return skb;
-}
-
-static struct sk_buff *gfar_new_skb(struct net_device *dev, dma_addr_t *bufaddr)
-{
-	struct gfar_private *priv = netdev_priv(dev);
-	struct sk_buff *skb;
-	dma_addr_t addr;
-
-	skb = gfar_alloc_skb(dev);
-	if (!skb)
-		return NULL;
-
 	addr = dma_map_single(priv->dev, skb->data,
 			      priv->rx_buffer_size, DMA_FROM_DEVICE);
 	if (unlikely(dma_mapping_error(priv->dev, addr))) {
@@ -2739,6 +2707,55 @@ static struct sk_buff *gfar_new_skb(struct net_device *dev, dma_addr_t *bufaddr)
 	return skb;
 }
 
+static void gfar_rx_alloc_err(struct gfar_priv_rx_q *rx_queue)
+{
+	struct gfar_private *priv = netdev_priv(rx_queue->dev);
+	struct gfar_extra_stats *estats = &priv->extra_stats;
+
+	netdev_err(rx_queue->dev, "Can't alloc RX buffers\n");
+	atomic64_inc(&estats->rx_alloc_err);
+}
+
+static void gfar_alloc_rx_buffs(struct gfar_priv_rx_q *rx_queue,
+				int alloc_cnt)
+{
+	struct net_device *ndev = rx_queue->dev;
+	struct rxbd8 *bdp, *base;
+	dma_addr_t bufaddr;
+	int i;
+
+	i = rx_queue->next_to_use;
+	base = rx_queue->rx_bd_base;
+	bdp = &rx_queue->rx_bd_base[i];
+
+	while (alloc_cnt--) {
+		struct sk_buff *skb = rx_queue->rx_skbuff[i];
+
+		if (likely(!skb)) {
+			skb = gfar_new_skb(ndev, &bufaddr);
+			if (unlikely(!skb)) {
+				gfar_rx_alloc_err(rx_queue);
+				break;
+			}
+		} else { /* restore from sleep state */
+			bufaddr = be32_to_cpu(bdp->bufPtr);
+		}
+
+		rx_queue->rx_skbuff[i] = skb;
+
+		/* Setup the new RxBD */
+		gfar_init_rxbdp(rx_queue, bdp, bufaddr);
+
+		/* Update to the next pointer */
+		bdp = next_bd(bdp, base, rx_queue->rx_ring_size);
+
+		if (unlikely(++i == rx_queue->rx_ring_size))
+			i = 0;
+	}
+
+	rx_queue->next_to_use = i;
+}
+
 static inline void count_errors(unsigned short status, struct net_device *dev)
 {
 	struct gfar_private *priv = netdev_priv(dev);
@@ -2838,7 +2855,7 @@ static inline void gfar_rx_checksum(struct sk_buff *skb, struct rxfcb *fcb)
 
 /* gfar_process_frame() -- handle one incoming packet if skb isn't NULL. */
 static void gfar_process_frame(struct net_device *dev, struct sk_buff *skb,
-			       int amount_pull, struct napi_struct *napi)
+			       struct napi_struct *napi)
 {
 	struct gfar_private *priv = netdev_priv(dev);
 	struct rxfcb *fcb = NULL;
@@ -2849,9 +2866,9 @@ static void gfar_process_frame(struct net_device *dev, struct sk_buff *skb,
 	/* Remove the FCB from the skb
 	 * Remove the padded bytes, if there are any
 	 */
-	if (amount_pull) {
+	if (priv->uses_rxfcb) {
 		skb_record_rx_queue(skb, fcb->rq);
-		skb_pull(skb, amount_pull);
+		skb_pull(skb, GMAC_FCB_LEN);
 	}
 
 	/* Get receive timestamp from the skb */
@@ -2895,27 +2912,30 @@ int gfar_clean_rx_ring(struct gfar_priv_rx_q *rx_queue, int rx_work_limit)
 	struct net_device *dev = rx_queue->dev;
 	struct rxbd8 *bdp, *base;
 	struct sk_buff *skb;
-	int pkt_len;
-	int amount_pull;
-	int howmany = 0;
+	int i, howmany = 0;
+	int cleaned_cnt = gfar_rxbd_unused(rx_queue);
 	struct gfar_private *priv = netdev_priv(dev);
 
 	/* Get the first full descriptor */
-	bdp = rx_queue->cur_rx;
 	base = rx_queue->rx_bd_base;
+	i = rx_queue->next_to_clean;
+
+	while (rx_work_limit--) {
 
-	amount_pull = priv->uses_rxfcb ? GMAC_FCB_LEN : 0;
-
-	while (!(be16_to_cpu(bdp->status) & RXBD_EMPTY) && rx_work_limit--) {
-		struct sk_buff *newskb;
-		dma_addr_t bufaddr;
+		if (cleaned_cnt >= GFAR_RX_BUFF_ALLOC) {
+			gfar_alloc_rx_buffs(rx_queue, cleaned_cnt);
+			cleaned_cnt = 0;
+		}
 
-		rmb();
+		bdp = &rx_queue->rx_bd_base[i];
+		if (be16_to_cpu(bdp->status) & RXBD_EMPTY)
+			break;
 
-		/* Add another skb for the future */
-		newskb = gfar_new_skb(dev, &bufaddr);
+		/* order rx buffer descriptor reads */
+		rmb();
 
-		skb = rx_queue->rx_skbuff[rx_queue->skb_currx];
+		/* fetch next to clean buffer from the ring */
+		skb = rx_queue->rx_skbuff[i];
 
 		dma_unmap_single(priv->dev, be32_to_cpu(bdp->bufPtr),
 				 priv->rx_buffer_size, DMA_FROM_DEVICE);
@@ -2924,30 +2944,26 @@ int gfar_clean_rx_ring(struct gfar_priv_rx_q *rx_queue, int rx_work_limit)
 			     be16_to_cpu(bdp->length) > priv->rx_buffer_size))
 			bdp->status = cpu_to_be16(RXBD_LARGE);
 
-		/* We drop the frame if we failed to allocate a new buffer */
-		if (unlikely(!newskb ||
-			     !(be16_to_cpu(bdp->status) & RXBD_LAST) ||
+		if (unlikely(!(be16_to_cpu(bdp->status) & RXBD_LAST) ||
 			     be16_to_cpu(bdp->status) & RXBD_ERR)) {
 			count_errors(be16_to_cpu(bdp->status), dev);
 
-			if (unlikely(!newskb)) {
-				newskb = skb;
-				bufaddr = be32_to_cpu(bdp->bufPtr);
-			} else if (skb)
-				dev_kfree_skb(skb);
+			/* discard faulty buffer */
+			dev_kfree_skb(skb);
+
 		} else {
 			/* Increment the number of packets */
 			rx_queue->stats.rx_packets++;
 			howmany++;
 
 			if (likely(skb)) {
-				pkt_len = be16_to_cpu(bdp->length) -
+				int pkt_len = be16_to_cpu(bdp->length) -
 					  ETH_FCS_LEN;
 				/* Remove the FCS from the packet length */
 				skb_put(skb, pkt_len);
 				rx_queue->stats.rx_bytes += pkt_len;
 				skb_record_rx_queue(skb, rx_queue->qindex);
-				gfar_process_frame(dev, skb, amount_pull,
+				gfar_process_frame(dev, skb,
 						   &rx_queue->grp->napi_rx);
 
 			} else {
@@ -2958,26 +2974,23 @@ int gfar_clean_rx_ring(struct gfar_priv_rx_q *rx_queue, int rx_work_limit)
 			}
 		}
 
-		rx_queue->rx_skbuff[rx_queue->skb_currx] = newskb;
-
-		/* Setup the new bdp */
-		gfar_init_rxbdp(rx_queue, bdp, bufaddr);
+		rx_queue->rx_skbuff[i] = NULL;
+		cleaned_cnt++;
+		if (unlikely(++i == rx_queue->rx_ring_size))
+			i = 0;
+	}
 
-		/* Update Last Free RxBD pointer for LFC */
-		if (unlikely(rx_queue->rfbptr && priv->tx_actual_en))
-			gfar_write(rx_queue->rfbptr, (u32)bdp);
+	rx_queue->next_to_clean = i;
 
-		/* Update to the next pointer */
-		bdp = next_bd(bdp, base, rx_queue->rx_ring_size);
+	if (cleaned_cnt)
+		gfar_alloc_rx_buffs(rx_queue, cleaned_cnt);
 
-		/* update to point at the next skb */
-		rx_queue->skb_currx = (rx_queue->skb_currx + 1) &
-				      RX_RING_MOD_MASK(rx_queue->rx_ring_size);
+	/* Update Last Free RxBD pointer for LFC */
+	if (unlikely(priv->tx_actual_en)) {
+		bdp = gfar_rxbd_lastfree(rx_queue);
+		gfar_write(rx_queue->rfbptr, (u32)bdp);
 	}
 
-	/* Update the current rxbd pointer to be the next one */
-	rx_queue->cur_rx = bdp;
-
 	return howmany;
 }
 
@@ -3552,14 +3565,8 @@ static noinline void gfar_update_link_state(struct gfar_private *priv)
 		if ((tempval1 & MACCFG1_TX_FLOW) && !tx_flow_oldval) {
 			for (i = 0; i < priv->num_rx_queues; i++) {
 				rx_queue = priv->rx_queue[i];
-				bdp = rx_queue->cur_rx;
-				/* skip to previous bd */
-				bdp = skip_bd(bdp, rx_queue->rx_ring_size - 1,
-					      rx_queue->rx_bd_base,
-					      rx_queue->rx_ring_size);
-
-				if (rx_queue->rfbptr)
-					gfar_write(rx_queue->rfbptr, (u32)bdp);
+				bdp = gfar_rxbd_lastfree(rx_queue);
+				gfar_write(rx_queue->rfbptr, (u32)bdp);
 			}
 
 			priv->tx_actual_en = 1;
diff --git a/drivers/net/ethernet/freescale/gianfar.h b/drivers/net/ethernet/freescale/gianfar.h
index daa1d37..cadb068 100644
--- a/drivers/net/ethernet/freescale/gianfar.h
+++ b/drivers/net/ethernet/freescale/gianfar.h
@@ -92,6 +92,8 @@ extern const char gfar_driver_version[];
 #define DEFAULT_TX_RING_SIZE	256
 #define DEFAULT_RX_RING_SIZE	256
 
+#define GFAR_RX_BUFF_ALLOC	16
+
 #define GFAR_RX_MAX_RING_SIZE	256
 #define GFAR_TX_MAX_RING_SIZE	256
 
@@ -640,6 +642,7 @@ struct rmon_mib
 };
 
 struct gfar_extra_stats {
+	atomic64_t rx_alloc_err;
 	atomic64_t rx_large;
 	atomic64_t rx_short;
 	atomic64_t rx_nonoctet;
@@ -1015,9 +1018,9 @@ struct rx_q_stats {
 /**
  *	struct gfar_priv_rx_q - per rx queue structure
  *	@rx_skbuff: skb pointers
- *	@skb_currx: currently use skb pointer
  *	@rx_bd_base: First rx buffer descriptor
- *	@cur_rx: Next free rx ring entry
+ *	@next_to_use: index of the next buffer to be alloc'd
+ *	@next_to_clean: index of the next buffer to be cleaned
  *	@qindex: index of this queue
  *	@dev: back pointer to the dev structure
  *	@rx_ring_size: Rx ring size
@@ -1027,19 +1030,18 @@ struct rx_q_stats {
 
 struct gfar_priv_rx_q {
 	struct sk_buff **rx_skbuff __aligned(SMP_CACHE_BYTES);
-	dma_addr_t rx_bd_dma_base;
 	struct rxbd8 *rx_bd_base;
-	struct rxbd8 *cur_rx;
 	struct net_device *dev;
-	struct gfar_priv_grp *grp;
+	struct gfar_priv_grp *grp;
+	u16 rx_ring_size;
+	u16 qindex;
+	u16 next_to_clean;
+	u16 next_to_use;
 	struct rx_q_stats stats;
-	u16 skb_currx;
-	u16 qindex;
-	unsigned int rx_ring_size;
-	/* RX Coalescing values */
+	u32 __iomem *rfbptr;
 	unsigned char rxcoalescing;
 	unsigned long rxic;
-	u32 __iomem *rfbptr;
+	dma_addr_t rx_bd_dma_base;
 };
 
 enum gfar_irqinfo_id {
@@ -1295,6 +1297,23 @@ static inline void gfar_clear_txbd_status(struct txbd8 *bdp)
 	bdp->lstatus = cpu_to_be32(lstatus);
 }
 
+static inline int gfar_rxbd_unused(struct gfar_priv_rx_q *rxq)
+{
+	if (rxq->next_to_clean > rxq->next_to_use)
+		return rxq->next_to_clean - rxq->next_to_use - 1;
+
+	return rxq->rx_ring_size + rxq->next_to_clean - rxq->next_to_use - 1;
+}
+
+static inline struct rxbd8 *gfar_rxbd_lastfree(struct gfar_priv_rx_q *rxq)
+{
+	int i;
+
+	i = rxq->next_to_use ? rxq->next_to_use - 1 : rxq->rx_ring_size - 1;
+
+	return &rxq->rx_bd_base[i];
+}
+
 irqreturn_t gfar_receive(int irq, void *dev_id);
 int startup_gfar(struct net_device *dev);
 void stop_gfar(struct net_device *dev);
diff --git a/drivers/net/ethernet/freescale/gianfar_ethtool.c b/drivers/net/ethernet/freescale/gianfar_ethtool.c
index fda12fb..012fa4e 100644
--- a/drivers/net/ethernet/freescale/gianfar_ethtool.c
+++ b/drivers/net/ethernet/freescale/gianfar_ethtool.c
@@ -61,6 +61,8 @@ static void gfar_gdrvinfo(struct net_device *dev,
 			  struct ethtool_drvinfo *drvinfo);
 
 static const char stat_gstrings[][ETH_GSTRING_LEN] = {
+	/* extra stats */
+	"rx-allocation-errors",
 	"rx-large-frame-errors",
 	"rx-short-frame-errors",
 	"rx-non-octet-errors",
@@ -74,6 +76,7 @@ static const char stat_gstrings[][ETH_GSTRING_LEN] = {
 	"tx-underrun-errors",
 	"rx-skb-missing-errors",
 	"tx-timeout-errors",
+	/* rmon stats */
 	"tx-rx-64-frames",
 	"tx-rx-65-127-frames",
 	"tx-rx-128-255-frames",