From patchwork Tue Mar 19 17:40:02 2013
X-Patchwork-Submitter: Claudiu Manoil
X-Patchwork-Id: 229157
X-Patchwork-Delegate: davem@davemloft.net
From: Claudiu Manoil
To:
CC: Paul Gortmaker, "David S. Miller"
Subject: [PATCH 1/4][net-next] gianfar: Fix tx napi polling
Date: Tue, 19 Mar 2013 19:40:02 +0200
Message-ID: <1363714805-9142-2-git-send-email-claudiu.manoil@freescale.com>
X-Mailer: git-send-email 1.7.11.3
In-Reply-To: <1363714805-9142-1-git-send-email-claudiu.manoil@freescale.com>
References: <1363714805-9142-1-git-send-email-claudiu.manoil@freescale.com>
X-Mailing-List: netdev@vger.kernel.org

There are two issues with the current NAPI poll routine with regard to Tx ring cleanup:

1) For multi-queue devices (MQ_MG_MODE), tx_bit_map may differ from rx_bit_map, which is
   possible (and supported in h/w) when the DT property "fsl,tx-bit-map" holds a value
   different from rx_bit_map. In that case the current polling routine services the wrong
   Tx queues (i.e. the interrupt group receives interrupts from Tx queues that it does
   not service).

2) Tx cleanup completion consumes NAPI budget, whereas the NAPI budget should be
   reserved for Rx work only.

The patch fixes these issues and provides a clean NAPI polling routine. NAPI poll
completion is reached when all the Rx queues have been serviced and there is no
Tx work left to do.

Signed-off-by: Claudiu Manoil
---
 drivers/net/ethernet/freescale/gianfar.c | 82 ++++++++++++++++++--------------
 1 file changed, 45 insertions(+), 37 deletions(-)

diff --git a/drivers/net/ethernet/freescale/gianfar.c b/drivers/net/ethernet/freescale/gianfar.c
index 1b468a8..1e555a7 100644
--- a/drivers/net/ethernet/freescale/gianfar.c
+++ b/drivers/net/ethernet/freescale/gianfar.c
@@ -132,7 +132,7 @@ static int gfar_poll(struct napi_struct *napi, int budget);
 static void gfar_netpoll(struct net_device *dev);
 #endif
 int gfar_clean_rx_ring(struct gfar_priv_rx_q *rx_queue, int rx_work_limit);
-static int gfar_clean_tx_ring(struct gfar_priv_tx_q *tx_queue);
+static void gfar_clean_tx_ring(struct gfar_priv_tx_q *tx_queue);
 static void gfar_process_frame(struct net_device *dev, struct sk_buff *skb,
 			       int amount_pull, struct napi_struct *napi);
 void gfar_halt(struct net_device *dev);
@@ -2468,7 +2468,7 @@ static void gfar_align_skb(struct sk_buff *skb)
 }
 
 /* Interrupt Handler for Transmit complete */
-static int gfar_clean_tx_ring(struct gfar_priv_tx_q *tx_queue)
+static void gfar_clean_tx_ring(struct gfar_priv_tx_q *tx_queue)
 {
 	struct net_device *dev = tx_queue->dev;
 	struct netdev_queue *txq;
@@ -2570,8 +2570,6 @@ static int gfar_clean_tx_ring(struct gfar_priv_tx_q *tx_queue)
 	tx_queue->dirty_tx = bdp;
 
 	netdev_tx_completed_queue(txq, howmany, bytes_sent);
-
-	return howmany;
 }
 
 static void gfar_schedule_cleanup(struct gfar_priv_grp *gfargrp)
@@ -2834,62 +2832,72 @@ static int gfar_poll(struct napi_struct *napi, int budget)
 	struct gfar __iomem *regs = gfargrp->regs;
 	struct gfar_priv_tx_q *tx_queue = NULL;
 	struct gfar_priv_rx_q *rx_queue = NULL;
-	int rx_cleaned = 0, budget_per_queue = 0, rx_cleaned_per_queue = 0;
-	int tx_cleaned = 0, i, left_over_budget = budget;
+	int work_done = 0, work_done_per_q = 0;
+	int i, budget_per_q;
+	int has_tx_work;
 	unsigned long serviced_queues = 0;
-	int num_queues = 0;
-
-	num_queues = gfargrp->num_rx_queues;
-	budget_per_queue = budget/num_queues;
+	int num_queues = gfargrp->num_rx_queues;
+	budget_per_q = budget/num_queues;
 
 	/* Clear IEVENT, so interrupts aren't called again
 	 * because of the packets that have already arrived
 	 */
 	gfar_write(&regs->ievent, IEVENT_RTX_MASK);
 
-	while (num_queues && left_over_budget) {
-		budget_per_queue = left_over_budget/num_queues;
-		left_over_budget = 0;
+	while (1) {
+		has_tx_work = 0;
+		for_each_set_bit(i, &gfargrp->tx_bit_map, priv->num_tx_queues) {
+			tx_queue = priv->tx_queue[i];
+			/* run Tx cleanup to completion */
+			if (tx_queue->tx_skbuff[tx_queue->skb_dirtytx]) {
+				gfar_clean_tx_ring(tx_queue);
+				has_tx_work = 1;
+			}
+		}
 
 		for_each_set_bit(i, &gfargrp->rx_bit_map, priv->num_rx_queues) {
 			if (test_bit(i, &serviced_queues))
 				continue;
+
 			rx_queue = priv->rx_queue[i];
-			tx_queue = priv->tx_queue[rx_queue->qindex];
-
-			tx_cleaned += gfar_clean_tx_ring(tx_queue);
-			rx_cleaned_per_queue =
-				gfar_clean_rx_ring(rx_queue, budget_per_queue);
-			rx_cleaned += rx_cleaned_per_queue;
-			if (rx_cleaned_per_queue < budget_per_queue) {
-				left_over_budget = left_over_budget +
-					(budget_per_queue -
-					 rx_cleaned_per_queue);
+			work_done_per_q =
+				gfar_clean_rx_ring(rx_queue, budget_per_q);
+			work_done += work_done_per_q;
+
+			/* finished processing this queue */
+			if (work_done_per_q < budget_per_q) {
 				set_bit(i, &serviced_queues);
 				num_queues--;
+				if (!num_queues)
+					break;
+				/* recompute budget per Rx queue */
+				budget_per_q =
+					(budget - work_done) / num_queues;
 			}
 		}
-	}
 
-	if (tx_cleaned)
-		return budget;
+		if (work_done >= budget)
+			break;
 
-	if (rx_cleaned < budget) {
-		napi_complete(napi);
+		if (!num_queues && !has_tx_work) {
 
-		/* Clear the halt bit in RSTAT */
-		gfar_write(&regs->rstat, gfargrp->rstat);
+			napi_complete(napi);
 
-		gfar_write(&regs->imask, IMASK_DEFAULT);
+			/* Clear the halt bit in RSTAT */
+			gfar_write(&regs->rstat, gfargrp->rstat);
 
-		/* If we are coalescing interrupts, update the timer
-		 * Otherwise, clear it
-		 */
-		gfar_configure_coalescing(priv, gfargrp->rx_bit_map,
-					  gfargrp->tx_bit_map);
+			gfar_write(&regs->imask, IMASK_DEFAULT);
+
+			/* If we are coalescing interrupts, update the timer
+			 * Otherwise, clear it
+			 */
+			gfar_configure_coalescing(priv, gfargrp->rx_bit_map,
						  gfargrp->tx_bit_map);
+			break;
+		}
 	}
 
-	return rx_cleaned;
+	return work_done;
 }
 
 #ifdef CONFIG_NET_POLL_CONTROLLER
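
For anyone tracing the budget accounting in the new poll loop: below is a small
standalone user-space sketch (illustration only; the queue count, NAPI budget and
per-queue frame counts are invented, and it models the Rx path only, without the
Tx cleanup pass) of how the remaining budget is redistributed over the Rx queues
that are still active, mirroring the budget_per_q = (budget - work_done) / num_queues
step in the patch.

/* Standalone sketch of the Rx budget redistribution done in gfar_poll().
 * Illustration only: NUM_QUEUES, NAPI_BUDGET and the pending[] frame
 * counts are invented numbers, not values taken from the driver.
 */
#include <stdio.h>

#define NUM_QUEUES	4
#define NAPI_BUDGET	64

int main(void)
{
	int pending[NUM_QUEUES] = { 3, 20, 7, 30 };	/* frames waiting per Rx queue */
	int serviced[NUM_QUEUES] = { 0 };		/* like the serviced_queues bitmap */
	int num_queues = NUM_QUEUES;
	int budget_per_q = NAPI_BUDGET / num_queues;
	int work_done = 0;
	int i;

	while (1) {
		for (i = 0; i < NUM_QUEUES; i++) {
			int work_done_per_q;

			if (serviced[i])
				continue;

			/* "clean" at most budget_per_q frames from queue i,
			 * like gfar_clean_rx_ring(rx_queue, budget_per_q)
			 */
			work_done_per_q = pending[i] < budget_per_q ?
					  pending[i] : budget_per_q;
			pending[i] -= work_done_per_q;
			work_done += work_done_per_q;

			printf("queue %d: cleaned %d frames, total %d\n",
			       i, work_done_per_q, work_done);

			/* queue finished: spread what is left of the global
			 * budget over the queues still being polled
			 */
			if (work_done_per_q < budget_per_q) {
				serviced[i] = 1;
				num_queues--;
				if (!num_queues)
					break;
				budget_per_q =
					(NAPI_BUDGET - work_done) / num_queues;
			}
		}

		if (work_done >= NAPI_BUDGET || !num_queues)
			break;
	}

	printf("poll complete: work_done=%d, budget=%d\n",
	       work_done, NAPI_BUDGET);
	return 0;
}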