From patchwork Mon Dec 22 10:00:22 2008
X-Patchwork-Submitter: Yevgeny Petrilin
X-Patchwork-Id: 15208
X-Patchwork-Delegate: davem@davemloft.net
Message-ID: <494F6536.6040107@mellanox.co.il>
Date: Mon, 22 Dec 2008 12:00:22 +0200
From: Yevgeny Petrilin <yevgenyp@mellanox.co.il>
To: jeff@garzik.org
Cc: rdreier@cisco.com, netdev@vger.kernel.org, general@lists.openfabrics.org
Subject: [PATCH 2/9] mlx4_en: Removed TX locking when polling TX cq

There is no need to synchronize the polling with the transmit function; the only place that requires synchronization is when the cq is processed from the transmit path. Also replaced spin_lock_irq with spin_trylock: if somebody else is already processing the cq, there is no need to wait for it to finish.
Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.co.il>
---
 drivers/net/mlx4/en_tx.c |   24 +++++++++++++-----------
 1 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/drivers/net/mlx4/en_tx.c b/drivers/net/mlx4/en_tx.c
index 8592f8f..1f25821 100644
--- a/drivers/net/mlx4/en_tx.c
+++ b/drivers/net/mlx4/en_tx.c
@@ -404,14 +404,12 @@ void mlx4_en_tx_irq(struct mlx4_cq *mcq)
 	struct mlx4_en_priv *priv = netdev_priv(cq->dev);
 	struct mlx4_en_tx_ring *ring = &priv->tx_ring[cq->ring];
 
-	spin_lock_irq(&ring->comp_lock);
 	cq->armed = 0;
+	if (!spin_trylock(&ring->comp_lock))
+		return;
 	mlx4_en_process_tx_cq(cq->dev, cq);
-	if (ring->blocked)
-		mlx4_en_arm_cq(priv, cq);
-	else
-		mod_timer(&cq->timer, jiffies + 1);
-	spin_unlock_irq(&ring->comp_lock);
+	mod_timer(&cq->timer, jiffies + 1);
+	spin_unlock(&ring->comp_lock);
 }
 
 
@@ -424,8 +422,10 @@ void mlx4_en_poll_tx_cq(unsigned long data)
 
 	INC_PERF_COUNTER(priv->pstats.tx_poll);
 
-	netif_tx_lock(priv->dev);
-	spin_lock_irq(&ring->comp_lock);
+	if (!spin_trylock(&ring->comp_lock)) {
+		mod_timer(&cq->timer, jiffies + MLX4_EN_TX_POLL_TIMEOUT);
+		return;
+	}
 	mlx4_en_process_tx_cq(cq->dev, cq);
 	inflight = (u32) (ring->prod - ring->cons - ring->last_nr_txbb);
 
@@ -435,8 +435,7 @@ void mlx4_en_poll_tx_cq(unsigned long data)
 	if (inflight && priv->port_up)
 		mod_timer(&cq->timer, jiffies + MLX4_EN_TX_POLL_TIMEOUT);
 
-	spin_unlock_irq(&ring->comp_lock);
-	netif_tx_unlock(priv->dev);
+	spin_unlock(&ring->comp_lock);
 }
 
 static struct mlx4_en_tx_desc *mlx4_en_bounce_to_desc(struct mlx4_en_priv *priv,
@@ -479,7 +478,10 @@ static inline void mlx4_en_xmit_poll(struct mlx4_en_priv *priv, int tx_ind)
 
 	/* Poll the CQ every mlx4_en_TX_MODER_POLL packets */
 	if ((++ring->poll_cnt & (MLX4_EN_TX_POLL_MODER - 1)) == 0)
-		mlx4_en_process_tx_cq(priv->dev, cq);
+		if (spin_trylock(&ring->comp_lock)) {
+			mlx4_en_process_tx_cq(priv->dev, cq);
+			spin_unlock(&ring->comp_lock);
+		}
 }
 
 static void *get_frag_ptr(struct sk_buff *skb)
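
For readers less familiar with the pattern: the change boils down to "trylock or skip". Whoever fails to take comp_lock simply lets the current holder (or a rescheduled timer) drain the completion queue, instead of spinning on the lock. The standalone userspace sketch below only illustrates that idea; it uses pthreads, and the names comp_lock, process_completions() and poll_completions() are made-up stand-ins, not the driver's API.

/* Illustration of the trylock-or-skip idea from the patch above:
 * if another context already holds the completion lock, skip the
 * work rather than waiting. Not driver code; all names are made up. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t comp_lock = PTHREAD_MUTEX_INITIALIZER;
static int pending = 8;			/* stand-in for unprocessed TX completions */

static void process_completions(void)
{
	while (pending > 0)
		pending--;		/* stand-in for reclaiming one descriptor */
}

static void poll_completions(const char *who)
{
	/* Counterpart of spin_trylock(&ring->comp_lock) in the patch:
	 * a busy lock means someone else is already doing the work,
	 * so return instead of blocking. */
	if (pthread_mutex_trylock(&comp_lock) != 0) {
		printf("%s: completion queue busy, skipping\n", who);
		return;
	}
	process_completions();
	pthread_mutex_unlock(&comp_lock);
	printf("%s: drained completions\n", who);
}

int main(void)
{
	poll_completions("irq path");
	poll_completions("timer path");
	return 0;
}

Build with "gcc -pthread"; in the driver the same shape appears three times (IRQ handler, poll timer, transmit path), each falling back to a timer or to the other context when the trylock fails.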