From patchwork Fri Aug 30 21:49:24 2013
X-Patchwork-Submitter: Rob Herring
X-Patchwork-Id: 271449
X-Patchwork-Delegate: davem@davemloft.net
From: Rob Herring
To: netdev@vger.kernel.org
Cc: Ben Hutchings, Lennert Buytenhek, Rob Herring
Subject: [PATCH v3 06/11] net: calxedaxgmac: fix race with tx queue stop/wake
Date: Fri, 30 Aug 2013 16:49:24 -0500
Message-Id: <1377899369-23252-6-git-send-email-robherring2@gmail.com>
In-Reply-To: <1377899369-23252-1-git-send-email-robherring2@gmail.com>
References: <1377899369-23252-1-git-send-email-robherring2@gmail.com>
X-Mailing-List: netdev@vger.kernel.org

From: Rob Herring

Since the xgmac transmit start and completion work locklessly, it is
possible for xgmac_xmit to stop the tx queue after xgmac_tx_complete
has run, resulting in the tx queue never being woken up. Fix this by
ensuring that ring buffer index updates are visible and by rechecking
the ring space after stopping the queue.

Also fix an off-by-one bug: the queue needs to be stopped when the
ring buffer space is equal to MAX_SKB_FRAGS, not only when it is
smaller.

The implementation used here was copied from
drivers/net/ethernet/broadcom/tg3.c.

Signed-off-by: Rob Herring
Reviewed-by: Ben Hutchings
---
v3:
 - Fix off-by-one bug in queue stopping
v2:
 - drop netif_tx_lock
 - use netif_start_queue instead of netif_wake_queue

 drivers/net/ethernet/calxeda/xgmac.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/calxeda/xgmac.c b/drivers/net/ethernet/calxeda/xgmac.c
index f630855..5d0b61a 100644
--- a/drivers/net/ethernet/calxeda/xgmac.c
+++ b/drivers/net/ethernet/calxeda/xgmac.c
@@ -410,6 +410,9 @@ struct xgmac_priv {
 #define dma_ring_space(h, t, s)	CIRC_SPACE(h, t, s)
 #define dma_ring_cnt(h, t, s)	CIRC_CNT(h, t, s)
 
+#define tx_dma_ring_space(p) \
+	dma_ring_space((p)->tx_head, (p)->tx_tail, DMA_TX_RING_SZ)
+
 /* XGMAC Descriptor Access Helpers */
 static inline void desc_set_buf_len(struct xgmac_dma_desc *p, u32 buf_sz)
 {
@@ -886,8 +889,10 @@ static void xgmac_tx_complete(struct xgmac_priv *priv)
 		priv->tx_tail = dma_ring_incr(entry, DMA_TX_RING_SZ);
 	}
 
-	if (dma_ring_space(priv->tx_head, priv->tx_tail, DMA_TX_RING_SZ) >
-	    MAX_SKB_FRAGS)
+	/* Ensure tx_tail is visible to xgmac_xmit */
+	smp_mb();
+	if (unlikely(netif_queue_stopped(priv->dev) &&
+	    (tx_dma_ring_space(priv) > MAX_SKB_FRAGS)))
 		netif_wake_queue(priv->dev);
 }
 
@@ -1125,10 +1130,15 @@ static netdev_tx_t xgmac_xmit(struct sk_buff *skb, struct net_device *dev)
 
 	priv->tx_head = dma_ring_incr(entry, DMA_TX_RING_SZ);
 
-	if (dma_ring_space(priv->tx_head, priv->tx_tail, DMA_TX_RING_SZ) <
-	    MAX_SKB_FRAGS)
+	/* Ensure tx_head update is visible to tx completion */
+	smp_mb();
+	if (unlikely(tx_dma_ring_space(priv) <= MAX_SKB_FRAGS)) {
 		netif_stop_queue(dev);
-
+		/* Ensure netif_stop_queue is visible to tx completion */
+		smp_mb();
+		if (tx_dma_ring_space(priv) > MAX_SKB_FRAGS)
+			netif_start_queue(dev);
+	}
 	return NETDEV_TX_OK;
 }
 