From patchwork Fri Apr  1 21:06:42 2016
X-Patchwork-Submitter: Jakub Kicinski <jakub.kicinski@netronome.com>
X-Patchwork-Id: 605020
X-Patchwork-Delegate: davem@davemloft.net
From: Jakub Kicinski <jakub.kicinski@netronome.com>
To: netdev@vger.kernel.org
Cc: Jakub Kicinski <jakub.kicinski@netronome.com>
Subject: [PATCH v4 net-next 06/15] nfp: cleanup tx ring flush and rename to reset
Date: Fri,  1 Apr 2016 22:06:42 +0100
Message-Id: <1459544811-24879-7-git-send-email-jakub.kicinski@netronome.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1459544811-24879-1-git-send-email-jakub.kicinski@netronome.com>
References: <1459544811-24879-1-git-send-email-jakub.kicinski@netronome.com>
X-Mailing-List: netdev@vger.kernel.org

Since we never used flush without freeing the ring afterwards, the
functionality of the two operations has become mixed.  Rename flush to
ring reset and move into it everything which has to be done after the
FW ring state is cleared.  While at it, do some clean-ups.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
---
 .../net/ethernet/netronome/nfp/nfp_net_common.c    | 81 ++++++++++------------
 1 file changed, 37 insertions(+), 44 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
index 8f7e2e044811..9a027a3cfe02 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
@@ -868,61 +868,59 @@ static void nfp_net_tx_complete(struct nfp_net_tx_ring *tx_ring)
 }
 
 /**
- * nfp_net_tx_flush() - Free any untransmitted buffers currently on the TX ring
- * @tx_ring:     TX ring structure
+ * nfp_net_tx_ring_reset() - Free any untransmitted buffers and reset pointers
+ * @nn:		NFP Net device
+ * @tx_ring:	TX ring structure
  *
  * Assumes that the device is stopped
  */
-static void nfp_net_tx_flush(struct nfp_net_tx_ring *tx_ring)
+static void
+nfp_net_tx_ring_reset(struct nfp_net *nn, struct nfp_net_tx_ring *tx_ring)
 {
-	struct nfp_net_r_vector *r_vec = tx_ring->r_vec;
-	struct nfp_net *nn = r_vec->nfp_net;
-	struct pci_dev *pdev = nn->pdev;
 	const struct skb_frag_struct *frag;
 	struct netdev_queue *nd_q;
-	struct sk_buff *skb;
-	int nr_frags;
-	int fidx;
-	int idx;
+	struct pci_dev *pdev = nn->pdev;
 
 	while (tx_ring->rd_p != tx_ring->wr_p) {
-		idx = tx_ring->rd_p % tx_ring->cnt;
+		int nr_frags, fidx, idx;
+		struct sk_buff *skb;
 
+		idx = tx_ring->rd_p % tx_ring->cnt;
 		skb = tx_ring->txbufs[idx].skb;
-		if (skb) {
-			nr_frags = skb_shinfo(skb)->nr_frags;
-			fidx = tx_ring->txbufs[idx].fidx;
-
-			if (fidx == -1) {
-				/* unmap head */
-				dma_unmap_single(&pdev->dev,
-						 tx_ring->txbufs[idx].dma_addr,
-						 skb_headlen(skb),
-						 DMA_TO_DEVICE);
-			} else {
-				/* unmap fragment */
-				frag = &skb_shinfo(skb)->frags[fidx];
-				dma_unmap_page(&pdev->dev,
-					       tx_ring->txbufs[idx].dma_addr,
-					       skb_frag_size(frag),
-					       DMA_TO_DEVICE);
-			}
-
-			/* check for last gather fragment */
-			if (fidx == nr_frags - 1)
-				dev_kfree_skb_any(skb);
-
-			tx_ring->txbufs[idx].dma_addr = 0;
-			tx_ring->txbufs[idx].skb = NULL;
-			tx_ring->txbufs[idx].fidx = -2;
+		nr_frags = skb_shinfo(skb)->nr_frags;
+		fidx = tx_ring->txbufs[idx].fidx;
+
+		if (fidx == -1) {
+			/* unmap head */
+			dma_unmap_single(&pdev->dev,
+					 tx_ring->txbufs[idx].dma_addr,
+					 skb_headlen(skb), DMA_TO_DEVICE);
+		} else {
+			/* unmap fragment */
+			frag = &skb_shinfo(skb)->frags[fidx];
+			dma_unmap_page(&pdev->dev,
+				       tx_ring->txbufs[idx].dma_addr,
+				       skb_frag_size(frag), DMA_TO_DEVICE);
 		}
 
-		memset(&tx_ring->txds[idx], 0, sizeof(tx_ring->txds[idx]));
+		/* check for last gather fragment */
+		if (fidx == nr_frags - 1)
+			dev_kfree_skb_any(skb);
+
+		tx_ring->txbufs[idx].dma_addr = 0;
+		tx_ring->txbufs[idx].skb = NULL;
+		tx_ring->txbufs[idx].fidx = -2;
 
 		tx_ring->qcp_rd_p++;
 		tx_ring->rd_p++;
 	}
 
+	memset(tx_ring->txds, 0, sizeof(*tx_ring->txds) * tx_ring->cnt);
+	tx_ring->wr_p = 0;
+	tx_ring->rd_p = 0;
+	tx_ring->qcp_rd_p = 0;
+	tx_ring->wr_ptr_add = 0;
+
 	nd_q = netdev_get_tx_queue(nn->netdev, tx_ring->idx);
 	netdev_tx_reset_queue(nd_q);
 }
@@ -1363,11 +1361,6 @@ static void nfp_net_tx_ring_free(struct nfp_net_tx_ring *tx_ring)
 			  tx_ring->txds, tx_ring->dma);
 
 	tx_ring->cnt = 0;
-	tx_ring->wr_p = 0;
-	tx_ring->rd_p = 0;
-	tx_ring->qcp_rd_p = 0;
-	tx_ring->wr_ptr_add = 0;
-
 	tx_ring->txbufs = NULL;
 	tx_ring->txds = NULL;
 	tx_ring->dma = 0;
@@ -1860,7 +1853,7 @@ static int nfp_net_netdev_close(struct net_device *netdev)
 	 */
 	for (r = 0; r < nn->num_r_vecs; r++) {
 		nfp_net_rx_flush(nn->r_vecs[r].rx_ring);
-		nfp_net_tx_flush(nn->r_vecs[r].tx_ring);
+		nfp_net_tx_ring_reset(nn, nn->r_vecs[r].tx_ring);
 		nfp_net_rx_ring_free(nn->r_vecs[r].rx_ring);
 		nfp_net_tx_ring_free(nn->r_vecs[r].tx_ring);
 		nfp_net_cleanup_vector(nn, &nn->r_vecs[r]);
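
For anyone who wants to poke at the drain-and-rewind logic outside the
driver, below is a minimal userspace C model of the invariant the new
nfp_net_tx_ring_reset() enforces: walk rd_p..wr_p freeing whatever is
still queued, then zero the descriptor array and rewind every pointer so
the ring can be reconfigured from scratch.  This is a simplified sketch,
not the kernel code -- the struct fields are stand-ins, the DMA
unmapping has no userspace equivalent and is omitted, and free() stands
in for dev_kfree_skb_any().

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct txbuf { void *skb; };           /* stand-in for the driver's TX buffer */
struct txd   { unsigned int flags; };  /* stand-in for the HW descriptor */

struct tx_ring {
	unsigned int cnt;       /* ring size */
	unsigned int wr_p;      /* producer: free-running write pointer */
	unsigned int rd_p;      /* consumer: free-running read pointer */
	unsigned int qcp_rd_p;  /* mirror of the queue controller read pointer */
	unsigned int wr_ptr_add;
	struct txbuf *txbufs;
	struct txd *txds;
};

/* Drain and rewind, mirroring what nfp_net_tx_ring_reset() does once the
 * FW ring state has been cleared: free in-flight buffers, zero the whole
 * descriptor array (not one entry per iteration), reset all pointers. */
static void tx_ring_reset(struct tx_ring *ring)
{
	while (ring->rd_p != ring->wr_p) {
		unsigned int idx = ring->rd_p % ring->cnt;

		free(ring->txbufs[idx].skb);   /* dev_kfree_skb_any() in the driver */
		ring->txbufs[idx].skb = NULL;

		ring->qcp_rd_p++;
		ring->rd_p++;
	}

	memset(ring->txds, 0, sizeof(*ring->txds) * ring->cnt);
	ring->wr_p = 0;
	ring->rd_p = 0;
	ring->qcp_rd_p = 0;
	ring->wr_ptr_add = 0;
}

int main(void)
{
	struct tx_ring ring = { .cnt = 8 };
	int i;

	ring.txbufs = calloc(ring.cnt, sizeof(*ring.txbufs));
	ring.txds = calloc(ring.cnt, sizeof(*ring.txds));

	/* queue three "packets", then reset with them still in flight */
	for (i = 0; i < 3; i++)
		ring.txbufs[ring.wr_p++ % ring.cnt].skb = malloc(64);

	tx_ring_reset(&ring);
	printf("rd_p=%u wr_p=%u\n", ring.rd_p, ring.wr_p);  /* both 0 */

	free(ring.txbufs);
	free(ring.txds);
	return 0;
}

Note how the model makes the split visible: the reset owns the pointer
rewind, which is why the patch can drop those assignments from
nfp_net_tx_ring_free() -- freeing no longer has to double as a reset.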