From patchwork Thu Apr 7 18:39:44 2016
X-Patchwork-Submitter: Jakub Kicinski
X-Patchwork-Id: 607633
X-Patchwork-Delegate: davem@davemloft.net
From: Jakub Kicinski <jakub.kicinski@netronome.com>
To: netdev@vger.kernel.org
Cc: Jakub Kicinski <jakub.kicinski@netronome.com>
Subject: [PATCH v5 net-next 11/15] nfp: sync ring state during FW reconfiguration
Date: Thu, 7 Apr 2016 19:39:44 +0100
Message-Id: <1460054388-471-12-git-send-email-jakub.kicinski@netronome.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1460054388-471-1-git-send-email-jakub.kicinski@netronome.com>
References: <1460054388-471-1-git-send-email-jakub.kicinski@netronome.com>
X-Mailing-List: netdev@vger.kernel.org

FW reconfiguration in .ndo_open()/.ndo_stop() should reset/restore queue
state.  Since we need IRQs to be disabled when filling rings on the RX path,
we have to move disable_irq() from .ndo_open() all the way up to IRQ
allocation.
nfp_net_start_vec() becomes trivial now so it's inlined.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
---
 .../net/ethernet/netronome/nfp/nfp_net_common.c   | 45 ++++++++--------------
 1 file changed, 16 insertions(+), 29 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
index 6c1ed8914416..ed23b9d348c3 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
@@ -1519,6 +1519,7 @@ nfp_net_prepare_vector(struct nfp_net *nn, struct nfp_net_r_vector *r_vec,
 		nn_err(nn, "Error requesting IRQ %d\n", entry->vector);
 		return err;
 	}
+	disable_irq(entry->vector);
 
 	/* Setup NAPI */
 	netif_napi_add(nn->netdev, &r_vec->napi,
@@ -1647,13 +1648,14 @@ static void nfp_net_clear_config_and_disable(struct nfp_net *nn)
 	nn_writel(nn, NFP_NET_CFG_CTRL, new_ctrl);
 
 	err = nfp_net_reconfig(nn, update);
-	if (err) {
+	if (err)
 		nn_err(nn, "Could not disable device: %d\n", err);
-		return;
-	}
 
-	for (r = 0; r < nn->num_r_vecs; r++)
+	for (r = 0; r < nn->num_r_vecs; r++) {
+		nfp_net_rx_ring_reset(nn->r_vecs[r].rx_ring);
+		nfp_net_tx_ring_reset(nn, nn->r_vecs[r].tx_ring);
 		nfp_net_vec_clear_ring_data(nn, r);
+	}
 
 	nn->ctrl = new_ctrl;
 }
@@ -1721,6 +1723,9 @@ static int __nfp_net_set_config_and_enable(struct nfp_net *nn)
 
 	nn->ctrl = new_ctrl;
 
+	for (r = 0; r < nn->num_r_vecs; r++)
+		nfp_net_rx_ring_fill_freelist(nn->r_vecs[r].rx_ring);
+
 	/* Since reconfiguration requests while NFP is down are ignored we
 	 * have to wipe the entire VXLAN configuration and reinitialize it.
 	 */
@@ -1749,26 +1754,6 @@ static int nfp_net_set_config_and_enable(struct nfp_net *nn)
 }
 
 /**
- * nfp_net_start_vec() - Start ring vector
- * @nn:      NFP Net device structure
- * @r_vec:   Ring vector to be started
- */
-static void
-nfp_net_start_vec(struct nfp_net *nn, struct nfp_net_r_vector *r_vec)
-{
-	unsigned int irq_vec;
-
-	irq_vec = nn->irq_entries[r_vec->irq_idx].vector;
-
-	disable_irq(irq_vec);
-
-	nfp_net_rx_ring_fill_freelist(r_vec->rx_ring);
-	napi_enable(&r_vec->napi);
-
-	enable_irq(irq_vec);
-}
-
-/**
  * nfp_net_open_stack() - Start the device from stack's perspective
  * @nn:      NFP Net device to reconfigure
  */
@@ -1776,8 +1761,10 @@ static void nfp_net_open_stack(struct nfp_net *nn)
 {
 	unsigned int r;
 
-	for (r = 0; r < nn->num_r_vecs; r++)
-		nfp_net_start_vec(nn, &nn->r_vecs[r]);
+	for (r = 0; r < nn->num_r_vecs; r++) {
+		napi_enable(&nn->r_vecs[r].napi);
+		enable_irq(nn->irq_entries[nn->r_vecs[r].irq_idx].vector);
+	}
 
 	netif_tx_wake_all_queues(nn->netdev);
 
@@ -1902,8 +1889,10 @@ static void nfp_net_close_stack(struct nfp_net *nn)
 	netif_carrier_off(nn->netdev);
 	nn->link_up = false;
 
-	for (r = 0; r < nn->num_r_vecs; r++)
+	for (r = 0; r < nn->num_r_vecs; r++) {
+		disable_irq(nn->irq_entries[nn->r_vecs[r].irq_idx].vector);
 		napi_disable(&nn->r_vecs[r].napi);
+	}
 
 	netif_tx_disable(nn->netdev);
 }
@@ -1917,9 +1906,7 @@ static void nfp_net_close_free_all(struct nfp_net *nn)
 	unsigned int r;
 
 	for (r = 0; r < nn->num_r_vecs; r++) {
-		nfp_net_rx_ring_reset(nn->r_vecs[r].rx_ring);
 		nfp_net_rx_ring_bufs_free(nn, nn->r_vecs[r].rx_ring);
-		nfp_net_tx_ring_reset(nn, nn->r_vecs[r].tx_ring);
 		nfp_net_rx_ring_free(nn->r_vecs[r].rx_ring);
 		nfp_net_tx_ring_free(nn->r_vecs[r].tx_ring);
 		nfp_net_cleanup_vector(nn, &nn->r_vecs[r]);
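
For readers skimming the hunks, below is a condensed sketch of the open/close
ordering the patch ends up with. It is not part of the patch: the two wrapper
functions (*_sketch) are hypothetical names used purely for illustration, and
only the helpers and fields visible in the diff above are assumed to exist.

/* Illustration only -- not driver source.  Error paths trimmed. */
static int nfp_net_open_sketch(struct nfp_net *nn)
{
	unsigned int r;
	int err;

	/* IRQs were left masked by disable_irq() in nfp_net_prepare_vector(),
	 * so the RX freelists can be filled here (inside
	 * __nfp_net_set_config_and_enable()) without racing the IRQ path.
	 */
	err = nfp_net_set_config_and_enable(nn);
	if (err)
		return err;

	/* nfp_net_open_stack(): only now enable NAPI and unmask the IRQs */
	for (r = 0; r < nn->num_r_vecs; r++) {
		napi_enable(&nn->r_vecs[r].napi);
		enable_irq(nn->irq_entries[nn->r_vecs[r].irq_idx].vector);
	}
	return 0;
}

static void nfp_net_close_sketch(struct nfp_net *nn)
{
	unsigned int r;

	/* nfp_net_close_stack(): mask IRQs before stopping NAPI */
	for (r = 0; r < nn->num_r_vecs; r++) {
		disable_irq(nn->irq_entries[nn->r_vecs[r].irq_idx].vector);
		napi_disable(&nn->r_vecs[r].napi);
	}

	/* FW disable and the ring resets now happen together here, which is
	 * why nfp_net_close_free_all() no longer resets the rings itself.
	 */
	nfp_net_clear_config_and_disable(nn);
}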