From patchwork Wed Apr 19 13:25:52 2017
X-Patchwork-Submitter: "Michael, Alice"
X-Patchwork-Id: 752527
X-Patchwork-Delegate: jeffrey.t.kirsher@intel.com
From: Alice Michael
To: alice.michael@intel.com, intel-wired-lan@lists.osuosl.org
Date: Wed, 19 Apr 2017 09:25:52 -0400
Message-Id: <20170419132559.20459-3-alice.michael@intel.com>
In-Reply-To: <20170419132559.20459-1-alice.michael@intel.com>
References: <20170419132559.20459-1-alice.michael@intel.com>
Subject: [Intel-wired-lan] [next PATCH S71 03/10] i40e: amortize wait time when disabling lots of VFs

From: Jacob Keller

Just as we do in i40e_reset_all_vfs, save some time when freeing VFs by
amortizing the wait time for stopping queues. We can use
i40e_vsi_stop_rings_no_wait() to begin the process of stopping all the
VF rings at once. Then, once we've started the process on each VF, we
can begin waiting for the VFs to stop. This helps reduce the total wait
time by a large factor. (An illustrative sketch of the two-pass pattern
follows the diff below.)
Signed-off-by: Jacob Keller
Tested-by: Andrew Bowers
---
 drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
index 74977a2..2a47a64 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
@@ -1194,9 +1194,21 @@ void i40e_free_vfs(struct i40e_pf *pf)
 		usleep_range(1000, 2000);
 
 	i40e_notify_client_of_vf_enable(pf, 0);
-	for (i = 0; i < pf->num_alloc_vfs; i++)
+
+	/* Amortize wait time by stopping all VFs at the same time */
+	for (i = 0; i < pf->num_alloc_vfs; i++) {
+		if (test_bit(I40E_VF_STATE_INIT, &pf->vf[i].vf_states))
+			continue;
+
+		i40e_vsi_stop_rings_no_wait(pf->vsi[pf->vf[i].lan_vsi_idx]);
+	}
+
+	for (i = 0; i < pf->num_alloc_vfs; i++) {
 		if (test_bit(I40E_VF_STATE_INIT, &pf->vf[i].vf_states))
-			i40e_vsi_stop_rings(pf->vsi[pf->vf[i].lan_vsi_idx]);
+			continue;
+
+		i40e_vsi_wait_queues_disabled(pf->vsi[pf->vf[i].lan_vsi_idx]);
+	}
 
 	/* Disable IOV before freeing resources. This lets any VF drivers
 	 * running in the host get themselves cleaned up before we yank
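For readers skimming the diff, here is a minimal, self-contained sketch
of the two-pass wait-amortization pattern the patch applies. It is not
the driver code: begin_stop() and wait_stopped() are hypothetical
stand-ins for i40e_vsi_stop_rings_no_wait() and
i40e_vsi_wait_queues_disabled(), and the toy fake_vf struct stands in
for the VF state checked via I40E_VF_STATE_INIT.

#include <stdbool.h>
#include <stdio.h>

#define NUM_VFS 4

struct fake_vf {
	bool in_init;	/* mirrors I40E_VF_STATE_INIT: nothing to stop */
	int vsi_id;
};

static void begin_stop(int vsi_id)
{
	/* Request the stop and return immediately, without waiting. */
	printf("vsi %d: stop requested\n", vsi_id);
}

static void wait_stopped(int vsi_id)
{
	/* Block until the queues report disabled. */
	printf("vsi %d: queues disabled\n", vsi_id);
}

static void stop_all_vfs(struct fake_vf *vf, int n)
{
	int i;

	/* Pass 1: kick off every stop without waiting, as in the patch. */
	for (i = 0; i < n; i++) {
		if (vf[i].in_init)
			continue;
		begin_stop(vf[i].vsi_id);
	}

	/* Pass 2: all stops are now in flight; wait for each in turn. */
	for (i = 0; i < n; i++) {
		if (vf[i].in_init)
			continue;
		wait_stopped(vf[i].vsi_id);
	}
}

int main(void)
{
	struct fake_vf vfs[NUM_VFS] = {
		{ false, 0 }, { false, 1 }, { true, 2 }, { false, 3 },
	};

	stop_all_vfs(vfs, NUM_VFS);
	return 0;
}

The design point: if a single stop takes up to time T to quiesce, the
serialized stop-then-wait loop costs roughly N * T for N VFs, while the
two-pass version costs roughly T overall, because the per-VF delays
overlap while the hardware disables all of the queue sets concurrently.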