From patchwork Sun Apr 30 13:24:47 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Kirsher, Jeffrey T" <jeffrey.t.kirsher@intel.com>
X-Patchwork-Id: 756861
X-Patchwork-Delegate: davem@davemloft.net
From: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
To: davem@davemloft.net
Cc: Jacob Keller, netdev@vger.kernel.org, nhorman@redhat.com,
	sassmann@redhat.com, jogreene@redhat.com, Jeff Kirsher
Subject: [net-next 09/13] i40evf: remove needless min_t() on num_online_cpus()*2
Date: Sun, 30 Apr 2017 06:24:47 -0700
Message-Id: <20170430132451.68494-10-jeffrey.t.kirsher@intel.com>
X-Mailer: git-send-email 2.12.2
In-Reply-To: <20170430132451.68494-1-jeffrey.t.kirsher@intel.com>
References: <20170430132451.68494-1-jeffrey.t.kirsher@intel.com>
X-Mailing-List: netdev@vger.kernel.org

From: Jacob Keller

We already set pairs to the value of adapter->num_active_queues. This
value is limited by vsi_res->num_queue_pairs and num_online_cpus().
This means that pairs is by definition already smaller than
num_online_cpus()*2, so we don't even need to bother with this check.
Let's just remove it and update the comment.

Signed-off-by: Jacob Keller
Tested-by: Andrew Bowers
Signed-off-by: Jeff Kirsher
---
 drivers/net/ethernet/intel/i40evf/i40evf_main.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40evf/i40evf_main.c b/drivers/net/ethernet/intel/i40evf/i40evf_main.c
index 8e6276d864e6..89035ee01679 100644
--- a/drivers/net/ethernet/intel/i40evf/i40evf_main.c
+++ b/drivers/net/ethernet/intel/i40evf/i40evf_main.c
@@ -1271,13 +1271,13 @@ static int i40evf_set_interrupt_capability(struct i40evf_adapter *adapter)
 	}
 	pairs = adapter->num_active_queues;
 
-	/* It's easy to be greedy for MSI-X vectors, but it really
-	 * doesn't do us much good if we have a lot more vectors
-	 * than CPU's. So let's be conservative and only ask for
-	 * (roughly) twice the number of vectors as there are CPU's.
+	/* It's easy to be greedy for MSI-X vectors, but it really doesn't do
+	 * us much good if we have more vectors than CPUs. However, we already
+	 * limit the total number of queues by the number of CPUs so we do not
+	 * need any further limiting here.
 	 */
-	v_budget = min_t(int, pairs, (int)(num_online_cpus() * 2)) + NONQ_VECS;
-	v_budget = min_t(int, v_budget, (int)adapter->vf_res->max_vectors);
+	v_budget = min_t(int, pairs + NONQ_VECS,
+			 (int)adapter->vf_res->max_vectors);
 
 	adapter->msix_entries = kcalloc(v_budget, sizeof(struct msix_entry),
 					GFP_KERNEL);
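
The argument in the commit message can be checked with a few lines of
ordinary C. The sketch below is illustrative only: it is not part of the
patch, vector_budget() is a hypothetical stand-in for the budget math in
i40evf_set_interrupt_capability(), and NONQ_VECS is passed as a parameter
rather than using the driver's definition. Given the invariant that pairs
never exceeds the CPU count, the old min_t() against num_online_cpus() * 2
can never change the result:

#include <assert.h>
#include <stdio.h>

static int vector_budget(int pairs, int ncpus, int max_vectors,
			 int nonq_vecs)
{
	int old_budget, new_budget;

	/* Invariant from earlier in the driver: num_active_queues (and
	 * therefore pairs) is already limited by num_online_cpus().
	 */
	assert(pairs <= ncpus);

	/* Old computation: clamp pairs to twice the CPU count first. */
	old_budget = (pairs < 2 * ncpus ? pairs : 2 * ncpus) + nonq_vecs;

	/* New computation: the clamp is gone. */
	new_budget = pairs + nonq_vecs;

	/* Since pairs <= ncpus < 2 * ncpus, the clamp never fired. */
	assert(old_budget == new_budget);

	/* Both versions still respect the VF's MSI-X vector limit. */
	return new_budget < max_vectors ? new_budget : max_vectors;
}

int main(void)
{
	/* Example: 8 active queue pairs on an 8-CPU system, with 16
	 * MSI-X vectors available and 1 non-queue vector.
	 */
	printf("v_budget = %d\n", vector_budget(8, 8, 16, 1));
	return 0;
}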