From patchwork Thu Feb  4 11:26:02 2010
From: Chris Torek <chris.torek@windriver.com>
To: sparclinux@vger.kernel.org
Cc: chris.torek@gmail.com
Subject: [PATCH 8/8] niu: rxflow integration
Date: Thu,  4 Feb 2010 04:26:02 -0700
X-Mailer: git-send-email 1.6.0.4.766.g6fc4a
References: <1265282762-13954-1-git-send-email-chris.torek@windriver.com>
 <14d7f5a63a7026b4413d4b4efa4ce6ddea0e055b.1265231568.git.chris.torek@windriver.com>
 <9a55d2f53e2c1d5bbc8864ef7a0fb46d84317f48.1265231568.git.chris.torek@windriver.com>
 <73c852f8f8035f5a432fba64e58b39737e2adde5.1265231569.git.chris.torek@windriver.com>
 <59e1f00f42c92d3dafeef5d713bf0b11149f065d.1265231569.git.chris.torek@windriver.com>
 <6babb06d2d37fc1ea764d37b86c5a589f387201c.1265231569.git.chris.torek@windriver.com>

Set the number of CPUs used to handle separated flows on receive.
The default upper limit is 16, but we never use more than half of
the online processors.

Signed-off-by: Hong H. Pham
Signed-off-by: Chris Torek
---
 drivers/net/niu.c |   30 ++++++++++++++++++++++++++++++
 1 files changed, 30 insertions(+), 0 deletions(-)

diff --git a/drivers/net/niu.c b/drivers/net/niu.c
index c82e970..488a4ae 100644
--- a/drivers/net/niu.c
+++ b/drivers/net/niu.c
@@ -76,6 +76,14 @@ static unsigned int rbr_refill_min __read_mostly = RBR_REFILL_MIN;
 module_param(rbr_refill_min, uint, 0644);
 MODULE_PARM_DESC(rbr_refill_min, "Minimum RBR refill threshold");
 
+/*
+ * An upper limit of 16 CPUs for rxflow separation usually works well.
+ * Lowering this value to 0 reverts the driver to pre-rxflow behavior.
+ */
+static unsigned int rxflow_max_cpus = 16;
+module_param(rxflow_max_cpus, uint, 0644);
+MODULE_PARM_DESC(rxflow_max_cpus, "Maximum CPUs for RXflow separation");
+
 #ifndef readq
 static u64 readq(void __iomem *reg)
 {
@@ -9812,6 +9820,26 @@ static void __devinit niu_driver_version(void)
 	pr_info("%s", version);
 }
 
+#ifdef CONFIG_SMP
+/*
+ * Set the number of CPUs used to handle flow separation on receive.
+ * We want half the online CPUs, or the module-parameter upper limit
+ * (normally 16), whichever is smaller.
+ */
+static void __devinit niu_set_default_rx_cpus(struct net_device *dev)
+{
+	unsigned int n;
+
+	n = num_online_cpus() / 2;
+	if (n > rxflow_max_cpus)
+		n = rxflow_max_cpus;
+
+	dev->rx_cpus = n;
+}
+#else
+#define niu_set_default_rx_cpus(dev)	do {} while (0)
+#endif /* CONFIG_SMP */
+
 static struct net_device * __devinit niu_alloc_and_init(
 	struct device *gen_dev, struct pci_dev *pdev,
 	struct of_device *op, const struct niu_ops *ops,
@@ -9842,6 +9870,8 @@ static struct net_device * __devinit niu_alloc_and_init(
 
 	np->port = port;
 
+	niu_set_default_rx_cpus(dev);
+
 	return dev;
 }
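
Usage note (not part of the patch itself): because rxflow_max_cpus is
declared via module_param() with mode 0644, it can be given on the
module command line or changed at runtime through sysfs by root. A
couple of illustrative invocations, assuming the driver is built as a
module (the source file is drivers/net/niu.c, so the module name is niu):

    modprobe niu rxflow_max_cpus=8
    echo 8 > /sys/module/niu/parameters/rxflow_max_cpus
    echo 0 > /sys/module/niu/parameters/rxflow_max_cpus   (pre-rxflow behavior)

Since niu_set_default_rx_cpus() runs from niu_alloc_and_init(), a
runtime write to the parameter only affects devices probed after the
write; already-initialized devices keep their computed rx_cpus value.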