From patchwork Thu Jun 20 08:15:51 2013
X-Patchwork-Submitter: Eric Dumazet
X-Patchwork-Id: 252795
X-Patchwork-Delegate: davem@davemloft.net
Message-ID: <1371716151.3252.390.camel@edumazet-glaptop>
Subject: [PATCH net-next] net: allow large number of tx queues
From: Eric Dumazet
To: "Michael S. Tsirkin"
Cc: Jason Wang, davem@davemloft.net, hkchu@google.com, netdev
Date: Thu, 20 Jun 2013 01:15:51 -0700
In-Reply-To: <20130619180709.GA15017@redhat.com>
References: <1371620452-49349-1-git-send-email-jasowang@redhat.com>
 <1371620452-49349-2-git-send-email-jasowang@redhat.com>
 <1371623518.3252.267.camel@edumazet-glaptop>
 <20130619091132.GA2816@redhat.com>
 <1371635763.3252.289.camel@edumazet-glaptop>
 <20130619154059.GA13735@redhat.com>
 <1371657511.3252.324.camel@edumazet-glaptop>
 <20130619180709.GA15017@redhat.com>
X-Mailing-List: netdev@vger.kernel.org

From: Eric Dumazet

On Wed, 2013-06-19 at 21:07 +0300, Michael S. Tsirkin wrote:
> As we have explicit code to recover, maybe set __GFP_NOWARN?
> If we are out of memory, vzalloc will warn too, we don't
> need two warnings.

Yes, that's absolutely the right thing to do.

It's not yet clear whether this patch is really needed, but here it is.

Thanks

[PATCH net-next] net: allow large number of tx queues

netif_alloc_netdev_queues() uses kcalloc() to allocate memory for the
"struct netdev_queue *_tx" array.

For a large number of tx queues, kcalloc() might fail, so this patch
falls back to vzalloc().

As vmalloc() adds overhead on a critical network path, add __GFP_REPEAT
to the kzalloc() flags so the fallback is taken only when really needed.

Signed-off-by: Eric Dumazet
Acked-by: Michael S. Tsirkin
---
 net/core/dev.c | 26 +++++++++++++++++++-------
 1 file changed, 19 insertions(+), 7 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index fa007db..722f633 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -130,6 +130,7 @@
 #include
 #include
 #include
+#include <linux/vmalloc.h>
 
 #include "net-sysfs.h"
 
@@ -5253,17 +5254,28 @@ static void netdev_init_one_queue(struct net_device *dev,
 #endif
 }
 
+static void netif_free_tx_queues(struct net_device *dev)
+{
+	if (is_vmalloc_addr(dev->_tx))
+		vfree(dev->_tx);
+	else
+		kfree(dev->_tx);
+}
+
 static int netif_alloc_netdev_queues(struct net_device *dev)
 {
 	unsigned int count = dev->num_tx_queues;
 	struct netdev_queue *tx;
+	size_t sz = count * sizeof(*tx);
 
-	BUG_ON(count < 1);
-
-	tx = kcalloc(count, sizeof(struct netdev_queue), GFP_KERNEL);
-	if (!tx)
-		return -ENOMEM;
+	BUG_ON(count < 1 || count > 0xffff);
 
+	tx = kzalloc(sz, GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);
+	if (!tx) {
+		tx = vzalloc(sz);
+		if (!tx)
+			return -ENOMEM;
+	}
 	dev->_tx = tx;
 
 	netdev_for_each_tx_queue(dev, netdev_init_one_queue, NULL);
@@ -5811,7 +5823,7 @@ free_all:
 
 free_pcpu:
 	free_percpu(dev->pcpu_refcnt);
-	kfree(dev->_tx);
+	netif_free_tx_queues(dev);
 #ifdef CONFIG_RPS
 	kfree(dev->_rx);
 #endif
@@ -5836,7 +5848,7 @@ void free_netdev(struct net_device *dev)
 
 	release_net(dev_net(dev));
 
-	kfree(dev->_tx);
+	netif_free_tx_queues(dev);
 #ifdef CONFIG_RPS
 	kfree(dev->_rx);
 #endif
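
(For illustration only, not something this patch adds: the same "try a quiet
kzalloc(), fall back to vzalloc(), free with whichever allocator actually
satisfied the request" pattern could be factored into a pair of generic
helpers along the lines below. The kv_zalloc()/kv_free() names are made up
for this sketch and do not exist in the tree.)

#include <linux/slab.h>      /* kzalloc(), kfree() */
#include <linux/vmalloc.h>   /* vzalloc(), vfree() */
#include <linux/mm.h>        /* is_vmalloc_addr() */

/* Try physically contiguous memory first, without a failure splat. */
static void *kv_zalloc(size_t sz)
{
	void *p = kzalloc(sz, GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);

	if (!p)
		p = vzalloc(sz);	/* vzalloc() warns on its own if this fails */
	return p;
}

/* Free with the allocator that matches the address. */
static void kv_free(const void *p)
{
	if (is_vmalloc_addr(p))
		vfree(p);
	else
		kfree(p);
}

A caller pairs kv_zalloc() with kv_free() and never has to remember whether
the slab or the vmalloc path was taken; that is exactly the job
netif_free_tx_queues() does for dev->_tx in the patch above.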