From patchwork Tue Oct 12 05:05:25 2010
X-Patchwork-Submitter: Eric Dumazet
X-Patchwork-Id: 67501
X-Patchwork-Delegate: davem@davemloft.net
Subject: [PATCH net-next] net: allocate skbs on local node
From: Eric Dumazet
To: David Miller
Cc: netdev, Michael Chan, Eilon Greenstein, Andrew Morton,
 Christoph Hellwig, Christoph Lameter
In-Reply-To: <1286839363.30423.130.camel@edumazet-laptop>
References: <1286838210.30423.128.camel@edumazet-laptop>
 <1286839363.30423.130.camel@edumazet-laptop>
Date: Tue, 12 Oct 2010 07:05:25 +0200
Message-ID: <1286859925.30423.184.camel@edumazet-laptop>
List-ID: netdev@vger.kernel.org

On Tuesday, October 12, 2010 at 01:22 +0200, Eric Dumazet wrote:
> On Tuesday, October 12, 2010 at 01:03 +0200, Eric Dumazet wrote:
> >
> > For multiqueue devices, it makes more sense to allocate skbs on the
> > local node of the cpu handling RX interrupts. This allows each cpu to
> > manipulate its own slub/slab queues/structures without doing expensive
> > cross-node work.
> >
> > For non-multiqueue devices, IRQ affinity should be set so that a cpu
> > close to the device services its interrupts. Even if it is not set,
> > using dev_alloc_skb() is faster.
> >
> > Signed-off-by: Eric Dumazet
>
> Or maybe revert:
>
> commit b30973f877fea1a3fb84e05599890fcc082a88e5
> Author: Christoph Hellwig
> Date:   Wed Dec 6 20:32:36 2006 -0800
>
>     [PATCH] node-aware skb allocation
>
>     Node-aware allocation of skbs for the receive path.
>
>     Details:
>
>      - __alloc_skb gets a new node argument and calls the node-aware
>        slab functions with it.
>      - netdev_alloc_skb passes the node number it gets from dev_to_node
>        to it, everyone else passes -1 (any node)
>
>     Signed-off-by: Christoph Hellwig
>     Cc: Christoph Lameter
>     Cc: "David S. Miller"
>     Signed-off-by: Andrew Morton
>
> Apparently, only Christoph and Andrew signed it off.
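
The convention that quoted commit describes is visible in the code this
patch removes; condensed to its core it looked like the sketch below
(skb_reserve() and the skb->dev assignment are omitted here; the removed
lines appear in full in the diff further down):

/*
 * Condensed sketch of the node-aware convention being removed:
 * netdev_alloc_skb() pinned skb memory to the NIC's memory node via
 * dev_to_node(), while every other caller passed -1 ("any node").
 */
struct sk_buff *__netdev_alloc_skb(struct net_device *dev,
				   unsigned int length, gfp_t gfp_mask)
{
	int node = dev->dev.parent ? dev_to_node(dev->dev.parent) : -1;

	return __alloc_skb(length + NET_SKB_PAD, gfp_mask, 0, node);
}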
[PATCH net-next] net: allocate skbs on local node

commit b30973f877 (node-aware skb allocation) spread a wrong habit of
allocating net drivers' skbs on a given memory node: the one closest to
the NIC hardware.

This is wrong because as soon as we try to scale the network stack, we
need many cpus to handle traffic, and these cpus hit slub/slab
management with cross-node allocations/frees whenever they have to
alloc/free skbs bound to a central node.

skbs allocated in the RX path are ephemeral; they have a very short
lifetime, so the extra cost of maintaining NUMA affinity is too
expensive. What appeared to be a nice idea four years ago is in fact a
bad one.

In 2010, NIC hardware is multiqueue, or we use RPS to spread the load,
and two 10Gb NICs might deliver more than 28 million packets per
second, needing all the available cpus.

The cost of cross-node handling in the network and vm stacks outweighs
the small benefit the hardware got from doing its DMA transfer into its
'local' memory node at RX time. Even differentiating the two
allocations done for one skb (the sk_buff on the local node, the data
part on the NIC hardware node) is not enough to bring good performance.

Signed-off-by: Eric Dumazet
Acked-by: Tom Herbert
---
 include/linux/skbuff.h |   20 ++++++++++++++++----
 net/core/skbuff.c      |   13 +------------
 2 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 0b53c43..05a358f 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -496,13 +496,13 @@ extern struct sk_buff *__alloc_skb(unsigned int size,
 static inline struct sk_buff *alloc_skb(unsigned int size,
 					gfp_t priority)
 {
-	return __alloc_skb(size, priority, 0, -1);
+	return __alloc_skb(size, priority, 0, NUMA_NO_NODE);
 }
 
 static inline struct sk_buff *alloc_skb_fclone(unsigned int size,
 					       gfp_t priority)
 {
-	return __alloc_skb(size, priority, 1, -1);
+	return __alloc_skb(size, priority, 1, NUMA_NO_NODE);
 }
 
 extern bool skb_recycle_check(struct sk_buff *skb, int skb_size);
@@ -1563,13 +1563,25 @@ static inline struct sk_buff *netdev_alloc_skb_ip_align(struct net_device *dev,
 	return skb;
 }
 
-extern struct page *__netdev_alloc_page(struct net_device *dev, gfp_t gfp_mask);
+/**
+ * __netdev_alloc_page - allocate a page for ps-rx on a specific device
+ * @dev: network device to receive on
+ * @gfp_mask: alloc_pages_node mask
+ *
+ * Allocate a new page. dev currently unused.
+ *
+ * %NULL is returned if there is no free memory.
+ */
+static inline struct page *__netdev_alloc_page(struct net_device *dev, gfp_t gfp_mask)
+{
+	return alloc_pages_node(NUMA_NO_NODE, gfp_mask, 0);
+}
 
 /**
  * netdev_alloc_page - allocate a page for ps-rx on a specific device
  * @dev: network device to receive on
  *
- * Allocate a new page node local to the specified device.
+ * Allocate a new page. dev currently unused.
  *
  * %NULL is returned if there is no free memory.
  */
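
Why NUMA_NO_NODE yields local-node allocations: alloc_pages_node()
treats a negative node id as "use the node of the executing cpu". Below
is a simplified, renamed sketch of that helper, based on
include/linux/gfp.h of this era; it is illustrative, not the verbatim
kernel code:

#include <linux/gfp.h>
#include <linux/topology.h>

/*
 * Simplified rendering of alloc_pages_node(), showing the fallback
 * that makes NUMA_NO_NODE behave as "local node of the executing cpu".
 */
static inline struct page *alloc_pages_node_sketch(int nid, gfp_t gfp_mask,
						   unsigned int order)
{
	/* An unknown (negative) node means the current node. */
	if (nid < 0)
		nid = numa_node_id();

	return __alloc_pages(gfp_mask, order, node_zonelist(nid, gfp_mask));
}

The slab side behaves the same way, so __alloc_skb() called with
NUMA_NO_NODE on the cpu taking RX interrupts touches only that cpu's
local slub/slab structures.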
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 752c197..4e8b82e 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -247,10 +247,9 @@ EXPORT_SYMBOL(__alloc_skb);
 struct sk_buff *__netdev_alloc_skb(struct net_device *dev,
 		unsigned int length, gfp_t gfp_mask)
 {
-	int node = dev->dev.parent ? dev_to_node(dev->dev.parent) : -1;
 	struct sk_buff *skb;
 
-	skb = __alloc_skb(length + NET_SKB_PAD, gfp_mask, 0, node);
+	skb = __alloc_skb(length + NET_SKB_PAD, gfp_mask, 0, NUMA_NO_NODE);
 	if (likely(skb)) {
 		skb_reserve(skb, NET_SKB_PAD);
 		skb->dev = dev;
@@ -259,16 +258,6 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev,
 }
 EXPORT_SYMBOL(__netdev_alloc_skb);
 
-struct page *__netdev_alloc_page(struct net_device *dev, gfp_t gfp_mask)
-{
-	int node = dev->dev.parent ? dev_to_node(dev->dev.parent) : -1;
-	struct page *page;
-
-	page = alloc_pages_node(node, gfp_mask, 0);
-	return page;
-}
-EXPORT_SYMBOL(__netdev_alloc_page);
-
 void skb_add_rx_frag(struct sk_buff *skb, int i, struct page *page, int off,
 		     int size)
 {
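
For illustration, here is how a driver RX refill path picks up the new
behaviour simply by running on the cpu that services the interrupt.
This is a hypothetical sketch: example_rx_ring, example_rx_refill and
EXAMPLE_RX_BUF_LEN are invented for this example and come from no
in-tree driver:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

#define EXAMPLE_RX_BUF_LEN	1522	/* hypothetical buffer size */

/* Hypothetical RX ring state, for illustration only. */
struct example_rx_ring {
	struct net_device *netdev;
	unsigned int to_fill;		/* descriptors awaiting an skb */
};

static void example_rx_refill(struct example_rx_ring *ring)
{
	struct sk_buff *skb;

	while (ring->to_fill) {
		/*
		 * Runs in RX interrupt/NAPI context: with this patch,
		 * both the sk_buff and its data come from the local node
		 * of the executing cpu, no longer from dev_to_node().
		 */
		skb = netdev_alloc_skb_ip_align(ring->netdev,
						EXAMPLE_RX_BUF_LEN);
		if (!skb)
			break;	/* out of memory; retry on next interrupt */

		/* ... post skb->data to the hardware descriptor here ... */
		ring->to_fill--;
	}
}

A driver that genuinely wants device-node memory for long-lived
structures (e.g. DMA descriptor rings) can still ask for it explicitly
with kmalloc_node(..., dev_to_node(dev)); this patch only changes the
default for short-lived RX skbs.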