From patchwork Thu Feb 12 05:47:40 2009
X-Patchwork-Submitter: "Zhang, Yanmin"
X-Patchwork-Id: 22995
X-Patchwork-Delegate: davem@davemloft.net
Subject: Re: Mainline kernel OLTP performance update
From: "Zhang, Yanmin"
To: Pekka Enberg
Cc: Christoph Lameter, Andi Kleen, Matthew Wilcox, Nick Piggin,
	Andrew Morton, netdev@vger.kernel.org, Stephen Rothwell,
	matthew.r.wilcox@intel.com, chinang.ma@intel.com,
	linux-kernel@vger.kernel.org, sharad.c.tripathi@intel.com,
	arjan@linux.intel.com, suresh.b.siddha@intel.com,
	harita.chilukuri@intel.com, douglas.w.styner@intel.com,
	peter.xihong.wang@intel.com, hubert.nueckel@intel.com,
	chris.mason@oracle.com, srostedt@redhat.com,
	linux-scsi@vger.kernel.org, andrew.vasquez@qlogic.com,
	anirban.chakraborty@qlogic.com, Ingo Molnar
In-Reply-To: <1234416153.2604.387.camel@ymzhang>
References:
 <1232616517.11429.129.camel@ymzhang>
 <1232617672.14549.25.camel@penberg-laptop>
 <1232679773.11429.155.camel@ymzhang>
 <4979692B.3050703@cs.helsinki.fi>
 <1232697998.6094.17.camel@penberg-laptop>
 <1232699401.11429.163.camel@ymzhang>
 <1232703989.6094.29.camel@penberg-laptop>
 <1232765728.11429.193.camel@ymzhang>
 <84144f020901232336v71687223y2fb21ee081c7517f@mail.gmail.com>
 <1234416153.2604.387.camel@ymzhang>
Date: Thu, 12 Feb 2009 13:47:40 +0800
Message-Id: <1234417660.2604.391.camel@ymzhang>
X-Mailing-List: netdev@vger.kernel.org

On Thu, 2009-02-12 at 13:22 +0800, Zhang, Yanmin wrote:
> On Sat, 2009-01-24 at 09:36 +0200, Pekka Enberg wrote:
> > On Fri, 2009-01-23 at 10:22 -0500, Christoph Lameter wrote:
> > >> No there is another way. Increase the allocator order to 3 for the
> > >> kmalloc-8192 slab then multiple 8k blocks can be allocated from one of the
> > >> larger chunks of data gotten from the page allocator. That will allow slub
> > >> to do fast allocs.
> >
> > On Sat, Jan 24, 2009 at 4:55 AM, Zhang, Yanmin wrote:
> > > After I change kmalloc-8192/order to 3, the result (pinned netperf UDP-U-4k)
> > > difference between SLUB and SLQB becomes 1%, which can be considered as fluctuation.
> >
> > Great. We should fix calculate_order() to be order 3 for kmalloc-8192.
> > Are you interested in doing that?
>
> Pekka,
>
> Sorry for the late update.
> The default order of kmalloc-8192 on 2*4 stoakley is really an issue of calculate_order.

The previous patch had a compile warning. Please use the patch below.

From: Zhang Yanmin

The default order of kmalloc-8192 on 2*4 stoakley is an issue of calculate_order.

slab_size	order	name
-------------------------------------------------
     4096	    3	sgpool-128
     8192	    2	kmalloc-8192
    16384	    3	kmalloc-16384

kmalloc-8192's default order is smaller than sgpool-128's.
On a 4*4 tigerton machine, a similar issue appears with another kmem_cache. Function calculate_order uses 'min_objects /= 2;' to shrink the target object count; combined with the size calculation/checking in slab_order, the issue above sometimes appears. The patch below, against 2.6.29-rc2, fixes it. I checked the default orders of all kmem_caches and none becomes smaller than before, so the patch shouldn't hurt performance.

Signed-off-by: Zhang Yanmin

---

--- linux-2.6.29-rc2/mm/slub.c	2009-02-11 00:49:48.000000000 -0500
+++ linux-2.6.29-rc2_slubcalc_order/mm/slub.c	2009-02-12 00:47:52.000000000 -0500
@@ -1844,6 +1844,7 @@ static inline int calculate_order(int si
 	int order;
 	int min_objects;
 	int fraction;
+	int max_objects;
 
 	/*
 	 * Attempt to find best configuration for a slab. This
@@ -1856,6 +1857,9 @@ static inline int calculate_order(int si
 	min_objects = slub_min_objects;
 	if (!min_objects)
 		min_objects = 4 * (fls(nr_cpu_ids) + 1);
+	max_objects = (PAGE_SIZE << slub_max_order)/size;
+	min_objects = min(min_objects, max_objects);
+
 	while (min_objects > 1) {
 		fraction = 16;
 		while (fraction >= 4) {
@@ -1865,7 +1869,7 @@ static inline int calculate_order(int si
 				return order;
 			fraction /= 2;
 		}
-		min_objects /= 2;
+		min_objects --;
 	}
 
 	/*