From patchwork Fri Jun 22 14:30:40 2012
X-Patchwork-Submitter: Mel Gorman
X-Patchwork-Id: 166622
X-Patchwork-Delegate: davem@davemloft.net
From: Mel Gorman
To: Andrew Morton
Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Neil Brown,
    Peter Zijlstra, Mike Christie, Eric B Munson, Eric Dumazet,
    Sebastian Andrzej Siewior, Mel Gorman
Subject: [PATCH 13/16] mm: Micro-optimise slab to avoid a function call
Date: Fri, 22 Jun 2012 15:30:40 +0100
Message-Id: <1340375443-22455-14-git-send-email-mgorman@suse.de>
In-Reply-To: <1340375443-22455-1-git-send-email-mgorman@suse.de>
References: <1340375443-22455-1-git-send-email-mgorman@suse.de>

Getting and putting objects in SLAB currently requires a function call,
but the bulk of the work is related to PFMEMALLOC reserves, which are
only consumed when network-backed storage is critical. Use an inline
function to determine if the function call is required.

Signed-off-by: Mel Gorman
---
 mm/slab.c | 28 ++++++++++++++++++++++++++--
 1 file changed, 26 insertions(+), 2 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index d9fe508..84f471e 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -117,6 +117,8 @@
 #include <linux/memory.h>
 #include <linux/prefetch.h>
 
+#include <net/sock.h>
+
 #include <asm/cacheflush.h>
 #include <asm/tlbflush.h>
 #include <asm/page.h>
@@ -991,7 +993,7 @@ out:
 	spin_unlock_irqrestore(&l3->list_lock, flags);
 }
 
-static void *ac_get_obj(struct kmem_cache *cachep, struct array_cache *ac,
+static void *__ac_get_obj(struct kmem_cache *cachep, struct array_cache *ac,
 						gfp_t flags, bool force_refill)
 {
 	int i;
@@ -1038,7 +1040,20 @@ static void *ac_get_obj(struct kmem_cache *cachep, struct array_cache *ac,
 	return objp;
 }
 
-static void ac_put_obj(struct kmem_cache *cachep, struct array_cache *ac,
+static inline void *ac_get_obj(struct kmem_cache *cachep,
+			struct array_cache *ac, gfp_t flags, bool force_refill)
+{
+	void *objp;
+
+	if (unlikely(sk_memalloc_socks()))
+		objp = __ac_get_obj(cachep, ac, flags, force_refill);
+	else
+		objp = ac->entry[--ac->avail];
+
+	return objp;
+}
+
+static void *__ac_put_obj(struct kmem_cache *cachep, struct array_cache *ac,
 								void *objp)
 {
 	if (unlikely(pfmemalloc_active)) {
@@ -1048,6 +1063,15 @@ static void ac_put_obj(struct kmem_cache *cachep, struct array_cache *ac,
 		set_obj_pfmemalloc(&objp);
 	}
 
+	return objp;
+}
+
+static inline void ac_put_obj(struct kmem_cache *cachep, struct array_cache *ac,
+								void *objp)
+{
+	if (unlikely(sk_memalloc_socks()))
+		objp = __ac_put_obj(cachep, ac, objp);
+
 	ac->entry[ac->avail++] = objp;
 }
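
The shape of the optimisation is: keep the rare PFMEMALLOC handling in an
out-of-line helper and let an inline wrapper take the common path with no
function call, just one predicted-not-taken branch. A minimal user-space
sketch of that pattern follows; it is not kernel code, and the names
obj_cache, cache_get(), __cache_get_slow() and the memalloc_active flag are
made-up stand-ins for the kernel's array_cache, ac_get_obj(), __ac_get_obj()
and sk_memalloc_socks().

/* Sketch only: inline fast path guarding a rarely taken slow path. */
#include <stdbool.h>
#include <stdio.h>

#define unlikely(x) __builtin_expect(!!(x), 0)

struct obj_cache {
	int avail;		/* number of cached object pointers */
	void *entry[16];	/* LIFO array of object pointers */
};

/* Rarely true: stands in for sk_memalloc_socks(). */
static bool memalloc_active;

/* Out-of-line slow path, only reached while reserves are in play.
 * A real implementation would skip or recycle PFMEMALLOC objects here;
 * the sketch just pops like the fast path does. */
static void *__cache_get_slow(struct obj_cache *ac)
{
	return ac->entry[--ac->avail];
}

/* Inline fast path: in the common case the pop is emitted at the call
 * site and no call/return happens at all. */
static inline void *cache_get(struct obj_cache *ac)
{
	if (unlikely(memalloc_active))
		return __cache_get_slow(ac);
	return ac->entry[--ac->avail];
}

int main(void)
{
	static int x = 42;
	struct obj_cache ac = { .avail = 0 };

	ac.entry[ac.avail++] = &x;		/* "put" an object */
	printf("%d\n", *(int *)cache_get(&ac));	/* fast-path "get" */
	return 0;
}

Compiled with optimisation, cache_get() is inlined at every call site, so
the usual case pays for a single branch instead of a call and return; that
saving at each allocation and free is the entire point of the patch.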