From patchwork Thu Jul 12 00:18:10 2012
X-Patchwork-Submitter: "Duyck, Alexander H"
X-Patchwork-Id: 170534
X-Patchwork-Delegate: davem@davemloft.net
From: Alexander Duyck
Subject: [PATCH 2/2] net: Update alloc frag to reduce get/put page usage and recycle pages
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, jeffrey.t.kirsher@intel.com, alexander.duyck@gmail.com, Eric Dumazet, Alexander Duyck
Date: Wed, 11 Jul 2012 17:18:10 -0700
Message-ID: <20120712001810.26542.61967.stgit@gitlad.jf.intel.com>
In-Reply-To: <20120712001804.26542.2889.stgit@gitlad.jf.intel.com>
References: <20120712001804.26542.2889.stgit@gitlad.jf.intel.com>
X-Mailing-List: netdev@vger.kernel.org

This patch does several things.
First, it reorders the netdev_alloc_frag code so that only one conditional check is needed in most cases instead of two. Second, it incorporates the atomic_set and atomic_sub_return logic from an earlier patch proposed by Eric Dumazet, reducing the get_page/put_page overhead when dealing with frags. Finally, it incorporates the page reuse code so that if the page count drops to zero we can simply reinitialize the page and reuse it.

Cc: Eric Dumazet
Signed-off-by: Alexander Duyck
---
 net/core/skbuff.c |   37 +++++++++++++++++++++++++------------
 1 files changed, 25 insertions(+), 12 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 506f678..69f4add 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -296,9 +296,12 @@ EXPORT_SYMBOL(build_skb);
 struct netdev_alloc_cache {
 	struct page *page;
 	unsigned int offset;
+	unsigned int pagecnt_bias;
 };
 static DEFINE_PER_CPU(struct netdev_alloc_cache, netdev_alloc_cache);
 
+#define NETDEV_PAGECNT_BIAS (PAGE_SIZE / SMP_CACHE_BYTES)
+
 /**
  * netdev_alloc_frag - allocate a page fragment
  * @fragsz: fragment size
@@ -311,23 +314,33 @@ void *netdev_alloc_frag(unsigned int fragsz)
 	struct netdev_alloc_cache *nc;
 	void *data = NULL;
 	unsigned long flags;
+	unsigned int offset;
 
 	local_irq_save(flags);
 	nc = &__get_cpu_var(netdev_alloc_cache);
-	if (unlikely(!nc->page)) {
-refill:
+	offset = nc->offset;
+	if (unlikely(offset < fragsz)) {
+		BUG_ON(PAGE_SIZE < fragsz);
+
+		if (likely(nc->page) &&
+		    atomic_sub_and_test(nc->pagecnt_bias, &nc->page->_count))
+			goto recycle;
+
 		nc->page = alloc_page(GFP_ATOMIC | __GFP_COLD);
-		nc->offset = 0;
-	}
-	if (likely(nc->page)) {
-		if (nc->offset + fragsz > PAGE_SIZE) {
-			put_page(nc->page);
-			goto refill;
+		if (unlikely(!nc->page)) {
+			offset = 0;
+			goto end;
 		}
-		data = page_address(nc->page) + nc->offset;
-		nc->offset += fragsz;
-		get_page(nc->page);
-	}
+recycle:
+		atomic_set(&nc->page->_count, NETDEV_PAGECNT_BIAS);
+		nc->pagecnt_bias = NETDEV_PAGECNT_BIAS;
+		offset = PAGE_SIZE;
+	}
+	offset -= fragsz;
+	nc->pagecnt_bias--;
+	data = page_address(nc->page) + offset;
+end:
+	nc->offset = offset;
 	local_irq_restore(flags);
 	return data;
 }