Patchwork [2/2] net: Update alloc frag to reduce get/put page usage and recycle pages

Submitter Alexander Duyck
Date July 12, 2012, 12:18 a.m.
Message ID <20120712001810.26542.61967.stgit@gitlad.jf.intel.com>
Permalink /patch/170534/
State Deferred
Delegated to: David Miller
Headers show

Comments

Alexander Duyck - July 12, 2012, 12:18 a.m.
This patch does several things.

First it reorders the netdev_alloc_frag code so that only one conditional
check is needed in most cases instead of 2.

Second it incorporates the atomic_set and atomic_sub_return logic from an
earlier proposed patch by Eric Dumazet allowing for a reduction in the
get_page/put_page overhead when dealing with frags.

Finally it also incorporates the page reuse code so that if the page count
is dropped to 0 we can just reinitialize the page and reuse it.

Cc: Eric Dumazet <edumazet@google.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
---

 net/core/skbuff.c |   37 +++++++++++++++++++++++++------------
 1 files changed, 25 insertions(+), 12 deletions(-)


--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Eric Dumazet - July 12, 2012, 12:29 a.m.
On Wed, 2012-07-11 at 17:18 -0700, Alexander Duyck wrote:
> This patch does several things.
> 
> First it reorders the netdev_alloc_frag code so that only one conditional
> check is needed in most cases instead of 2.
> 
> Second it incorporates the atomic_set and atomic_sub_return logic from an
> earlier proposed patch by Eric Dumazet allowing for a reduction in the
> get_page/put_page overhead when dealing with frags.
> 
> Finally it also incorporates the page reuse code so that if the page count
> is dropped to 0 we can just reinitialize the page and reuse it.
> 
> Cc: Eric Dumazet <edumazet@google.com>
> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
> ---


Hmm, I was working on a version using order-3 pages if available.

(or more exactly 32768 bytes chunks)

I am not sure how your version can help with typical 1500 allocations
(2 skbs per page)


David Miller - July 12, 2012, 1:11 a.m.
From: Eric Dumazet <eric.dumazet@gmail.com>
Date: Thu, 12 Jul 2012 02:29:27 +0200

> On Wed, 2012-07-11 at 17:18 -0700, Alexander Duyck wrote:
>> This patch does several things.
>> 
>> First it reorders the netdev_alloc_frag code so that only one conditional
>> check is needed in most cases instead of 2.
>> 
>> Second it incorporates the atomic_set and atomic_sub_return logic from an
>> earlier proposed patch by Eric Dumazet allowing for a reduction in the
>> get_page/put_page overhead when dealing with frags.
>> 
>> Finally it also incorporates the page reuse code so that if the page count
>> is dropped to 0 we can just reinitialize the page and reuse it.
>> 
>> Cc: Eric Dumazet <edumazet@google.com>
>> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
>> ---
> 
> 
> Hmm, I was working on a version using order-3 pages if available.
> 
> (or more exactly 32768 bytes chunks)
> 
> I am not sure how your version can help with typical 1500 allocations
> (2 skbs per page)

I'd like you two to sort things out before I apply anything, thanks :)
Alexander Duyck - July 12, 2012, 2:02 a.m.
On 7/11/2012 5:29 PM, Eric Dumazet wrote:
> On Wed, 2012-07-11 at 17:18 -0700, Alexander Duyck wrote:
>> This patch does several things.
>>
>> First it reorders the netdev_alloc_frag code so that only one conditional
>> check is needed in most cases instead of 2.
>>
>> Second it incorporates the atomic_set and atomic_sub_return logic from an
>> earlier proposed patch by Eric Dumazet allowing for a reduction in the
>> get_page/put_page overhead when dealing with frags.
>>
>> Finally it also incorporates the page reuse code so that if the page count
>> is dropped to 0 we can just reinitialize the page and reuse it.
>>
>> Cc: Eric Dumazet<edumazet@google.com>
>> Signed-off-by: Alexander Duyck<alexander.h.duyck@intel.com>
>> ---
>
> Hmm, I was working on a version using order-3 pages if available.
>
> (or more exactly 32768 bytes chunks)
>
> I am not sure how your version can help with typical 1500 allocations
> (2 skbs per page)
>
>
The gain will be minimal, if any, with 1500-byte allocations; however,
there shouldn't be a performance degradation.

I was thinking more of the ixgbe case where we are working with only 256 
byte allocations and can recycle pages in the case of GRO or TCP.  For 
ixgbe the advantages are significant since we drop a number of the 
get_page calls and get the advantage of the page recycling.  So for 
example with GRO enabled we should only have to allocate 1 page for 
headers every 16 buffers, and the 6 slots we use in that page have a 
good likelihood of being warm in the cache since we just keep looping on 
the same page.

Thanks,

Alex
Eric Dumazet - July 12, 2012, 5:06 a.m.
On Wed, 2012-07-11 at 19:02 -0700, Alexander Duyck wrote:

> The gain will be minimal, if any, with 1500-byte allocations; however,
> there shouldn't be a performance degradation.
> 
> I was thinking more of the ixgbe case where we are working with only 256 
> byte allocations and can recycle pages in the case of GRO or TCP.  For 
> ixgbe the advantages are significant since we drop a number of the 
> get_page calls and get the advantage of the page recycling.  So for 
> example with GRO enabled we should only have to allocate 1 page for 
> headers every 16 buffers, and the 6 slots we use in that page have a 
> good likelihood of being warm in the cache since we just keep looping on 
> the same page.
> 

It's not possible to get 16 buffers per 4096-byte page.

sizeof(struct skb_shared_info) = 0x140 (320 bytes)

Add 192 bytes (NET_SKB_PAD + 128)

That's a minimum of 512 bytes (but ixgbe uses more) per skb.

In practice for ixgbe, it's:

#define IXGBE_RXBUFFER_512   512    /* Used for packet split */
#define IXGBE_RX_HDR_SIZE IXGBE_RXBUFFER_512 

skb = netdev_alloc_skb_ip_align(rx_ring->netdev, IXGBE_RX_HDR_SIZE)

So 4 buffers per PAGE

Maybe you plan to use IXGBE_RXBUFFER_256 or IXGBE_RXBUFFER_128 ?



Alexander Duyck - July 12, 2012, 3:33 p.m.
On 07/11/2012 10:06 PM, Eric Dumazet wrote:
> On Wed, 2012-07-11 at 19:02 -0700, Alexander Duyck wrote:
>
>> The gain will be minimal, if any, with 1500-byte allocations; however,
>> there shouldn't be a performance degradation.
>>
>> I was thinking more of the ixgbe case where we are working with only 256 
>> byte allocations and can recycle pages in the case of GRO or TCP.  For 
>> ixgbe the advantages are significant since we drop a number of the 
>> get_page calls and get the advantage of the page recycling.  So for 
>> example with GRO enabled we should only have to allocate 1 page for 
>> headers every 16 buffers, and the 6 slots we use in that page have a 
>> good likelihood of being warm in the cache since we just keep looping on 
>> the same page.
>>
> It's not possible to get 16 buffers per 4096-byte page.
Actually I was talking about buffers from the device, not buffers from
the page.  However, it is possible to get 16 head_frag buffers from the
same 4K page if we consider recycling.  In the case of GRO we will end
up with the first buffer keeping the head_frag, and all of the remaining
head_frags will be freed before we call netdev_alloc_frag again.

So what ends up happening is: each GRO-assembled frame from ixgbe starts
with a recycled page used for the previously freed head_frags; the page
is dropped from netdev_alloc_frag after we run out of space; a new page
is allocated for use as head_frags; and finally those head_frags are
freed and recycled until we hit the end of the GRO frame and start over.
So if you count them all, we end up using the page up to 16 times, maybe
even more depending on how the page offset reset aligns with the start
of the GRO frame.

> sizeof(struct skb_shared_info) = 0x140 (320 bytes)
>
> Add 192 bytes (NET_SKB_PAD + 128)
>
> That's a minimum of 512 bytes (but ixgbe uses more) per skb.
>
> In practice for ixgbe, it's:
>
> #define IXGBE_RXBUFFER_512   512    /* Used for packet split */
> #define IXGBE_RX_HDR_SIZE IXGBE_RXBUFFER_512 
>
> skb = netdev_alloc_skb_ip_align(rx_ring->netdev, IXGBE_RX_HDR_SIZE)
>
> So 4 buffers per PAGE
>
> Maybe you plan to use IXGBE_RXBUFFER_256 or IXGBE_RXBUFFER_128 ?
I have a patch that is in testing in Jeff Kirsher's tree that uses
IXGBE_RXBUFFER_256.  With your recent changes it didn't make sense to
use 512 when we would only copy 256 bytes into the head.  With the size
set to 256 we will get 6 buffers per page without any recycling.

Thanks,

Alex

Patch

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 506f678..69f4add 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -296,9 +296,12 @@  EXPORT_SYMBOL(build_skb);
 struct netdev_alloc_cache {
 	struct page *page;
 	unsigned int offset;
+	unsigned int pagecnt_bias;
 };
 static DEFINE_PER_CPU(struct netdev_alloc_cache, netdev_alloc_cache);
 
+#define NETDEV_PAGECNT_BIAS (PAGE_SIZE / SMP_CACHE_BYTES)
+
 /**
  * netdev_alloc_frag - allocate a page fragment
  * @fragsz: fragment size
@@ -311,23 +314,33 @@  void *netdev_alloc_frag(unsigned int fragsz)
 	struct netdev_alloc_cache *nc;
 	void *data = NULL;
 	unsigned long flags;
+	unsigned int offset;
 
 	local_irq_save(flags);
 	nc = &__get_cpu_var(netdev_alloc_cache);
-	if (unlikely(!nc->page)) {
-refill:
+	offset = nc->offset;
+	if (unlikely(offset < fragsz)) {
+		BUG_ON(PAGE_SIZE < fragsz);
+
+		if (likely(nc->page) &&
+		    atomic_sub_and_test(nc->pagecnt_bias, &nc->page->_count))
+			goto recycle;
+
 		nc->page = alloc_page(GFP_ATOMIC | __GFP_COLD);
-		nc->offset = 0;
-	}
-	if (likely(nc->page)) {
-		if (nc->offset + fragsz > PAGE_SIZE) {
-			put_page(nc->page);
-			goto refill;
+		if (unlikely(!nc->page)) {
+			offset = 0;
+			goto end;
 		}
-		data = page_address(nc->page) + nc->offset;
-		nc->offset += fragsz;
-		get_page(nc->page);
-	}
+recycle:
+		atomic_set(&nc->page->_count, NETDEV_PAGECNT_BIAS);
+		nc->pagecnt_bias = NETDEV_PAGECNT_BIAS;
+		offset = PAGE_SIZE;
+	}
+	offset -= fragsz;
+	nc->pagecnt_bias--;
+	data = page_address(nc->page) + offset;
+end:
+	nc->offset = offset;
 	local_irq_restore(flags);
 	return data;
 }