
sky2: skb recycling

Message ID 20081020190922.7dd6510a@extreme
State Deferred, archived
Delegated to: Jeff Garzik

Commit Message

stephen hemminger Oct. 21, 2008, 2:09 a.m. UTC
Add support for recycling tx buffers into receive buffers.
This is experimental at this point.

Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>

---
Since the merge window appears to have passed, this can wait until 2.6.29


Comments

David Miller Oct. 21, 2008, 5:18 a.m. UTC | #1
From: Stephen Hemminger <shemminger@vyatta.com>
Date: Mon, 20 Oct 2008 19:09:22 -0700

> Add support for recycling tx buffers into receive buffers.
> This is experimental at this point.
> 
> Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>

Experimental, but do you have any performance data at all?
Just curious...
Eric Dumazet Oct. 21, 2008, 5:54 a.m. UTC | #2
Stephen Hemminger wrote:
> Add support for recycling tx buffers into receive buffers.
> This is experimental at this point.
> 

I really like this skb recycling.

For best performance, the driver should perform TX completion before RX completion, so that
freshly added skbs in the recycle queue have a chance of being reused right away. Apparently sky2
tx handling is fine-grained (events posted by the NIC).

But still, this mechanism also uses more skbs per device, especially on dormant ones.

If your RX ring has 256 skbs, then adding the recycle queue can add 256 more skbs... one MB or even more...

Maybe we should flush the recycle queue every 5 seconds or so, especially if the device is dormant
(no RX on it, little heartbeat TX)
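A minimal sketch of what such a timed flush could look like, reusing the rx_recycle queue from the patch below (the timer, the interval, and the rx_packets_last field are hypothetical additions, and locking against the NAPI path is glossed over):

	/* Hypothetical sketch of a periodic recycle-queue flush for a
	 * dormant device.  sky2->rx_recycle comes from the patch below;
	 * recycle_timer and rx_packets_last are made-up fields, and the
	 * synchronization needed against the NAPI path is omitted.
	 */
	#include <linux/timer.h>
	#include <linux/skbuff.h>

	#define SKY2_RECYCLE_FLUSH	(5 * HZ)

	static void sky2_recycle_flush(unsigned long data)
	{
		struct sky2_port *sky2 = (struct sky2_port *)data;
		unsigned long rx = sky2->netdev->stats.rx_packets;

		/* No RX since the last tick: the cached skbs are just
		 * pinned memory, so hand them back to the allocator. */
		if (rx == sky2->rx_packets_last)
			skb_queue_purge(&sky2->rx_recycle);

		sky2->rx_packets_last = rx;
		mod_timer(&sky2->recycle_timer, jiffies + SKY2_RECYCLE_FLUSH);
	}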





Robert Olsson Oct. 21, 2008, 8:25 a.m. UTC | #3
Eric Dumazet writes:
 > Stephen Hemminger wrote:
 > > Add support for recycling tx buffers into receive buffers.
 > > This is experimental at this point.
 > > 
 > 
 > I really like this skb recycling

Hi,

Well the best and cleanest thing would be if the "global" recycler slab/slub 
was fast enough. Historically it seems like every time the link speed increases 
(now to 10g) alloc/kfree pops up on the profiles but that challenge has so far
been handled by slab/slub folks. Maybe we should consult them first...

Also there were some discussions about having packet objects in slab.
 
Cheers.
					--ro
Terry Oct. 21, 2008, 8:49 a.m. UTC | #4
Robert Olsson wrote:
> Eric Dumazet writes:
>  > Stephen Hemminger wrote:
>  > > Add support for recycling tx buffers into receive buffers.
>  > > This is experimental at this point.
>  > > 
>  > 
>  > I really like this skb recycling
>
> Hi,
>
> Well the best and cleanest thing would be if the "global" recycler slab/slub 
> was fast enough. Historically it seems like every time the link speed increases 
> (now to 10g) alloc/kfree pops up on the profiles but that challenge has so far
> been handled by slab/slub folks. Maybe we should consult them first...
>
> Also there were some discussions about having packet objects in slab.
>  
> Cheers.
> 					--ro
>
>
>   
Hi,
    Yeah. In the forwarding scenario, skb recycling should boost 
performance by avoiding the slowpath of slub alloc/free. But if using 
multiqueue hardware and assigning proper CPU affinities could keep it 
always in the fastpath of slub alloc/free, which is faster?

Terry


Eric Dumazet Oct. 21, 2008, 9:53 a.m. UTC | #5
Robert Olsson wrote:
> Eric Dumazet writes:
>  > Stephen Hemminger wrote:
>  > > Add support for recycling tx buffers into receive buffers.
>  > > This is experimental at this point.
>  > > 
>  > 
>  > I really like this skb recycling
> 
> Hi,
> 
> Well the best and cleanest thing would be if the "global" recycler slab/slub 
> was fast enough. Historically it seems like every time the link speed increases 
> (now to 10g) alloc/kfree pops up on the profiles but that challenge has so far
> been handled by slab/slub folks. Maybe we should consult them first...

Yes, but the typical problem is imbalance:

One CPU does the allocations (interrupts are handled by a given CPU),
another CPU (or CPUs) does the freeing.

Such a scenario is hard for slab/slub because it needs inter-CPU communication, while
slab/slub are optimized to deal with per-CPU queues/structures (fast path).

skb recycling is an interesting technique because a typical NIC handles RX & TX completion
on one CPU at the same time, so we can reduce this slab/slub inter-CPU handling.

> 
> Also there were some discussions about having packet objects in slab.
>

skb recycling has a (nice?) side effect on NUMA platforms, not for forwarding workloads
(which are IMHO not the majority of Linux machine workloads) but for typical servers.

Current NIC drivers actually allocate their RX rings using the memory node
close to the device.

With RX recycling, we are re-using skbs that were allocated by the process itself,
and this process is likely the one that will process received frames.

On loaded machines, I am not sure that the NIC having to access remote nodes is
the limiting factor, especially considering modern coalescing/mitigation, which tends to
delay interrupts anyway (and thus notification to the host)




Robert Olsson Oct. 21, 2008, 11:58 a.m. UTC | #6
Eric Dumazet writes:
 > 
 > One CPU does the allocations (interrupts are handled by a given CPU),
 > another CPU (or CPUs) does the freeing.
 > 
 > Such a scenario is hard for slab/slub because it needs inter-CPU communication, while
 > slab/slub are optimized to deal with per-CPU queues/structures (fast path).
 > 
 > skb recycling is an interesting technique because a typical NIC handles RX & TX completion
 > on one CPU at the same time, so we can reduce this slab/slub inter-CPU handling.

 Slab added a "magazine layer" when we last discussed this. This helped quite a bit. 
 In the longer run we'll decrease this passing of memory objects between CPUs. 
 Multiqueue (classifiers in combination with affinity) will be a very good start.

 > skb recycling has a (nice?) side effect on NUMA platforms, not for forwarding workloads
 > (which are IMHO not the majority of Linux machine workloads) but for typical servers.
 > 
 > Current NIC drivers actually allocate their RX rings using the memory node
 > close to the device.
 > 
 > With RX recycling, we are re-using skbs that were allocated by the process itself,
 > and this process is likely the one that will process received frames.
 > 
 > On loaded machines, I am not sure that the NIC having to access remote nodes is
 > the limiting factor, especially considering modern coalescing/mitigation, which tends to
 > delay interrupts anyway (and thus notification to the host)

From a bigger system perspective I fear we're getting suboptimal, as we are not 
using memory that's warm in the cache. I think we'll need a global recycler for 
this, not one private per driver or even per device.

FYI, I experimented quite heavily with skb recycling some years ago but was 
actually quite happy to kill this thread as slab memory performance increased.
 
It has all sorts of variants of skb-reuse:
ftp://robur.slu.se/pub/Linux/net-development/skb_recycling/

Well, the major advantage was that the PCI mapping could be reused, which was costly 
on some platforms.

The downside is the complexity of having extra memory systems, wrt low memory, cache, 
etc. As said, I would like to challenge the slab/slub folks... 


Cheers.
						--ro


Robert Olsson Oct. 21, 2008, 12:17 p.m. UTC | #7
Terry writes:
 > Hi 
 >     Yeah. In the forwarding scenario, skb recycling should boost 
 > performance by avoiding the slowpath of slub alloc/free. But if using 
 > multiqueue hardware and assigning proper CPU affinities could keep it 
 > always in the fastpath of slub alloc/free, which is faster?

 For a perfect setup with monotone work like forwarding you can make the skb 
 recycling a little faster.

 If you have many active NICs and the recycling list is per device, the situation 
 is probably different...

 Cheers.
						--ro
 
Terry Oct. 21, 2008, 12:34 p.m. UTC | #8
Robert Olsson wrote:
> Terry writes:
>  > Hi 
>  >     Yeah. In the forwarding scenario, skb recycling should boost 
>  > performance by avoiding the slowpath of slub alloc/free. But if using 
>  > multiqueue hardware and assigning proper CPU affinities could keep it 
>  > always in the fastpath of slub alloc/free, which is faster?
>
>  For a perfect setup with monotone work like forwarding you can make the skb 
>  recycling a little faster.
>
>  If you have many active NICs and the recycling list is per device, the situation 
>  is probably different...
>
>  Cheers.
> 						--ro
>   

Yeah, it's different. So I am considering cross-NIC skb recycling like the 
"global" recycler you mentioned. It may help. But it may introduce some 
race conditions too.
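A rough sketch of what a race-free cross-NIC pool could look like, using per-CPU queues (entirely hypothetical: nothing like this exists in the tree, the names are invented, per-CPU queue initialization at boot is omitted, and a real version would also have to cope with differing per-NIC buffer sizes):

	/* Invented sketch of a cross-NIC, per-CPU recycle pool.  One
	 * sk_buff_head per CPU means the put/get fast paths never touch
	 * another CPU's queue, sidestepping the races a single shared
	 * global list would have.
	 */
	#include <linux/skbuff.h>
	#include <linux/percpu.h>

	#define GLOBAL_RECYCLE_MAX	64

	static DEFINE_PER_CPU(struct sk_buff_head, recycle_pool);

	static void global_recycle_put(struct sk_buff *skb, int size)
	{
		struct sk_buff_head *pool = &get_cpu_var(recycle_pool);

		if (skb_queue_len(pool) < GLOBAL_RECYCLE_MAX &&
		    skb_recycle_check(skb, size))
			__skb_queue_head(pool, skb);
		else
			dev_kfree_skb_any(skb);
		put_cpu_var(recycle_pool);
	}

	static struct sk_buff *global_recycle_get(void)
	{
		struct sk_buff_head *pool = &get_cpu_var(recycle_pool);
		struct sk_buff *skb = __skb_dequeue(pool);

		put_cpu_var(recycle_pool);
		return skb;	/* NULL: caller falls back to netdev_alloc_skb() */
	}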

Rgds.

Terry

Eric Dumazet Oct. 21, 2008, 12:42 p.m. UTC | #9
Robert Olsson wrote:
> Terry writes:
>  > Hi 
>  >     Yeah. In the forwarding scenario, skb recycling should boost 
>  > performance by avoiding the slowpath of slub alloc/free. But if using 
>  > multiqueue hardware and assigning proper CPU affinities could keep it 
>  > always in the fastpath of slub alloc/free, which is faster?
> 
>  For a perfect setup with monotone work like forwarding you can make the skb 
>  recycling a little faster.
> 
>  If you have many active NICs and the recycling list is per device, the situation 
>  is probably different...
> 

I suspect all this skb recycling stuff in forwarding workloads is defeated anyway...

By the copybreak feature... (RX_COPY_THRESHOLD = 256 for example on tg3)

Checking drivers/net/tg3.c, we can even see that in the case of copying to a smaller skb,
we allocate it with a call to netdev_alloc_skb(), while it would be better
to use a plain alloc_skb() ...
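For reference, the copybreak pattern being discussed looks roughly like this (a schematic sketch, not the actual tg3 code; the helper name and threshold constant are illustrative):

	/* Schematic copybreak RX helper (not the actual tg3 code).  Small
	 * frames are copied into a fresh small skb so the full-size ring
	 * buffer can be reposted immediately -- which also means the ring
	 * skb is never freed, so tx-to-rx recycling never gets a customer.
	 */
	#include <linux/netdevice.h>
	#include <linux/skbuff.h>

	#define RX_COPY_THRESHOLD	256

	static struct sk_buff *rx_copybreak(struct net_device *dev,
					    struct sk_buff *ring_skb,
					    unsigned int len)
	{
		struct sk_buff *copy;

		if (len >= RX_COPY_THRESHOLD)
			return NULL;		/* pass the ring skb up instead */

		/* netdev_alloc_skb() targets the device's NUMA node; this copy
		 * goes straight up the stack on the local CPU, which is Eric's
		 * point: a plain alloc_skb(len + NET_IP_ALIGN, GFP_ATOMIC)
		 * would arguably be the better call here. */
		copy = netdev_alloc_skb(dev, len + NET_IP_ALIGN);
		if (!copy)
			return NULL;

		skb_reserve(copy, NET_IP_ALIGN);
		skb_copy_from_linear_data(ring_skb, skb_put(copy, len), len);
		return copy;			/* ring_skb stays in the RX ring */
	}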




Robert Olsson Oct. 21, 2008, 1:04 p.m. UTC | #10
Eric Dumazet writes:

 > I suspect all this skb recycling stuff in forwarding workloads is defeated anyway...
 > 
 > By the copybreak feature... (RX_COPY_THRESHOLD = 256 for example on tg3)


 Right. I usually turn it off, as forwarding is the major application. I remember 
 it was useful for you, as it saved you lots of memory.

 Thinking of what we should be able to do with a decent classifier on the NICs...
 
 We could probably determine if a packet is for localhost and tag the skb for all 
 sorts of new tricks. copy_threshold is one, and maybe we could even skip the dst cache on big 
 routers, etc.

 > Checking drivers/net/tg3.c, we can even see that in the case of copying to a smaller skb,
 > we allocate it with a call to netdev_alloc_skb(), while it would be better
 > to use a plain alloc_skb() ...

 I'll let others comment on this...

 Cheers.
						--ro

stephen hemminger Oct. 21, 2008, 3:09 p.m. UTC | #11
On Tue, 21 Oct 2008 07:54:18 +0200
Eric Dumazet <dada1@cosmosbay.com> wrote:

> Stephen Hemminger wrote:
> > Add support for recycling tx buffers into receive buffers.
> > This is experimental at this point.
> > 
> 
> I really like this skb recycling
> 
> For best performance, the driver should perform TX completion before RX completion, so that
> freshly added skbs in the recycle queue have a chance of being reused right away. Apparently sky2
> tx handling is fine-grained (events posted by the NIC).

The NIC always reports TX completion after RX, so recycling doesn't work that well.

> But still, this mechanism also uses more skbs per device, especially on dormant ones.
> 
> If your RX ring has 256 skbs, then adding the recycle queue can add 256 more skbs... one MB or even more...
> 
> Maybe we should flush the recycle queue every 5 seconds or so, especially if the device is dormant
> (no RX on it, little heartbeat TX)

I think dropping the recycle queue down to the typical max number of receive frames per interrupt,
which is between 4 and 16, would be enough.
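Relative to the patch below, that would be a small change to the recycle test in sky2_tx_complete(); a sketch with an illustrative constant (not part of the posted patch):

	/* Sketch: cap the recycle queue at the typical per-interrupt RX
	 * batch instead of at rx_pending (the constant and its value are
	 * illustrative, not something from the posted patch). */
	#define SKY2_RECYCLE_MAX	16

			if (recycle
			    && skb_queue_len(&sky2->rx_recycle) < SKY2_RECYCLE_MAX
			    && skb_recycle_check(skb, sky2->rx_data_size))
				__skb_queue_head(&sky2->rx_recycle, skb);
			else
				dev_kfree_skb_any(skb);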
Rick Jones Oct. 21, 2008, 7:59 p.m. UTC | #12
David Miller wrote:
> From: Stephen Hemminger <shemminger@vyatta.com>
> Date: Mon, 20 Oct 2008 19:09:22 -0700
> 
> 
>>Add support for recycling tx buffers into receive buffers.
>>This is experimental at this point.
>>
>>Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
> 
> 
> Experimental, but do you have any performance data at all?
> Just curious...

I've not had a good Emily Litella moment all day, so I'll ask - is there 
really that much in the way of suitable skbs which are completely free 
after transmit completion?

rick jones
Eric Dumazet Oct. 21, 2008, 8:15 p.m. UTC | #13
Rick Jones wrote:
> David Miller wrote:
>> From: Stephen Hemminger <shemminger@vyatta.com>
>> Date: Mon, 20 Oct 2008 19:09:22 -0700
>>
>>
>>> Add support for recycling tx buffers into receive buffers.
>>> This is experimental at this point.
>>>
>>> Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
>>
>>
>> Experimental, but do you have any performance data at all?
>> Just curious...
> 
> I've not had a good Emily Litella moment all day, so I'll ask - is there 
> really that much in the way of suitable skbs which are completely free 
> after transmit completion?

Or, to put it another way: say a VoIP RTP machine sends and receives 20,000
packets per second, each being 200 bytes long; are transmitted packets
correctly sized at sendto() time to be candidates for recycling?





stephen hemminger Oct. 21, 2008, 8:38 p.m. UTC | #14
On Tue, 21 Oct 2008 22:15:43 +0200
Eric Dumazet <dada1@cosmosbay.com> wrote:

> Rick Jones wrote:
> > David Miller wrote:
> >> From: Stephen Hemminger <shemminger@vyatta.com>
> >> Date: Mon, 20 Oct 2008 19:09:22 -0700
> >>
> >>
> >>> Add support for recycling tx buffers into receive buffers.
> >>> This is experimental at this point.
> >>>
> >>> Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
> >>
> >>
> >> Experimental, but do you have any performance data at all?
> >> Just curious...
> > 
> > I've not had a good Emily Litella moment all day, so I'll ask - is there 
> > really that much in the way of suitable skbs which are completely free 
> > after transmit completion?
> 
> Or, to put it another way: say a VoIP RTP machine sends and receives 20,000
> packets per second, each being 200 bytes long; are transmitted packets
> correctly sized at sendto() time to be candidates for recycling?

No. Most locally generated packets aren't going to be the right size because
they will be too small, cloned or fragmented.  It really only helps when forwarding.
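The checks behind that answer look roughly like the following (a paraphrase of the skb_recycle_check() logic for illustration, not the verbatim net/core/skbuff.c code):

	/* Roughly the tests a tx skb must pass before it can stand in for
	 * an rx buffer (paraphrase of skb_recycle_check(), not verbatim). */
	static int can_recycle(const struct sk_buff *skb, int rx_size)
	{
		if (skb_is_nonlinear(skb))		/* fragmented: paged data */
			return 0;
		if (skb_shared(skb) || skb_cloned(skb))	/* e.g. TCP keeps a clone */
			return 0;
		/* A locally generated 200-byte RTP datagram is sized to its
		 * payload, so it fails this test against a ~1500-byte rx
		 * buffer; a forwarded frame was rx-sized to begin with. */
		if (skb_end_pointer(skb) - skb->head < rx_size)
			return 0;
		return 1;
	}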
Jeff Garzik Oct. 22, 2008, 11:03 a.m. UTC | #15
Stephen Hemminger wrote:
> Add support for recycling tx buffers into receive buffers.
> This is experimental at this point.
> 
> Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
> 
> ---
> Since the merge window appears to have passed, this can wait until 2.6.29
> 

for what it's worth...  I am following DaveM's lead, and waiting until 
net-next opens to start taking patches for 2.6.29.  He wants us to focus 
on bug fixing for now.

So, please resend once the merge window opens... thanks.

	Jeff




Rick Jones Oct. 22, 2008, 11:58 p.m. UTC | #16
Stephen Hemminger wrote:
> On Tue, 21 Oct 2008 22:15:43 +0200
> Eric Dumazet <dada1@cosmosbay.com> wrote:
> 
> 
>>Rick Jones wrote:
>>> I've not had a good Emily Litella moment all day, so I'll ask - is
>>> there really that much in the way of suitable skbs which are
>>> completely free after transmit completion?
>>
>> Or, to put it another way: say a VoIP RTP machine sends and
>> receives 20,000 packets per second, each being 200 bytes long; are
>> transmitted packets correctly sized at sendto() time to be
>> candidates for recycling?
> 
> No. Most locally generated packets aren't going to be the right size
> because they will be too small, cloned or fragmented.  It really only
> helps when forwarding.

So we have a bit of "tension" between the desires of an end host vs 
those of a router, right?

rick jones
stephen hemminger Oct. 23, 2008, 12:05 a.m. UTC | #17
On Wed, 22 Oct 2008 16:58:17 -0700
Rick Jones <rick.jones2@hp.com> wrote:

> Stephen Hemminger wrote:
> > On Tue, 21 Oct 2008 22:15:43 +0200
> > Eric Dumazet <dada1@cosmosbay.com> wrote:
> > 
> > 
> >>Rick Jones wrote:
> >>> I've not had a good Emily Litella moment all day, so I'll ask - is
> >>> there really that much in the way of suitable skbs which are
> >>> completely free after transmit completion?
> >>
> >> Or, to put it another way: say a VoIP RTP machine sends and
> >> receives 20,000 packets per second, each being 200 bytes long; are
> >> transmitted packets correctly sized at sendto() time to be
> >> candidates for recycling?
> > 
> > No. Most locally generated packets aren't going to be the right size
> > because they will be too small, cloned or fragmented.  It really only
> > helps when forwarding.
> 
> So we have a bit of "tension" between the desires of an end host vs 
> those of a router, right?
> 
> rick jones

Not tension; it just means that on an end host this optimization does nothing.
Rick Jones Oct. 23, 2008, 12:17 a.m. UTC | #18
>>So we have a bit of "tension" between the desires of an end host vs 
>>those of a router, right?
>>
>>rick jones
> 
> 
> Not tension; it just means that on an end host this optimization does nothing.

It's free in terms of added cycles consumed?

rick jones
David Miller Oct. 23, 2008, 5:28 a.m. UTC | #19
From: Eric Dumazet <dada1@cosmosbay.com>
Date: Tue, 21 Oct 2008 11:53:58 +0200

> One CPU does the allocations (interrupts are handled by a given CPU),
> another CPU (or CPUs) does the freeing.
> 
> Such a scenario is hard for slab/slub because it needs inter-CPU communication, while
> slab/slub are optimized to deal with per-CPU queues/structures (fast path).
> 
> skb recycling is an interesting technique because a typical NIC handles RX & TX completion
> on one CPU at the same time, so we can reduce this slab/slub inter-CPU handling.

This matches my feeling on the situation.  And that's basically why I
applied the recycling infrastructure patches from Lennert :-)

> skb recycling has a (nice?) side effect on NUMA platforms, not for forwarding workloads
> (which are IMHO not the majority of Linux machine workloads) but for typical servers.
> 
> Current NIC drivers actually allocate their RX rings using the memory node
> close to the device.
> 
> With RX recycling, we are re-using skbs that were allocated by the process itself,
> and this process is likely the one that will process received frames.

I am not so sure about this, however.

It's just trading one NUMA penalty for another.

Also, the device is going to interrupt on a particular CPU within the NUMA
node normally.  The scheduler will notice that socket wakeups happen there
and likely migrate the task there.

Patch

--- a/drivers/net/sky2.c	2008-10-20 16:16:32.000000000 -0700
+++ b/drivers/net/sky2.c	2008-10-20 17:03:29.000000000 -0700
@@ -1165,6 +1165,8 @@  static void sky2_rx_clean(struct sky2_po
 			re->skb = NULL;
 		}
 	}
+
+	skb_queue_purge(&sky2->rx_recycle);
 }
 
 /* Basic MII support */
@@ -1250,26 +1252,24 @@  static struct sk_buff *sky2_rx_alloc(str
 	struct sk_buff *skb;
 	int i;
 
+	skb = __skb_dequeue(&sky2->rx_recycle);
+	if (!skb) {
+		skb = netdev_alloc_skb(sky2->netdev, sky2->rx_data_size);
+		if (!skb)
+			goto nomem;
+	}
+
 	if (sky2->hw->flags & SKY2_HW_RAM_BUFFER) {
-		unsigned char *start;
 		/*
 		 * Workaround for a bug in FIFO that cause hang
 		 * if the FIFO if the receive buffer is not 64 byte aligned.
 		 * The buffer returned from netdev_alloc_skb is
 		 * aligned except if slab debugging is enabled.
 		 */
-		skb = netdev_alloc_skb(sky2->netdev, sky2->rx_data_size + 8);
-		if (!skb)
-			goto nomem;
-		start = PTR_ALIGN(skb->data, 8);
+		unsigned char *start = PTR_ALIGN(skb->data, 8);
 		skb_reserve(skb, start - skb->data);
-	} else {
-		skb = netdev_alloc_skb(sky2->netdev,
-				       sky2->rx_data_size + NET_IP_ALIGN);
-		if (!skb)
-			goto nomem;
+	} else
 		skb_reserve(skb, NET_IP_ALIGN);
-	}
 
 	for (i = 0; i < sky2->rx_nfrags; i++) {
 		struct page *page = alloc_page(GFP_ATOMIC);
@@ -1344,7 +1344,9 @@  static int sky2_rx_start(struct sky2_por
 	if (size < ETH_HLEN)
 		size = ETH_HLEN;
 
-	sky2->rx_data_size = size;
+	/* Add padding for either IP or DMA alignment */
+	sky2->rx_data_size = size + 8;
+	skb_queue_head_init(&sky2->rx_recycle);
 
 	/* Fill Rx ring */
 	for (i = 0; i < sky2->rx_pending; i++) {
@@ -1684,7 +1686,7 @@  static int sky2_xmit_frame(struct sk_buf
  * NB: the hardware will tell us about partial completion of multi-part
  *     buffers so make sure not to free skb to early.
  */
-static void sky2_tx_complete(struct sky2_port *sky2, u16 done)
+static void sky2_tx_complete(struct sky2_port *sky2, u16 done, int recycle)
 {
 	struct net_device *dev = sky2->netdev;
 	struct pci_dev *pdev = sky2->hw->pdev;
@@ -1713,14 +1715,21 @@  static void sky2_tx_complete(struct sky2
 		}
 
 		if (le->ctrl & EOP) {
+			struct sk_buff *skb = re->skb;
+
 			if (unlikely(netif_msg_tx_done(sky2)))
 				printk(KERN_DEBUG "%s: tx done %u\n",
 				       dev->name, idx);
 
 			dev->stats.tx_packets++;
-			dev->stats.tx_bytes += re->skb->len;
+			dev->stats.tx_bytes += skb->len;
 
-			dev_kfree_skb_any(re->skb);
+			if (recycle
+			    && skb_queue_len(&sky2->rx_recycle) < sky2->rx_pending
+			    && skb_recycle_check(skb, sky2->rx_data_size))
+				__skb_queue_head(&sky2->rx_recycle, skb);
+			else
+				dev_kfree_skb_any(skb);
 			sky2->tx_next = RING_NEXT(idx, TX_RING_SIZE);
 		}
 	}
@@ -1738,7 +1747,7 @@  static void sky2_tx_clean(struct net_dev
 	struct sky2_port *sky2 = netdev_priv(dev);
 
 	netif_tx_lock_bh(dev);
-	sky2_tx_complete(sky2, sky2->tx_prod);
+	sky2_tx_complete(sky2, sky2->tx_prod, 0);
 	netif_tx_unlock_bh(dev);
 }
 
@@ -2291,7 +2300,7 @@  static inline void sky2_tx_done(struct n
 
 	if (netif_running(dev)) {
 		netif_tx_lock(dev);
-		sky2_tx_complete(sky2, last);
+		sky2_tx_complete(sky2, last, 1);
 		netif_tx_unlock(dev);
 	}
 }
--- a/drivers/net/sky2.h	2008-10-20 16:16:32.000000000 -0700
+++ b/drivers/net/sky2.h	2008-10-20 16:29:42.000000000 -0700
@@ -2020,7 +2020,8 @@  struct sky2_port {
 	u16		     tx_last_mss;
 	u32		     tx_tcpsum;
 
-	struct rx_ring_info  *rx_ring ____cacheline_aligned_in_smp;
+	struct sk_buff_head  rx_recycle;
+	struct rx_ring_info  *rx_ring;
 	struct sky2_rx_le    *rx_le;
 
 	u16		     rx_next;		/* next re to check */