From patchwork Mon Feb 16 03:36:44 2009
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 23199
X-Patchwork-Delegate: davem@davemloft.net
Date: Mon, 16 Feb 2009 11:36:44 +0800
From: Herbert Xu
To: Divy Le Ray
Cc: netdev@vger.kernel.org
Subject: Re: cxgb3: Replace LRO with GRO
Message-ID: <20090216033644.GA14431@gondor.apana.org.au>
References: <20090120101418.13898.57172.stgit@speedy5>
 <20090121082937.GA1116@gondor.apana.org.au>
 <49783F7E.6000202@chelsio.com>
In-Reply-To: <49783F7E.6000202@chelsio.com>
User-Agent: Mutt/1.5.18 (2008-05-17)
X-Mailing-List: netdev@vger.kernel.org

On Thu, Jan 22, 2009 at 01:42:22AM -0800, Divy Le Ray wrote:
>
> I'm now getting about an average 5.7Gb/s, oprofile typically showing:
>
> 37.294500 copy_user_generic_unrolled vmlinux
> 8.733800  process_responses          cxgb3.ko
> 6.123700  refill_fl                  cxgb3.ko
> 5.006900  put_page                   vmlinux
> 3.400700  napi_fraginfo_skb          vmlinux
> 2.484600  tcp_gro_receive            vmlinux
> 2.308900  inet_gro_receive           vmlinux
> 2.196000  free_hot_cold_page         vmlinux
> 2.032900  skb_gro_header             vmlinux
> 1.970100  get_page_from_freelist     vmlinux
> 1.869700  skb_copy_datagram_iovec    vmlinux
> 1.380300  dev_gro_receive            vmlinux
> 1.242300  tcp_recvmsg                vmlinux
> 1.079200  irq_entries_start          vmlinux
> 1.003900  get_pageblock_flags_group  vmlinux
> 0.991300  skb_gro_receive            vmlinux
> 0.878400  _raw_spin_lock             vmlinux
> 0.878400  memcpy                     vmlinux

When you get a chance, can you see if this patch makes any
difference at all?

Thanks,
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index d7efaf9..6a542fa 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -2586,8 +2586,10 @@ int skb_gro_receive(struct sk_buff **head, struct sk_buff *skb)
 {
 	struct sk_buff *p = *head;
 	struct sk_buff *nskb;
+	skb_frag_t *frag;
 	unsigned int headroom;
 	unsigned int len = skb_gro_len(skb);
+	int i;
 
 	if (p->len + len >= 65536)
 		return -E2BIG;
@@ -2604,9 +2606,9 @@ int skb_gro_receive(struct sk_buff **head, struct sk_buff *skb)
 		skb_shinfo(skb)->frags[0].size -=
 			skb_gro_offset(skb) - skb_headlen(skb);
 
-		memcpy(skb_shinfo(p)->frags + skb_shinfo(p)->nr_frags,
-		       skb_shinfo(skb)->frags,
-		       skb_shinfo(skb)->nr_frags * sizeof(skb_frag_t));
+		frag = skb_shinfo(p)->frags + skb_shinfo(p)->nr_frags;
+		for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
+			*frag++ = skb_shinfo(skb)->frags[i];
 
 		skb_shinfo(p)->nr_frags += skb_shinfo(skb)->nr_frags;
 		skb_shinfo(skb)->nr_frags = 0;
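[Editor's note: for readers outside the kernel tree, the sketch below shows the same transformation the patch makes, i.e. replacing one memcpy over the fragment descriptor array with per-element struct assignments. It is a standalone userspace illustration; the "struct frag" and the sizes used are stand-ins, not the kernel's skb_frag_t or skb_shared_info.]

/*
 * Minimal sketch: copy an array of small fragment descriptors with
 * per-element struct assignments instead of one memcpy over the array.
 * "struct frag" is only an illustrative stand-in for skb_frag_t.
 */
#include <stdio.h>
#include <string.h>

struct frag {
	void *page;
	unsigned int page_offset;
	unsigned int size;
};

#define MAX_FRAGS 18		/* stand-in for MAX_SKB_FRAGS */

static struct frag dst[MAX_FRAGS];
static struct frag src[MAX_FRAGS];

int main(void)
{
	unsigned int dst_nr = 3;	/* frags already in the head skb */
	unsigned int src_nr = 4;	/* frags in the skb being merged */
	struct frag *frag;
	unsigned int i;

	/* old style: one bulk memcpy over the source fragment array */
	memcpy(dst + dst_nr, src, src_nr * sizeof(struct frag));

	/* new style, as in the patch: per-element struct assignment */
	frag = dst + dst_nr;
	for (i = 0; i < src_nr; i++)
		*frag++ = src[i];

	printf("appended %u frags after the first %u\n", src_nr, dst_nr);
	return 0;
}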