From patchwork Fri Sep 3 09:09:32 2010
X-Patchwork-Submitter: Eric Dumazet
X-Patchwork-Id: 63597
X-Patchwork-Delegate: davem@davemloft.net
Subject: [PATCH net-next-2.6] net: pskb_expand_head() optimization
From: Eric Dumazet
To: David Miller
Cc: netdev@vger.kernel.org
In-Reply-To: <1283492880.3699.1437.camel@edumazet-laptop>
References: <20100902.204332.02275687.davem@davemloft.net>
	 <1283492880.3699.1437.camel@edumazet-laptop>
Date: Fri, 03 Sep 2010 11:09:32 +0200
Message-ID: <1283504972.2453.257.camel@edumazet-laptop>

On Friday, September 3, 2010 at 07:48 +0200, Eric Dumazet wrote:

> David, I had the same idea some days ago when reviewing this code,
> but I came to the conclusion that we could not avoid the get_page() /
> put_page() on skb_shinfo(skb)->frags[i].page. I thought it was not
> worth trying to avoid the frag_list grab/release operation.
> Here is the patch I cooked, for net-next-2.6.

[PATCH net-next-2.6] net: pskb_expand_head() optimization

pskb_expand_head() blindly takes references on the page fragments before
calling skb_release_data(), which may then immediately release those same
references. We can add a fast path that avoids these atomic operations
when we own the last reference on skb->head.
Based on a previous patch from David.

Signed-off-by: Eric Dumazet
---
 net/core/skbuff.c |   25 ++++++++++++++++++++-----
 1 file changed, 20 insertions(+), 5 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 231dff0..59b96fe 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -781,6 +781,7 @@ int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail,
 	u8 *data;
 	int size = nhead + (skb_end_pointer(skb) - skb->head) + ntail;
 	long off;
+	bool fastpath;
 
 	BUG_ON(nhead < 0);
 
@@ -802,14 +803,28 @@ int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail,
 	       skb_shinfo(skb),
 	       offsetof(struct skb_shared_info,
 			frags[skb_shinfo(skb)->nr_frags]));
 
-	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
-		get_page(skb_shinfo(skb)->frags[i].page);
+	/* Check if we can avoid taking references on fragments if we own
+	 * the last reference on skb->head. (see skb_release_data())
+	 */
+	if (!skb->cloned)
+		fastpath = true;
+	else {
+		int delta = skb->nohdr ? (1 << SKB_DATAREF_SHIFT) + 1 : 1;
 
-	if (skb_has_frag_list(skb))
-		skb_clone_fraglist(skb);
+		fastpath = atomic_read(&skb_shinfo(skb)->dataref) == delta;
+	}
 
-	skb_release_data(skb);
+	if (fastpath) {
+		kfree(skb->head);
+	} else {
+		for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
+			get_page(skb_shinfo(skb)->frags[i].page);
+		if (skb_has_frag_list(skb))
+			skb_clone_fraglist(skb);
+
+		skb_release_data(skb);
+	}
 	off = (data + nhead) - skb->head;
 
 	skb->head     = data;