From patchwork Thu Jul 15 14:24:23 2010
X-Patchwork-Submitter: Junchang Wang
X-Patchwork-Id: 58985
X-Patchwork-Delegate: davem@davemloft.net
Date: Thu, 15 Jul 2010 22:24:23 +0800
From: Junchang Wang
To: romieu@fr.zoreil.com, netdev@vger.kernel.org
Subject: Question about way that NICs deliver packets to the kernel
Message-ID: <20100715142418.GA26491@host-a-229.ustcsz.edu.cn>

Hi list,

My understanding of the way that NICs deliver packets to the kernel is as follows. Correct me if any of this is wrong. Thanks.

1) The device buffer is fixed. When the kernel is notified of the arrival of a new packet, it dynamically allocates a new skb and copies the packet into it. For example, 8139too.

2) The device buffer is mapped with streaming DMA. When the kernel is notified of the arrival of a new packet, it unmaps the previously mapped region and hands the buffer itself up the stack. Obviously, there is NO memcpy operation; the additional cost is the streaming DMA map/unmap operations. For example, e100 and e1000.

Here come my questions:

1) Is there a principle indicating which approach is better? Are streaming DMA map/unmap operations more expensive than a memcpy?

2) Why does r8169 favor the first approach even though it supports both? I converted r8169 to the second one and got a 5% performance boost. Below are the results of a netperf TCP_STREAM test with a 1.6 KB packet length:

            scheme 1    scheme 2    Imp.
   r8169    683M        718M        5%

The following patch shows what I did. Thanks in advance.
--Junchang

---
diff --git a/drivers/net/r8169.c b/drivers/net/r8169.c
index 239d7ef..707876f 100644
--- a/drivers/net/r8169.c
+++ b/drivers/net/r8169.c
@@ -4556,15 +4556,9 @@ static int rtl8169_rx_interrupt(struct net_device *dev,
 
 			rtl8169_rx_csum(skb, desc);
 
-			if (rtl8169_try_rx_copy(&skb, tp, pkt_size, addr)) {
-				pci_dma_sync_single_for_device(pdev, addr,
-					pkt_size, PCI_DMA_FROMDEVICE);
-				rtl8169_mark_to_asic(desc, tp->rx_buf_sz);
-			} else {
-				pci_unmap_single(pdev, addr, tp->rx_buf_sz,
-						 PCI_DMA_FROMDEVICE);
-				tp->Rx_skbuff[entry] = NULL;
-			}
+			pci_unmap_single(pdev, addr, tp->rx_buf_sz,
+					 PCI_DMA_FROMDEVICE);
+			tp->Rx_skbuff[entry] = NULL;
 
 			skb_put(skb, pkt_size);
 			skb->protocol = eth_type_trans(skb, dev);