From patchwork Wed Oct 30 07:00:00 2013
X-Patchwork-Submitter: Rafał Miłecki
X-Patchwork-Id: 287141
X-Patchwork-Delegate: davem@davemloft.net
From: Rafał Miłecki
To: netdev@vger.kernel.org, "David S. Miller"
Cc: openwrt-devel@lists.openwrt.org, Nathan Hintz, Rafał Miłecki
Subject: [PATCH] bgmac: pass received packet to the netif instead of copying it
Date: Wed, 30 Oct 2013 08:00:00 +0100
Message-Id: <1383116400-29905-1-git-send-email-zajec5@gmail.com>
X-Mailing-List: netdev@vger.kernel.org

Copying whole packets with skb_copy_from_linear_data_offset is a pretty
bad idea: the CPU was spending time in __copy_user_common and network
performance suffered. With the new solution the iperf-measured speed
increased from 116 Mb/s to 134 Mb/s.

Signed-off-by: Rafał Miłecki
---
Changes since [RFC TRY#2]:
1) Fixed arguments alignment
2) Dropped the code fixing up the old slot in case of a
   bgmac_dma_rx_skb_for_slot failure. Thanks to Nathan's patch,
   bgmac_dma_rx_skb_for_slot no longer changes anything in the slot if it
   fails.
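
Note for readers less familiar with the RX path: the change follows the
usual "refill and pass up" pattern — give the ring slot a fresh buffer
first, and only then hand the DMA-filled skb to the stack, so no copy is
needed. A rough sketch of that pattern is below (an illustration only, not
the driver code verbatim; my_refill_slot() is a hypothetical stand-in for
bgmac_dma_rx_skb_for_slot(), and the DMA sync/unmap handling is omitted):

	/*
	 * Illustrative sketch only; assumes the bgmac driver's headers and
	 * types (struct bgmac, struct bgmac_slot_info, BGMAC_RX_FRAME_OFFSET).
	 * my_refill_slot() is a hypothetical helper standing in for
	 * bgmac_dma_rx_skb_for_slot().
	 */
	static int rx_pass_up_one(struct bgmac *bgmac,
				  struct bgmac_slot_info *slot, u16 len)
	{
		struct sk_buff *skb = slot->skb;	/* buffer DMA just filled */

		/* Give the ring slot a brand new buffer first... */
		if (my_refill_slot(bgmac, slot))
			return -ENOMEM;		/* keep (and reuse) the old skb */

		/* ...then the old skb is ours: strip the HW header, pass it up */
		skb_put(skb, BGMAC_RX_FRAME_OFFSET + len);
		skb_pull(skb, BGMAC_RX_FRAME_OFFSET);
		skb_checksum_none_assert(skb);
		skb->protocol = eth_type_trans(skb, bgmac->net_dev);
		netif_receive_skb(skb);

		return 0;
	}

The error path is the important design point: when the refill fails, the
old buffer must stay in the ring so no slot is lost (the patch re-poisons
its header and syncs it back to the device) and the packet is simply
dropped.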
---
 drivers/net/ethernet/broadcom/bgmac.c |   66 +++++++++++++++++++--------------
 1 file changed, 39 insertions(+), 27 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bgmac.c b/drivers/net/ethernet/broadcom/bgmac.c
index 9a3fc89..eeb651d 100644
--- a/drivers/net/ethernet/broadcom/bgmac.c
+++ b/drivers/net/ethernet/broadcom/bgmac.c
@@ -315,7 +315,6 @@ static int bgmac_dma_rx_read(struct bgmac *bgmac, struct bgmac_dma_ring *ring,
 		struct device *dma_dev = bgmac->core->dma_dev;
 		struct bgmac_slot_info *slot = &ring->slots[ring->start];
 		struct sk_buff *skb = slot->skb;
-		struct sk_buff *new_skb;
 		struct bgmac_rx_header *rx;
 		u16 len, flags;
 
@@ -328,38 +327,51 @@ static int bgmac_dma_rx_read(struct bgmac *bgmac, struct bgmac_dma_ring *ring,
 		len = le16_to_cpu(rx->len);
 		flags = le16_to_cpu(rx->flags);
 
-		/* Check for poison and drop or pass the packet */
-		if (len == 0xdead && flags == 0xbeef) {
-			bgmac_err(bgmac, "Found poisoned packet at slot %d, DMA issue!\n",
-				  ring->start);
-		} else {
+		do {
+			dma_addr_t old_dma_addr = slot->dma_addr;
+			int err;
+
+			/* Check for poison and drop or pass the packet */
+			if (len == 0xdead && flags == 0xbeef) {
+				bgmac_err(bgmac, "Found poisoned packet at slot %d, DMA issue!\n",
+					  ring->start);
+				dma_sync_single_for_device(dma_dev,
+							   slot->dma_addr,
+							   BGMAC_RX_BUF_SIZE,
+							   DMA_FROM_DEVICE);
+				break;
+			}
+
 			/* Omit CRC. */
 			len -= ETH_FCS_LEN;
 
-			new_skb = netdev_alloc_skb_ip_align(bgmac->net_dev, len);
-			if (new_skb) {
-				skb_put(new_skb, len);
-				skb_copy_from_linear_data_offset(skb, BGMAC_RX_FRAME_OFFSET,
-								 new_skb->data,
-								 len);
-				skb_checksum_none_assert(skb);
-				new_skb->protocol =
-					eth_type_trans(new_skb, bgmac->net_dev);
-				netif_receive_skb(new_skb);
-				handled++;
-			} else {
-				bgmac->net_dev->stats.rx_dropped++;
-				bgmac_err(bgmac, "Allocation of skb for copying packet failed!\n");
+			/* Prepare new skb as replacement */
+			err = bgmac_dma_rx_skb_for_slot(bgmac, slot);
+			if (err) {
+				/* Poison the old skb */
+				rx->len = cpu_to_le16(0xdead);
+				rx->flags = cpu_to_le16(0xbeef);
+
+				dma_sync_single_for_device(dma_dev,
+							   slot->dma_addr,
+							   BGMAC_RX_BUF_SIZE,
+							   DMA_FROM_DEVICE);
+				break;
 			}
+			bgmac_dma_rx_setup_desc(bgmac, ring, ring->start);
 
-			/* Poison the old skb */
-			rx->len = cpu_to_le16(0xdead);
-			rx->flags = cpu_to_le16(0xbeef);
-		}
+			/* Unmap old skb, we'll pass it to the netfif */
+			dma_unmap_single(dma_dev, old_dma_addr,
+					 BGMAC_RX_BUF_SIZE, DMA_FROM_DEVICE);
+
+			skb_put(skb, BGMAC_RX_FRAME_OFFSET + len);
+			skb_pull(skb, BGMAC_RX_FRAME_OFFSET);
 
-		/* Make it back accessible to the hardware */
-		dma_sync_single_for_device(dma_dev, slot->dma_addr,
-					   BGMAC_RX_BUF_SIZE, DMA_FROM_DEVICE);
+			skb_checksum_none_assert(skb);
+			skb->protocol = eth_type_trans(skb, bgmac->net_dev);
+			netif_receive_skb(skb);
+			handled++;
+		} while (0);
 
 		if (++ring->start >= BGMAC_RX_RING_SLOTS)
 			ring->start = 0;