From patchwork Tue Jan 17 16:35:54 2017
X-Patchwork-Submitter: Alexander H Duyck
X-Patchwork-Id: 716294
X-Patchwork-Delegate: jeffrey.t.kirsher@intel.com
From: Alexander Duyck
To: intel-wired-lan@lists.osuosl.org, jeffrey.t.kirsher@intel.com
Date: Tue, 17 Jan 2017 08:35:54 -0800
Message-ID: <20170117163550.5423.60153.stgit@localhost.localdomain>
In-Reply-To: <20170117163401.5423.37993.stgit@localhost.localdomain>
References: <20170117163401.5423.37993.stgit@localhost.localdomain>
User-Agent: StGit/0.17.1-dirty
MIME-Version: 1.0
Subject: [Intel-wired-lan] [next PATCH v2 03/11] ixgbe: Update driver to
 make use of DMA attributes in Rx path
List-Id: Intel Wired Ethernet Linux Kernel Driver Development

From: Alexander Duyck

This patch adds support for DMA_ATTR_SKIP_CPU_SYNC and
DMA_ATTR_WEAK_ORDERING. Enabling both of these for the Rx path yields
performance improvements on architectures that implement either one,
because page mapping and unmapping now only has to sync the region that
is actually in use rather than the entire buffer. In addition, enabling
the weak ordering attribute provides a performance improvement on
architectures that can associate a memory ordering with a DMA buffer,
such as SPARC.

Signed-off-by: Alexander Duyck
Tested-by: Andrew Bowers
---
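For reviewers coming to the DMA attributes API cold, here is a minimal,
hypothetical sketch of the lifecycle the Rx path adopts: map once with the
relaxed attributes, sync only the in-use region in each direction, and
unmap with the same attributes. rx_page_sketch() and RX_DMA_ATTR are
illustrative names for this note only, not code from this patch.

#include <linux/dma-mapping.h>
#include <linux/gfp.h>

/* illustrative mirror of the IXGBE_RX_DMA_ATTR macro added below */
#define RX_DMA_ATTR	(DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)

static int rx_page_sketch(struct device *dev)
{
	struct page *page = alloc_page(GFP_KERNEL);
	dma_addr_t dma;

	if (!page)
		return -ENOMEM;

	/* DMA_ATTR_SKIP_CPU_SYNC: no implicit whole-page sync at map time;
	 * DMA_ATTR_WEAK_ORDERING: allow relaxed ordering where supported
	 */
	dma = dma_map_page_attrs(dev, page, 0, PAGE_SIZE,
				 DMA_FROM_DEVICE, RX_DMA_ATTR);
	if (dma_mapping_error(dev, dma)) {
		__free_page(page);
		return -ENOMEM;
	}

	/* before handing the buffer to hardware, sync only the region the
	 * descriptor will point at (half a page here), not the whole mapping
	 */
	dma_sync_single_range_for_device(dev, dma, 0, PAGE_SIZE / 2,
					 DMA_FROM_DEVICE);

	/* ... hardware DMAs a received frame into the buffer ... */

	/* before the CPU reads the frame, sync the same region back */
	dma_sync_single_range_for_cpu(dev, dma, 0, PAGE_SIZE / 2,
				      DMA_FROM_DEVICE);

	/* unmap with the same attributes; again no implicit full sync */
	dma_unmap_page_attrs(dev, dma, PAGE_SIZE, DMA_FROM_DEVICE,
			     RX_DMA_ATTR);
	__free_page(page);
	return 0;
}

Because map/unmap no longer sync implicitly, the driver must sync before
every handoff itself; that is why the dma_sync_single_range_for_device()
call moves out of ixgbe_reuse_rx_page() and into ixgbe_alloc_rx_buffers()
below, where it covers both freshly mapped and recycled pages.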
 drivers/net/ethernet/intel/ixgbe/ixgbe.h      |    3 +
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   56 +++++++++++++++++--------
 2 files changed, 40 insertions(+), 19 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
index 9c6ccfc34177..97e74deecae2 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
@@ -107,6 +107,9 @@ /* How many Rx Buffers do we bundle into one write to the hardware ?
  */
 #define IXGBE_RX_BUFFER_WRITE	16	/* Must be power of 2 */
 
+#define IXGBE_RX_DMA_ATTR \
+	(DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)
+
 enum ixgbe_tx_flags {
 	/* cmd_type flags */
 	IXGBE_TX_FLAGS_HW_VLAN	= 0x01,
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index cfc097d07661..8ee8fcf0fe21 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -1583,8 +1583,10 @@ static bool ixgbe_alloc_mapped_page(struct ixgbe_ring *rx_ring,
 	}
 
 	/* map page for use */
-	dma = dma_map_page(rx_ring->dev, page, 0,
-			   ixgbe_rx_pg_size(rx_ring), DMA_FROM_DEVICE);
+	dma = dma_map_page_attrs(rx_ring->dev, page, 0,
+				 ixgbe_rx_pg_size(rx_ring),
+				 DMA_FROM_DEVICE,
+				 IXGBE_RX_DMA_ATTR);
 
 	/*
 	 * if mapping failed free memory back to system since
@@ -1627,6 +1629,12 @@ void ixgbe_alloc_rx_buffers(struct ixgbe_ring *rx_ring, u16 cleaned_count)
 		if (!ixgbe_alloc_mapped_page(rx_ring, bi))
 			break;
 
+		/* sync the buffer for use by the device */
+		dma_sync_single_range_for_device(rx_ring->dev, bi->dma,
+						 bi->page_offset,
+						 ixgbe_rx_bufsz(rx_ring),
+						 DMA_FROM_DEVICE);
+
 		/*
 		 * Refresh the desc even if buffer_addrs didn't change
 		 * because each write-back erases this info.
@@ -1849,8 +1857,10 @@ static void ixgbe_dma_sync_frag(struct ixgbe_ring *rx_ring,
 {
 	/* if the page was released unmap it, else just sync our portion */
 	if (unlikely(IXGBE_CB(skb)->page_released)) {
-		dma_unmap_page(rx_ring->dev, IXGBE_CB(skb)->dma,
-			       ixgbe_rx_pg_size(rx_ring), DMA_FROM_DEVICE);
+		dma_unmap_page_attrs(rx_ring->dev, IXGBE_CB(skb)->dma,
+				     ixgbe_rx_pg_size(rx_ring),
+				     DMA_FROM_DEVICE,
+				     IXGBE_RX_DMA_ATTR);
 		IXGBE_CB(skb)->page_released = false;
 	} else {
 		struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[0];
@@ -1934,12 +1944,6 @@ static void ixgbe_reuse_rx_page(struct ixgbe_ring *rx_ring,
 
 	/* transfer page from old buffer to new buffer */
 	*new_buff = *old_buff;
-
-	/* sync the buffer for use by the device */
-	dma_sync_single_range_for_device(rx_ring->dev, new_buff->dma,
-					 new_buff->page_offset,
-					 ixgbe_rx_bufsz(rx_ring),
-					 DMA_FROM_DEVICE);
 }
 
 static inline bool ixgbe_page_is_reserved(struct page *page)
@@ -2106,9 +2110,10 @@ static struct sk_buff *ixgbe_fetch_rx_buffer(struct ixgbe_ring *rx_ring,
 			IXGBE_CB(skb)->page_released = true;
 		} else {
 			/* we are not reusing the buffer so unmap it */
-			dma_unmap_page(rx_ring->dev, rx_buffer->dma,
-				       ixgbe_rx_pg_size(rx_ring),
-				       DMA_FROM_DEVICE);
+			dma_unmap_page_attrs(rx_ring->dev, rx_buffer->dma,
+					     ixgbe_rx_pg_size(rx_ring),
+					     DMA_FROM_DEVICE,
+					     IXGBE_RX_DMA_ATTR);
 		}
 
 		/* clear contents of buffer_info */
@@ -4942,10 +4947,11 @@ static void ixgbe_clean_rx_ring(struct ixgbe_ring *rx_ring)
 		if (rx_buffer->skb) {
 			struct sk_buff *skb = rx_buffer->skb;
 			if (IXGBE_CB(skb)->page_released)
-				dma_unmap_page(dev,
-					       IXGBE_CB(skb)->dma,
-					       ixgbe_rx_bufsz(rx_ring),
-					       DMA_FROM_DEVICE);
+				dma_unmap_page_attrs(dev,
+						     IXGBE_CB(skb)->dma,
+						     ixgbe_rx_pg_size(rx_ring),
+						     DMA_FROM_DEVICE,
+						     IXGBE_RX_DMA_ATTR);
 			dev_kfree_skb(skb);
 			rx_buffer->skb = NULL;
 		}
@@ -4953,8 +4959,20 @@ static void ixgbe_clean_rx_ring(struct ixgbe_ring *rx_ring)
 		if (!rx_buffer->page)
 			continue;
 
-		dma_unmap_page(dev, rx_buffer->dma,
-			       ixgbe_rx_pg_size(rx_ring), DMA_FROM_DEVICE);
+		/* Invalidate cache lines that may have been written to by
+		 * device so that we avoid corrupting memory.
+		 */
+		dma_sync_single_range_for_cpu(rx_ring->dev,
+					      rx_buffer->dma,
+					      rx_buffer->page_offset,
+					      ixgbe_rx_bufsz(rx_ring),
+					      DMA_FROM_DEVICE);
+
+		/* free resources associated with mapping */
+		dma_unmap_page_attrs(dev, rx_buffer->dma,
+				     ixgbe_rx_pg_size(rx_ring),
+				     DMA_FROM_DEVICE,
+				     IXGBE_RX_DMA_ATTR);
+
 		__free_pages(rx_buffer->page, ixgbe_rx_pg_order(rx_ring));
 
 		rx_buffer->page = NULL;