[v4,1/4] i40e: Sync DMA region prior to skbuff allocation

Message ID 20161217134000.31640-2-bjorn.topel@gmail.com
State Changes Requested
Delegated to: Jeff Kirsher

Commit Message

Björn Töpel Dec. 17, 2016, 1:39 p.m. UTC
From: Björn Töpel <bjorn.topel@intel.com>

This patch prepares i40e_fetch_rx_buffer() for upcoming XDP support,
where the device buffers need to be accessed prior to skbuff
allocation.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
---
 drivers/net/ethernet/intel/i40e/i40e_txrx.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)
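
For context, a minimal sketch of the kind of pre-skb consumer this
reordering enables. Everything below is illustrative and not part of this
series: i40e_run_xdp_sketch() and the rx_ring->xdp_prog field are assumed
names, while bpf_prog_run_xdp() and struct xdp_buff are the existing kernel
helpers from <linux/filter.h>. Only the ordering (sync for CPU first, read
the page afterwards) reflects this patch.

#include <linux/filter.h>	/* bpf_prog_run_xdp(), struct xdp_buff */
#include "i40e_txrx.h"		/* i40e_ring, i40e_rx_buffer */

/* Hypothetical sketch, not part of this patch: once the DMA sync runs
 * before skb allocation, an XDP program can inspect the packet while
 * it still sits in the page fragment.
 */
static u32 i40e_run_xdp_sketch(struct i40e_ring *rx_ring,
			       struct i40e_rx_buffer *rx_buffer,
			       unsigned int size)
{
	struct bpf_prog *xdp_prog = READ_ONCE(rx_ring->xdp_prog); /* assumed field */
	struct xdp_buff xdp;

	if (!xdp_prog)
		return XDP_PASS;

	/* Reading the page is safe only because
	 * dma_sync_single_range_for_cpu() has already run.
	 */
	xdp.data = page_address(rx_buffer->page) + rx_buffer->page_offset;
	xdp.data_end = xdp.data + size;

	return bpf_prog_run_xdp(xdp_prog, &xdp);
}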

Comments

Bowers, AndrewX Dec. 23, 2016, 5:35 p.m. UTC | #1
> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces@lists.osuosl.org] On
> Behalf Of Björn Töpel
> Sent: Saturday, December 17, 2016 5:40 AM
> To: Kirsher, Jeffrey T <jeffrey.t.kirsher@intel.com>; intel-wired-
> lan@lists.osuosl.org
> Cc: daniel@iogearbox.net; Topel, Bjorn <bjorn.topel@intel.com>; Karlsson,
> Magnus <magnus.karlsson@intel.com>
> Subject: [Intel-wired-lan] [PATCH v4 1/4] i40e: Sync DMA region prior to
> skbuff allocation
>
> From: Björn Töpel <bjorn.topel@intel.com>
>
> This patch prepares i40e_fetch_rx_buffer() for upcoming XDP support,
> where the device buffers need to be accessed prior to skbuff allocation.
>
> Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
> ---
>  drivers/net/ethernet/intel/i40e/i40e_txrx.c | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>

Does not break base driver

Patch

diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 9de05a0e8201..8bdc95c9e9b7 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -1646,6 +1646,13 @@  struct sk_buff *i40e_fetch_rx_buffer(struct i40e_ring *rx_ring,
 	page = rx_buffer->page;
 	prefetchw(page);
 
+	/* we are reusing so sync this buffer for CPU use */
+	dma_sync_single_range_for_cpu(rx_ring->dev,
+				      rx_buffer->dma,
+				      rx_buffer->page_offset,
+				      size,
+				      DMA_FROM_DEVICE);
+
 	if (likely(!skb)) {
 		void *page_addr = page_address(page) + rx_buffer->page_offset;
 
@@ -1671,13 +1678,6 @@  struct sk_buff *i40e_fetch_rx_buffer(struct i40e_ring *rx_ring,
 		prefetchw(skb->data);
 	}
 
-	/* we are reusing so sync this buffer for CPU use */
-	dma_sync_single_range_for_cpu(rx_ring->dev,
-				      rx_buffer->dma,
-				      rx_buffer->page_offset,
-				      size,
-				      DMA_FROM_DEVICE);
-
 	/* pull page into skb */
 	if (i40e_add_rx_frag(rx_ring, rx_buffer, size, skb)) {
 		/* hand second half of page back to the ring */