Patchwork [3.5.y.z,extended,stable] Patch "sfc: Properly sync RX DMA buffer when it is not the last in" has been added to staging queue

Submitter Luis Henriques
Date March 25, 2013, 6:03 p.m.
Message ID <>
Permalink /patch/230939/
State New


Luis Henriques - March 25, 2013, 6:03 p.m.
This is a note to let you know that I have just added a patch titled

    sfc: Properly sync RX DMA buffer when it is not the last in

to the linux-3.5.y-queue branch of the 3.5.y.z extended stable tree 
which can be found at:;a=shortlog;h=refs/heads/linux-3.5.y-queue

If you, or anyone else, feel it should not be added to this tree, please
reply to this email.

For more information about the 3.5.y.z tree, see



From 8f43b31450fb5973f86febdaa3f579dad8774aa9 Mon Sep 17 00:00:00 2001
From: Ben Hutchings <>
Date: Thu, 20 Dec 2012 18:48:20 +0000
Subject: [PATCH] sfc: Properly sync RX DMA buffer when it is not the last in
 the page

commit 3a68f19d7afb80f548d016effbc6ed52643a8085 upstream.

We may currently allocate two RX DMA buffers to a page, and only unmap
the page when the second is completed.  We do not sync the first RX
buffer to be completed; this can result in packet loss or corruption
if the last RX buffer completed in a NAPI poll is the first in a page
and is not DMA-coherent.  (In the middle of a NAPI poll, we will
handle the following RX completion and unmap the page *before* looking
at the content of the first buffer.)

Signed-off-by: Ben Hutchings <>
Signed-off-by: Luis Henriques <>
---
 drivers/net/ethernet/sfc/rx.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)



diff --git a/drivers/net/ethernet/sfc/rx.c b/drivers/net/ethernet/sfc/rx.c
index 243e91f..622b0d7 100644
--- a/drivers/net/ethernet/sfc/rx.c
+++ b/drivers/net/ethernet/sfc/rx.c
@@ -240,7 +240,8 @@  static int efx_init_rx_buffers_page(struct efx_rx_queue *rx_queue)
 }

 static void efx_unmap_rx_buffer(struct efx_nic *efx,
-				struct efx_rx_buffer *rx_buf)
+				struct efx_rx_buffer *rx_buf,
+				unsigned int used_len)
 {
 	if ((rx_buf->flags & EFX_RX_BUF_PAGE) && rx_buf->u.page) {
 		struct efx_rx_page_state *state;
@@ -251,6 +252,10 @@  static void efx_unmap_rx_buffer(struct efx_nic *efx,
 				       state->dma_addr,
 				       efx_rx_buf_size(efx),
 				       PCI_DMA_FROMDEVICE);
+		} else if (used_len) {
+			dma_sync_single_for_cpu(&efx->pci_dev->dev,
+						rx_buf->dma_addr, used_len,
+						DMA_FROM_DEVICE);
 		}
 	} else if (!(rx_buf->flags & EFX_RX_BUF_PAGE) && rx_buf->u.skb) {
 		pci_unmap_single(efx->pci_dev, rx_buf->dma_addr,
@@ -273,7 +278,7 @@  static void efx_free_rx_buffer(struct efx_nic *efx,
 static void efx_fini_rx_buffer(struct efx_rx_queue *rx_queue,
 			       struct efx_rx_buffer *rx_buf)
 {
-	efx_unmap_rx_buffer(rx_queue->efx, rx_buf);
+	efx_unmap_rx_buffer(rx_queue->efx, rx_buf, 0);
 	efx_free_rx_buffer(rx_queue->efx, rx_buf);
 }

@@ -538,10 +543,10 @@  void efx_rx_packet(struct efx_rx_queue *rx_queue, unsigned int index,
 		goto out;

-	/* Release card resources - assumes all RX buffers consumed in-order
-	 * per RX queue
+	/* Release and/or sync DMA mapping - assumes all RX buffers
+	 * consumed in-order per RX queue
 	 */
-	efx_unmap_rx_buffer(efx, rx_buf);
+	efx_unmap_rx_buffer(efx, rx_buf, len);

 	/* Prefetch nice and early so data will (hopefully) be in cache by
 	 * the time we look at it.
 	 */