From patchwork Tue Mar 26 15:20:32 2013
X-Patchwork-Submitter: Luis Henriques
X-Patchwork-Id: 231395
From: Luis Henriques
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org, kernel-team@lists.ubuntu.com
Cc: Ben Hutchings
Subject: [PATCH 133/150] sfc: Properly sync RX DMA buffer when it is not the last in the page
Date: Tue, 26 Mar 2013 15:20:32 +0000
Message-Id: <1364311249-14454-134-git-send-email-luis.henriques@canonical.com>
In-Reply-To: <1364311249-14454-1-git-send-email-luis.henriques@canonical.com>
References: <1364311249-14454-1-git-send-email-luis.henriques@canonical.com>

3.5.7.9 -stable review patch.  If anyone has any objections, please let me know.

------------------

From: Ben Hutchings

commit 3a68f19d7afb80f548d016effbc6ed52643a8085 upstream.

We may currently allocate two RX DMA buffers to a page, and only unmap the
page when the second is completed.  We do not sync the first RX buffer to
be completed; this can result in packet loss or corruption if the last RX
buffer completed in a NAPI poll is the first in a page and is not
DMA-coherent.  (In the middle of a NAPI poll, we will handle the following
RX completion and unmap the page *before* looking at the content of the
first buffer.)
Signed-off-by: Ben Hutchings
Signed-off-by: Luis Henriques
---
 drivers/net/ethernet/sfc/rx.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/sfc/rx.c b/drivers/net/ethernet/sfc/rx.c
index 243e91f..622b0d7 100644
--- a/drivers/net/ethernet/sfc/rx.c
+++ b/drivers/net/ethernet/sfc/rx.c
@@ -240,7 +240,8 @@ static int efx_init_rx_buffers_page(struct efx_rx_queue *rx_queue)
 }
 
 static void efx_unmap_rx_buffer(struct efx_nic *efx,
-				struct efx_rx_buffer *rx_buf)
+				struct efx_rx_buffer *rx_buf,
+				unsigned int used_len)
 {
 	if ((rx_buf->flags & EFX_RX_BUF_PAGE) && rx_buf->u.page) {
 		struct efx_rx_page_state *state;
@@ -251,6 +252,10 @@ static void efx_unmap_rx_buffer(struct efx_nic *efx,
 				       state->dma_addr,
 				       efx_rx_buf_size(efx),
 				       PCI_DMA_FROMDEVICE);
+		} else if (used_len) {
+			dma_sync_single_for_cpu(&efx->pci_dev->dev,
+						rx_buf->dma_addr, used_len,
+						DMA_FROM_DEVICE);
 		}
 	} else if (!(rx_buf->flags & EFX_RX_BUF_PAGE) && rx_buf->u.skb) {
 		pci_unmap_single(efx->pci_dev, rx_buf->dma_addr,
@@ -273,7 +278,7 @@ static void efx_free_rx_buffer(struct efx_nic *efx,
 static void efx_fini_rx_buffer(struct efx_rx_queue *rx_queue,
 			       struct efx_rx_buffer *rx_buf)
 {
-	efx_unmap_rx_buffer(rx_queue->efx, rx_buf);
+	efx_unmap_rx_buffer(rx_queue->efx, rx_buf, 0);
 	efx_free_rx_buffer(rx_queue->efx, rx_buf);
 }
 
@@ -538,10 +543,10 @@ void efx_rx_packet(struct efx_rx_queue *rx_queue, unsigned int index,
 		goto out;
 	}
 
-	/* Release card resources - assumes all RX buffers consumed in-order
-	 * per RX queue
+	/* Release and/or sync DMA mapping - assumes all RX buffers
+	 * consumed in-order per RX queue
 	 */
-	efx_unmap_rx_buffer(efx, rx_buf);
+	efx_unmap_rx_buffer(efx, rx_buf, len);
 
 	/* Prefetch nice and early so data will (hopefully) be in cache by
 	 * the time we look at it.
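
[Editor's note: for readers unfamiliar with the streaming DMA API, the rule
the fix applies can be sketched as below.  This is a minimal illustration
under stated assumptions, not the driver's actual code: my_rx_buf,
my_rx_complete() and consume_packet() are hypothetical names; only
dma_unmap_page() and dma_sync_single_for_cpu() are the real kernel API.
The point is that a buffer which stays DMA-mapped after the device has
written into it must be explicitly synced before the CPU reads it.]

#include <linux/dma-mapping.h>

/* Hypothetical RX buffer descriptor: two of these share one page. */
struct my_rx_buf {
	dma_addr_t page_dma_addr;	/* mapping of the whole page */
	dma_addr_t dma_addr;		/* this buffer's slice of the page */
	bool last_in_page;		/* second of the two buffers? */
};

static void consume_packet(struct my_rx_buf *buf, unsigned int len)
{
	/* hypothetical stub: reads the received bytes */
}

static void my_rx_complete(struct device *dev, struct my_rx_buf *buf,
			   unsigned int used_len)
{
	if (buf->last_in_page) {
		/* Unmapping hands ownership of the whole page back to
		 * the CPU, including any earlier buffer in it.
		 */
		dma_unmap_page(dev, buf->page_dma_addr, PAGE_SIZE,
			       DMA_FROM_DEVICE);
	} else if (used_len) {
		/* The page stays mapped for the other buffer, so the
		 * bytes the device wrote into this one must be synced
		 * before the CPU may safely look at them.
		 */
		dma_sync_single_for_cpu(dev, buf->dma_addr, used_len,
					DMA_FROM_DEVICE);
	}

	/* Only now is it safe to read the packet contents. */
	consume_packet(buf, used_len);
}

[The buggy path had only the first branch, so a packet landing in the
first buffer of a page could be read through a stale cache line.  Passing
used_len - the received length from efx_rx_packet(), or 0 from
efx_fini_rx_buffer(), where the contents are discarded - closes that
window.]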