From patchwork Mon Jul 2 08:22:07 2012
X-Patchwork-Submitter: Laxman Dewangan
X-Patchwork-Id: 168506
From: Laxman Dewangan
Subject: [PATCH 2/2] dma: tegra: fix residual calculation for cyclic case
Date: Mon, 2 Jul 2012 13:52:07 +0530
Message-ID: <1341217328-6676-1-git-send-email-ldewangan@nvidia.com>
X-Mailing-List: linux-tegra@vger.kernel.org

In cyclic DMA mode, the number of bytes transferred can exceed the
requested size. In that case, calculate the residual from the current
position of the DMA transfer relative to the bytes requested, i.e. the
number of bytes still required to reach bytes_requested from the
current DMA position.
Signed-off-by: Laxman Dewangan
Acked-by: Stephen Warren
---
 drivers/dma/tegra20-apb-dma.c |   15 +++++++++------
 1 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/drivers/dma/tegra20-apb-dma.c b/drivers/dma/tegra20-apb-dma.c
index 340c617..d52dbc6 100644
--- a/drivers/dma/tegra20-apb-dma.c
+++ b/drivers/dma/tegra20-apb-dma.c
@@ -731,6 +731,7 @@ static enum dma_status tegra_dma_tx_status(struct dma_chan *dc,
 	struct tegra_dma_sg_req *sg_req;
 	enum dma_status ret;
 	unsigned long flags;
+	unsigned int residual;
 
 	spin_lock_irqsave(&tdc->lock, flags);
 
@@ -744,9 +745,10 @@ static enum dma_status tegra_dma_tx_status(struct dma_chan *dc,
 	/* Check on wait_ack desc status */
 	list_for_each_entry(dma_desc, &tdc->free_dma_desc, node) {
 		if (dma_desc->txd.cookie == cookie) {
-			dma_set_residue(txstate,
-				dma_desc->bytes_requested -
-				dma_desc->bytes_transferred);
+			residual = dma_desc->bytes_requested -
+				(dma_desc->bytes_transferred %
+					dma_desc->bytes_requested);
+			dma_set_residue(txstate, residual);
 			ret = dma_desc->dma_status;
 			spin_unlock_irqrestore(&tdc->lock, flags);
 			return ret;
@@ -757,9 +759,10 @@ static enum dma_status tegra_dma_tx_status(struct dma_chan *dc,
 	list_for_each_entry(sg_req, &tdc->pending_sg_req, node) {
 		dma_desc = sg_req->dma_desc;
 		if (dma_desc->txd.cookie == cookie) {
-			dma_set_residue(txstate,
-				dma_desc->bytes_requested -
-				dma_desc->bytes_transferred);
+			residual = dma_desc->bytes_requested -
+				(dma_desc->bytes_transferred %
+					dma_desc->bytes_requested);
+			dma_set_residue(txstate, residual);
 			ret = dma_desc->dma_status;
 			spin_unlock_irqrestore(&tdc->lock, flags);
 			return ret;
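
For illustration, below is a minimal standalone sketch of the
modulo-based residual calculation the hunks above introduce. The values
are hypothetical (a 4096-byte cyclic buffer after two and a half
cycles); only the arithmetic mirrors the patch.

#include <stdio.h>

/*
 * Hypothetical values: one cycle of the cyclic buffer is 4096 bytes
 * (bytes_requested); the DMA has wrapped twice and is halfway through
 * the third cycle (bytes_transferred = 10240).
 */
int main(void)
{
	unsigned int bytes_requested = 4096;
	unsigned int bytes_transferred = 10240;

	/*
	 * Before the patch: bytes_requested - bytes_transferred, which
	 * underflows once the transfer wraps past one full cycle.
	 *
	 * After the patch: reduce bytes_transferred modulo
	 * bytes_requested so only the position inside the current
	 * cycle counts: 4096 - (10240 % 4096) = 2048.
	 */
	unsigned int residual = bytes_requested -
			(bytes_transferred % bytes_requested);

	printf("residual = %u\n", residual);	/* prints 2048 */
	return 0;
}

Note that when bytes_transferred is an exact multiple of
bytes_requested, the formula reports a residual of bytes_requested
rather than 0, i.e. a full cycle remaining, which is what the patched
code yields at a cycle boundary.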