Message ID: 1341217328-6676-1-git-send-email-ldewangan@nvidia.com
State: Not Applicable, archived
On 07/02/2012 02:22 AM, Laxman Dewangan wrote:
> In cyclic mode of DMA, the byte transferred can be more
> than the requested size and in this case, calculating
> residuals based on the current position of DMA transfer to
> bytes requested i.e. bytes required to transfer to reach
> bytes requested from current DMA position.
>
> Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>

This makes sense to me, although I wonder if details like this aren't
something that the dmaengine core should be handling.

Acked-by: Stephen Warren <swarren@wwwdotorg.org>
--
To unsubscribe from this list: send the line "unsubscribe linux-tegra" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On Mon, 2012-07-02 at 10:02 -0600, Stephen Warren wrote:
> On 07/02/2012 02:22 AM, Laxman Dewangan wrote:
> > In cyclic mode of DMA, the byte transferred can be more
> > than the requested size and in this case, calculating
> > residuals based on the current position of DMA transfer to
> > bytes requested i.e. bytes required to transfer to reach
> > bytes requested from current DMA position.
> >
> > Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
>
> This makes sense to me, although I wonder if details like this aren't
> something that the dmaengine core should be handling.

No, the core doesn't know anything about how much you are transferring
and where you are. That is for the driver to calculate and provide.
On Friday 13 July 2012 08:45 AM, Vinod Koul wrote:
> On Mon, 2012-07-02 at 10:02 -0600, Stephen Warren wrote:
> > On 07/02/2012 02:22 AM, Laxman Dewangan wrote:
> > > In cyclic mode of DMA, the byte transferred can be more
> > > than the requested size and in this case, calculating
> > > residuals based on the current position of DMA transfer to
> > > bytes requested i.e. bytes required to transfer to reach
> > > bytes requested from current DMA position.
> > >
> > > Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
> >
> > This makes sense to me, although I wonder if details like this aren't
> > something that the dmaengine core should be handling.
>
> No, the core doesn't know anything about how much you are transferring
> and where you are. That is for the driver to calculate and provide.

Just for confirmation, are you going to apply this patch or do I need to
do anything here?

Thanks,
Laxman
On Fri, 2012-07-13 at 11:09 +0530, Laxman Dewangan wrote:
> On Friday 13 July 2012 08:45 AM, Vinod Koul wrote:
> > On Mon, 2012-07-02 at 10:02 -0600, Stephen Warren wrote:
> > > On 07/02/2012 02:22 AM, Laxman Dewangan wrote:
> > > > In cyclic mode of DMA, the byte transferred can be more
> > > > than the requested size and in this case, calculating
> > > > residuals based on the current position of DMA transfer to
> > > > bytes requested i.e. bytes required to transfer to reach
> > > > bytes requested from current DMA position.
> > > >
> > > > Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
> > >
> > > This makes sense to me, although I wonder if details like this aren't
> > > something that the dmaengine core should be handling.
> >
> > No, the core doesn't know anything about how much you are transferring
> > and where you are. That is for the driver to calculate and provide.
>
> Just for confirmation, are you going to apply this patch or do I need to
> do anything here?

??? You didn't get my other mail about applying?
On Friday 13 July 2012 11:58 AM, Vinod Koul wrote:
> On Fri, 2012-07-13 at 11:09 +0530, Laxman Dewangan wrote:
>
> You didn't get my other mail about applying?

Read it carefully now and saw that both are applied. Thanks for taking care
of it.

Thanks,
Laxman
diff --git a/drivers/dma/tegra20-apb-dma.c b/drivers/dma/tegra20-apb-dma.c
index 340c617..d52dbc6 100644
--- a/drivers/dma/tegra20-apb-dma.c
+++ b/drivers/dma/tegra20-apb-dma.c
@@ -731,6 +731,7 @@ static enum dma_status tegra_dma_tx_status(struct dma_chan *dc,
 	struct tegra_dma_sg_req *sg_req;
 	enum dma_status ret;
 	unsigned long flags;
+	unsigned int residual;

 	spin_lock_irqsave(&tdc->lock, flags);

@@ -744,9 +745,10 @@ static enum dma_status tegra_dma_tx_status(struct dma_chan *dc,
 	/* Check on wait_ack desc status */
 	list_for_each_entry(dma_desc, &tdc->free_dma_desc, node) {
 		if (dma_desc->txd.cookie == cookie) {
-			dma_set_residue(txstate,
-				dma_desc->bytes_requested -
-				dma_desc->bytes_transferred);
+			residual = dma_desc->bytes_requested -
+				(dma_desc->bytes_transferred %
+					dma_desc->bytes_requested);
+			dma_set_residue(txstate, residual);
 			ret = dma_desc->dma_status;
 			spin_unlock_irqrestore(&tdc->lock, flags);
 			return ret;

@@ -757,9 +759,10 @@ static enum dma_status tegra_dma_tx_status(struct dma_chan *dc,
 	list_for_each_entry(sg_req, &tdc->pending_sg_req, node) {
 		dma_desc = sg_req->dma_desc;
 		if (dma_desc->txd.cookie == cookie) {
-			dma_set_residue(txstate,
-				dma_desc->bytes_requested -
-				dma_desc->bytes_transferred);
+			residual = dma_desc->bytes_requested -
+				(dma_desc->bytes_transferred %
+					dma_desc->bytes_requested);
+			dma_set_residue(txstate, residual);
 			ret = dma_desc->dma_status;
 			spin_unlock_irqrestore(&tdc->lock, flags);
 			return ret;
In cyclic mode of DMA, the bytes transferred can be more than the
requested size. In this case, calculate the residual based on the
current position of the DMA transfer relative to the bytes requested,
i.e. the number of bytes still required to reach the requested size
from the current DMA position.

Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
---
 drivers/dma/tegra20-apb-dma.c |   15 +++++++++------
 1 files changed, 9 insertions(+), 6 deletions(-)