diff mbox series

[v1] dmaengine: tegra-apb: Handle DMA_PREP_INTERRUPT flag properly

Message ID 20190505181235.14798-1-digetx@gmail.com
State Deferred
Headers show
Series [v1] dmaengine: tegra-apb: Handle DMA_PREP_INTERRUPT flag properly | expand

Commit Message

Dmitry Osipenko May 5, 2019, 6:12 p.m. UTC
The DMA_PREP_INTERRUPT flag means that the descriptor's callback should
be invoked upon transfer completion, and that's it. For some reason the
driver completely disables hardware interrupt handling, leaving the
channel in an unusable state if a transfer is issued with the flag
unset. Note that none of the relevant drivers issue transfers without
the flag, hence this patch doesn't fix any actual bug and merely fixes
a potential problem.

Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
---
 drivers/dma/tegra20-apb-dma.c | 41 ++++++++++++++++++++++++-----------
 1 file changed, 28 insertions(+), 13 deletions(-)
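For context on the flag's intended semantics: DMA_PREP_INTERRUPT only controls whether the client's completion callback runs; it should not affect the channel's ability to keep processing transfers. A minimal standalone model of that behavior (hypothetical names, not the actual driver code):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model: the flag gates only the client callback. */
#define DMA_PREP_INTERRUPT (1u << 0)

struct desc_model {
	unsigned int flags;
	void (*callback)(void *data);
	void *cb_data;
};

static int cb_calls; /* counts client callbacks actually delivered */

static void client_cb(void *data)
{
	(void)data;
	cb_calls++;
}

/* Completion path: the channel stays usable either way; the flag only
 * decides whether the client callback is invoked. */
static void complete_desc(struct desc_model *d)
{
	if ((d->flags & DMA_PREP_INTERRUPT) && d->callback)
		d->callback(d->cb_data);
}
```

Completing one descriptor with the flag set and one without should deliver exactly one callback, while both transfers still finish normally.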

Comments

Jon Hunter May 8, 2019, 9:24 a.m. UTC | #1
On 05/05/2019 19:12, Dmitry Osipenko wrote:
> The DMA_PREP_INTERRUPT flag means that the descriptor's callback should
> be invoked upon transfer completion, and that's it. For some reason the
> driver completely disables hardware interrupt handling, leaving the
> channel in an unusable state if a transfer is issued with the flag
> unset. Note that none of the relevant drivers issue transfers without
> the flag, hence this patch doesn't fix any actual bug and merely fixes
> a potential problem.
> 
> Signed-off-by: Dmitry Osipenko <digetx@gmail.com>

From having a look at this, I am guessing that we have never really
tested the case where the DMA_PREP_INTERRUPT flag is not set because,
as you mentioned, it does not look like this will work at all!

Is there a use-case you are looking at where you don't set the
DMA_PREP_INTERRUPT flag?

If not, I am wondering if we should even bother supporting this and
warn if it is not set. AFAICT it does not appear to be mandatory, but
Vinod can comment more on this.

Cheers
Jon
Dmitry Osipenko May 8, 2019, 12:37 p.m. UTC | #2
08.05.2019 12:24, Jon Hunter wrote:
> 
> On 05/05/2019 19:12, Dmitry Osipenko wrote:
>> The DMA_PREP_INTERRUPT flag means that the descriptor's callback should
>> be invoked upon transfer completion, and that's it. For some reason the
>> driver completely disables hardware interrupt handling, leaving the
>> channel in an unusable state if a transfer is issued with the flag
>> unset. Note that none of the relevant drivers issue transfers without
>> the flag, hence this patch doesn't fix any actual bug and merely fixes
>> a potential problem.
>>
>> Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
> 
> From having a look at this, I am guessing that we have never really
> tested the case where DMA_PREP_INTERRUPT flag is not set because as you
> mentioned it does not look like this will work at all!
> 
> Is there a use-case you are looking at where you don't set the
> DMA_PREP_INTERRUPT flag?

No. I just noticed it while I was checking whether we really need to
handle the BUSY bit state for Ben's "accurate reporting" patch.

> If not I am wondering if we should even bother supporting this and warn
> if it is not set. AFAICT it does not appear to be mandatory, but maybe
> Vinod can comment more on this.

A warning message would also be okay if it's not mandatory.
Vinod Koul May 21, 2019, 4:55 a.m. UTC | #3
On 08-05-19, 10:24, Jon Hunter wrote:
> 
> On 05/05/2019 19:12, Dmitry Osipenko wrote:
> > The DMA_PREP_INTERRUPT flag means that the descriptor's callback should
> > be invoked upon transfer completion, and that's it. For some reason the
> > driver completely disables hardware interrupt handling, leaving the
> > channel in an unusable state if a transfer is issued with the flag
> > unset. Note that none of the relevant drivers issue transfers without
> > the flag, hence this patch doesn't fix any actual bug and merely fixes
> > a potential problem.
> > 
> > Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
> 
> From having a look at this, I am guessing that we have never really
> tested the case where DMA_PREP_INTERRUPT flag is not set because as you
> mentioned it does not look like this will work at all!

That is a fair argument.
> 
> Is there a use-case you are looking at where you don't set the
> DMA_PREP_INTERRUPT flag?
> 
> If not I am wondering if we should even bother supporting this and warn
> if it is not set. AFAICT it does not appear to be mandatory, but maybe
> Vinod can comment more on this.

This is supposed to be used in the cases where you submit a bunch of
descriptors and selectively don't want an interrupt in a few cases...

Is this such a case?

Thanks
~Vinod
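The pattern Vinod describes can be sketched standalone (hypothetical names; a real client would go through the dmaengine prep calls): queue several transfers, but request a completion callback only for the last one in the batch.

```c
#include <assert.h>
#include <stddef.h>

#define DMA_PREP_INTERRUPT (1u << 0)

/* Hypothetical stand-in for a prepared transfer descriptor. */
struct xfer_model {
	unsigned int flags;
};

/* Flag only the final transfer in the batch for a callback. */
static void prepare_batch(struct xfer_model *x, size_t n)
{
	for (size_t i = 0; i < n; i++)
		x[i].flags = (i == n - 1) ? DMA_PREP_INTERRUPT : 0;
}

static int callbacks_requested(const struct xfer_model *x, size_t n)
{
	int count = 0;

	for (size_t i = 0; i < n; i++)
		if (x[i].flags & DMA_PREP_INTERRUPT)
			count++;
	return count;
}
```

With this pattern, a batch of four transfers raises a single completion callback at the end instead of four.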
Dmitry Osipenko May 21, 2019, 1:46 p.m. UTC | #4
21.05.2019 7:55, Vinod Koul wrote:
> On 08-05-19, 10:24, Jon Hunter wrote:
>>
>> On 05/05/2019 19:12, Dmitry Osipenko wrote:
>>> The DMA_PREP_INTERRUPT flag means that the descriptor's callback should
>>> be invoked upon transfer completion, and that's it. For some reason the
>>> driver completely disables hardware interrupt handling, leaving the
>>> channel in an unusable state if a transfer is issued with the flag
>>> unset. Note that none of the relevant drivers issue transfers without
>>> the flag, hence this patch doesn't fix any actual bug and merely fixes
>>> a potential problem.
>>>
>>> Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
>>
>> From having a look at this, I am guessing that we have never really
>> tested the case where DMA_PREP_INTERRUPT flag is not set because as you
>> mentioned it does not look like this will work at all!
> 
> That is a fair argument
>>
>> Is there a use-case you are looking at where you don't set the
>> DMA_PREP_INTERRUPT flag?
>>
>> If not I am wondering if we should even bother supporting this and warn
>> if it is not set. AFAICT it does not appear to be mandatory, but maybe
>> Vinod can comment more on this.
> 
> This is supposed to be used in the cases where you submit a bunch of
> descriptors and selectively don't want an interrupt in a few cases...
> 
> Is this such a case?

The flag is set by device drivers. AFAIK, none of the drivers used on
Tegra SoCs make use of that flag, at least not in upstream.
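The patch below encodes "no callback requested" as cb_count == -1, so the completion paths still take the interrupt but skip the callback bookkeeping. That sentinel convention can be modeled in isolation (hypothetical helper names mirroring the hunks below):

```c
#include <assert.h>
#include <stdbool.h>

/* Model of the patch's cb_count convention:
 *   -1 -> DMA_PREP_INTERRUPT was not set; never queue a callback
 *    0 -> callback requested but not yet queued
 *   >0 -> already queued; just bump the pending count
 */
struct dma_desc_model {
	int cb_count;
	int on_cb_list; /* stands in for list_add_tail(&cb_node, ...) */
};

static void prep_desc(struct dma_desc_model *d, bool want_interrupt)
{
	d->cb_count = want_interrupt ? 0 : -1;
	d->on_cb_list = 0;
}

/* Mirrors the completion-path hunks in the patch: queue the callback
 * node once, count every completion, skip entirely when cb_count < 0. */
static void on_transfer_done(struct dma_desc_model *d)
{
	if (d->cb_count >= 0) {
		if (!d->cb_count)
			d->on_cb_list = 1;
		d->cb_count++;
	}
}
```

A descriptor prepared without the flag keeps cb_count at -1 across any number of completions, so it is never added to the callback list, while a flagged descriptor is queued exactly once and its count tracks completions.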

Patch

diff --git a/drivers/dma/tegra20-apb-dma.c b/drivers/dma/tegra20-apb-dma.c
index cf462b1abc0b..29d972b7546f 100644
--- a/drivers/dma/tegra20-apb-dma.c
+++ b/drivers/dma/tegra20-apb-dma.c
@@ -561,6 +561,9 @@  static void tegra_dma_abort_all(struct tegra_dma_channel *tdc)
 			dma_desc->dma_status = DMA_ERROR;
 			list_add_tail(&dma_desc->node, &tdc->free_dma_desc);
 
+			if (dma_desc->cb_count < 0)
+				continue;
+
 			/* Add in cb list if it is not there. */
 			if (!dma_desc->cb_count)
 				list_add_tail(&dma_desc->cb_node,
@@ -616,9 +619,13 @@  static void handle_once_dma_done(struct tegra_dma_channel *tdc,
 	if (sgreq->last_sg) {
 		dma_desc->dma_status = DMA_COMPLETE;
 		dma_cookie_complete(&dma_desc->txd);
-		if (!dma_desc->cb_count)
-			list_add_tail(&dma_desc->cb_node, &tdc->cb_desc);
-		dma_desc->cb_count++;
+		if (dma_desc->cb_count >= 0) {
+			if (!dma_desc->cb_count)
+				list_add_tail(&dma_desc->cb_node,
+					      &tdc->cb_desc);
+
+			dma_desc->cb_count++;
+		}
 		list_add_tail(&dma_desc->node, &tdc->free_dma_desc);
 	}
 	list_add_tail(&sgreq->node, &tdc->free_sg_req);
@@ -645,9 +652,11 @@  static void handle_cont_sngl_cycle_dma_done(struct tegra_dma_channel *tdc,
 		dma_desc->bytes_requested;
 
 	/* Callback need to be call */
-	if (!dma_desc->cb_count)
-		list_add_tail(&dma_desc->cb_node, &tdc->cb_desc);
-	dma_desc->cb_count++;
+	if (dma_desc->cb_count >= 0) {
+		if (!dma_desc->cb_count)
+			list_add_tail(&dma_desc->cb_node, &tdc->cb_desc);
+		dma_desc->cb_count++;
+	}
 
 	/* If not last req then put at end of pending list */
 	if (!list_is_last(&sgreq->node, &tdc->pending_sg_req)) {
@@ -802,7 +811,7 @@  static int tegra_dma_terminate_all(struct dma_chan *dc)
 		dma_desc  = list_first_entry(&tdc->cb_desc,
 					typeof(*dma_desc), cb_node);
 		list_del(&dma_desc->cb_node);
-		dma_desc->cb_count = 0;
+		dma_desc->cb_count = -1;
 	}
 	spin_unlock_irqrestore(&tdc->lock, flags);
 	return 0;
@@ -988,8 +997,7 @@  static struct dma_async_tx_descriptor *tegra_dma_prep_slave_sg(
 		csr |= tdc->slave_id << TEGRA_APBDMA_CSR_REQ_SEL_SHIFT;
 	}
 
-	if (flags & DMA_PREP_INTERRUPT)
-		csr |= TEGRA_APBDMA_CSR_IE_EOC;
+	csr |= TEGRA_APBDMA_CSR_IE_EOC;
 
 	apb_seq |= TEGRA_APBDMA_APBSEQ_WRAP_WORD_1;
 
@@ -1000,11 +1008,15 @@  static struct dma_async_tx_descriptor *tegra_dma_prep_slave_sg(
 	}
 	INIT_LIST_HEAD(&dma_desc->tx_list);
 	INIT_LIST_HEAD(&dma_desc->cb_node);
-	dma_desc->cb_count = 0;
 	dma_desc->bytes_requested = 0;
 	dma_desc->bytes_transferred = 0;
 	dma_desc->dma_status = DMA_IN_PROGRESS;
 
+	if (flags & DMA_PREP_INTERRUPT)
+		dma_desc->cb_count = 0;
+	else
+		dma_desc->cb_count = -1;
+
 	/* Make transfer requests */
 	for_each_sg(sgl, sg, sg_len, i) {
 		u32 len, mem;
@@ -1131,8 +1143,7 @@  static struct dma_async_tx_descriptor *tegra_dma_prep_dma_cyclic(
 		csr |= tdc->slave_id << TEGRA_APBDMA_CSR_REQ_SEL_SHIFT;
 	}
 
-	if (flags & DMA_PREP_INTERRUPT)
-		csr |= TEGRA_APBDMA_CSR_IE_EOC;
+	csr |= TEGRA_APBDMA_CSR_IE_EOC;
 
 	apb_seq |= TEGRA_APBDMA_APBSEQ_WRAP_WORD_1;
 
@@ -1144,7 +1155,11 @@  static struct dma_async_tx_descriptor *tegra_dma_prep_dma_cyclic(
 
 	INIT_LIST_HEAD(&dma_desc->tx_list);
 	INIT_LIST_HEAD(&dma_desc->cb_node);
-	dma_desc->cb_count = 0;
+
+	if (flags & DMA_PREP_INTERRUPT)
+		dma_desc->cb_count = 0;
+	else
+		dma_desc->cb_count = -1;
 
 	dma_desc->bytes_transferred = 0;
 	dma_desc->bytes_requested = buf_len;