
[10/20] powerpc/dma-noncoherent: don't disable irqs over kmap_atomic

Message ID 20180730163824.10064-11-hch@lst.de (mailing list archive)
State Not Applicable
Series [01/20] kernel/dma/direct: take DMA offset into account in dma_direct_supported

Checks

Context                       Check    Description
snowpatch_ozlabs/apply_patch  success  next/apply_patch Successfully applied
snowpatch_ozlabs/checkpatch   success  Test checkpatch on branch next

Commit Message

Christoph Hellwig July 30, 2018, 4:38 p.m. UTC
The requirement to disable local irqs over kmap_atomic is long gone,
so remove those calls.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 arch/powerpc/mm/dma-noncoherent.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

Comments

Benjamin Herrenschmidt Aug. 9, 2018, 12:27 a.m. UTC | #1
On Mon, 2018-07-30 at 18:38 +0200, Christoph Hellwig wrote:
> The requirement to disable local irqs over kmap_atomic is long gone,
> so remove those calls.

Really ? I'm trying to verify that and getting lost in a mess of macros
from hell in the per-cpu stuff but if you look at our implementation
of kmap_atomic_prot(), all it does is a preempt_disable(), and then
it uses kmap_atomic_idx_push():

	int idx = __this_cpu_inc_return(__kmap_atomic_idx) - 1;

Note the use of __this_cpu_inc_return(), not this_cpu_inc_return(),
ie this is the non-interrupt safe version...

Ben.
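
For reference, kmap_atomic_idx_push() looked roughly like this at the time (a sketch of the include/linux/highmem.h helper; the debug checks are quoted from memory):

	static inline int kmap_atomic_idx_push(void)
	{
		/*
		 * Non-irq-safe per-cpu increment: only correct if the
		 * caller already prevents an interrupt handler from
		 * nesting another kmap_atomic() on this CPU.
		 */
		int idx = __this_cpu_inc_return(__kmap_atomic_idx) - 1;

	#ifdef CONFIG_DEBUG_HIGHMEM
		WARN_ON_ONCE(in_irq() && !irqs_disabled());
		BUG_ON(idx >= KM_TYPE_NR);
	#endif
		return idx;
	}

The CONFIG_DEBUG_HIGHMEM warning encodes the rule the thread converges on below: calling kmap_atomic() from interrupt context is only allowed with interrupts disabled.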

> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  arch/powerpc/mm/dma-noncoherent.c | 6 +-----
>  1 file changed, 1 insertion(+), 5 deletions(-)
> 
> diff --git a/arch/powerpc/mm/dma-noncoherent.c b/arch/powerpc/mm/dma-noncoherent.c
> index 382528475433..d1c16456abac 100644
> --- a/arch/powerpc/mm/dma-noncoherent.c
> +++ b/arch/powerpc/mm/dma-noncoherent.c
> @@ -357,12 +357,10 @@ static inline void __dma_sync_page_highmem(struct page *page,
>  {
>  	size_t seg_size = min((size_t)(PAGE_SIZE - offset), size);
>  	size_t cur_size = seg_size;
> -	unsigned long flags, start, seg_offset = offset;
> +	unsigned long start, seg_offset = offset;
>  	int nr_segs = 1 + ((size - seg_size) + PAGE_SIZE - 1)/PAGE_SIZE;
>  	int seg_nr = 0;
>  
> -	local_irq_save(flags);
> -
>  	do {
>  		start = (unsigned long)kmap_atomic(page + seg_nr) + seg_offset;
>  
> @@ -378,8 +376,6 @@ static inline void __dma_sync_page_highmem(struct page *page,
>  		cur_size += seg_size;
>  		seg_offset = 0;
>  	} while (seg_nr < nr_segs);
> -
> -	local_irq_restore(flags);
>  }
>  #endif /* CONFIG_HIGHMEM */
>
Christoph Hellwig Aug. 22, 2018, 7:02 a.m. UTC | #2
On Thu, Aug 09, 2018 at 10:27:46AM +1000, Benjamin Herrenschmidt wrote:
> On Mon, 2018-07-30 at 18:38 +0200, Christoph Hellwig wrote:
> > The requirement to disable local irqs over kmap_atomic is long gone,
> > so remove those calls.
> 
> Really ? I'm trying to verify that and getting lost in a mess of macros
> from hell in the per-cpu stuff but if you look at our implementation
> of kmap_atomic_prot(), all it does is a preempt_disable(), and then
> it uses kmap_atomic_idx_push():
> 
> 	int idx = __this_cpu_inc_return(__kmap_atomic_idx) - 1;
> 
> Note the use of __this_cpu_inc_return(), not this_cpu_inc_return(),
> ie this is the non-interrupt safe version...

Looks like the powerpc variant indeed isn't safe.

I did look a bit more through the code and history, and it seems
like we removed the need to disable irqs when called from process
context a while ago, but we still require disabling irqs when called
from irq context.  Given that this code can also be called from
irq context we'll have to keep the local_irq_save.
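
That is, the function keeps roughly its pre-patch shape (a sketch based on the code quoted above; the cache maintenance and segment advance in the middle of the loop are elided here):

	unsigned long flags;

	local_irq_save(flags);

	do {
		start = (unsigned long)kmap_atomic(page + seg_nr) + seg_offset;

		/*
		 * ... flush/invalidate the cache for this segment,
		 * kunmap_atomic(), and advance seg_nr/seg_size as in
		 * the existing code ...
		 */

		cur_size += seg_size;
		seg_offset = 0;
	} while (seg_nr < nr_segs);

	local_irq_restore(flags);

With interrupts off around the whole loop, a nested interrupt handler cannot bump the per-cpu kmap_atomic index between the map and the unmap.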
Benjamin Herrenschmidt Aug. 22, 2018, 11:45 p.m. UTC | #3
On Wed, 2018-08-22 at 09:02 +0200, Christoph Hellwig wrote:
> On Thu, Aug 09, 2018 at 10:27:46AM +1000, Benjamin Herrenschmidt wrote:
> > On Mon, 2018-07-30 at 18:38 +0200, Christoph Hellwig wrote:
> > > The requirement to disable local irqs over kmap_atomic is long gone,
> > > so remove those calls.
> > 
> > Really ? I'm trying to verify that and getting lost in a mess of macros
> > from hell in the per-cpu stuff but if you look at our implementation
> > of kmap_atomic_prot(), all it does is a preempt_disable(), and then
> > it uses kmap_atomic_idx_push():
> > 
> > 	int idx = __this_cpu_inc_return(__kmap_atomic_idx) - 1;
> > 
> > Note the use of __this_cpu_inc_return(), not this_cpu_inc_return(),
> > ie this is the non-interrupt safe version...
> 
> Looks like the powerpc variant indeed isn't safe.
> 
> I did look a bit more through the code and history, and it seems
> like we removed the need to disable irqs when called from process
> context a while ago, but we still require disabling irqs when called
> from irq context.  Given that this code can also be called from
> irq context we'll have to keep the local_irq_save.

This is the same with x86, no ?

32-bit x86 kmap_atomic_prot is the same as ours...

In fact I wonder why there's only a preempt_disable() in there, since
it needs to be protected against interrupts ?

Or is it that we never actually call kmap_atomic_* these days from
interrupt, and the atomic versions are just about dealing with
spinlocks ?

Cheers,
Ben.
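
For comparison, the 32-bit x86 kmap_atomic_prot() of the same era follows the same pattern (a rough sketch from memory; details may differ):

	void *kmap_atomic_prot(struct page *page, pgprot_t prot)
	{
		unsigned long vaddr;
		int idx, type;

		preempt_disable();
		pagefault_disable();

		if (!PageHighMem(page))
			return page_address(page);

		/* same non-irq-safe per-cpu index as on powerpc */
		type = kmap_atomic_idx_push();
		idx = type + KM_TYPE_NR * smp_processor_id();
		vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
		set_pte(kmap_pte - idx, mk_pte(page, prot));

		return (void *)vaddr;
	}

So there, too, the burden of disabling interrupts (or of never calling kmap_atomic() from interrupt context) falls on the caller.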

Patch

diff --git a/arch/powerpc/mm/dma-noncoherent.c b/arch/powerpc/mm/dma-noncoherent.c
index 382528475433..d1c16456abac 100644
--- a/arch/powerpc/mm/dma-noncoherent.c
+++ b/arch/powerpc/mm/dma-noncoherent.c
@@ -357,12 +357,10 @@  static inline void __dma_sync_page_highmem(struct page *page,
 {
 	size_t seg_size = min((size_t)(PAGE_SIZE - offset), size);
 	size_t cur_size = seg_size;
-	unsigned long flags, start, seg_offset = offset;
+	unsigned long start, seg_offset = offset;
 	int nr_segs = 1 + ((size - seg_size) + PAGE_SIZE - 1)/PAGE_SIZE;
 	int seg_nr = 0;
 
-	local_irq_save(flags);
-
 	do {
 		start = (unsigned long)kmap_atomic(page + seg_nr) + seg_offset;
 
@@ -378,8 +376,6 @@  static inline void __dma_sync_page_highmem(struct page *page,
 		cur_size += seg_size;
 		seg_offset = 0;
 	} while (seg_nr < nr_segs);
-
-	local_irq_restore(flags);
 }
 #endif /* CONFIG_HIGHMEM */