
powerpc/64s/radix: fix preempt imbalance in TLB flush

Message ID 20171010101641.5714-1-npiggin@gmail.com (mailing list archive)
State Superseded
Series powerpc/64s/radix: fix preempt imbalance in TLB flush

Commit Message

Nicholas Piggin Oct. 10, 2017, 10:16 a.m. UTC
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/mm/tlb-radix.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

Comments

Aneesh Kumar K.V Oct. 10, 2017, 10:22 a.m. UTC | #1
On 10/10/2017 03:46 PM, Nicholas Piggin wrote:
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>   arch/powerpc/mm/tlb-radix.c | 3 ++-
>   1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/mm/tlb-radix.c b/arch/powerpc/mm/tlb-radix.c
> index b3e849c4886e..de414460287a 100644
> --- a/arch/powerpc/mm/tlb-radix.c
> +++ b/arch/powerpc/mm/tlb-radix.c
> @@ -358,7 +358,7 @@ void radix__flush_tlb_collapsed_pmd(struct mm_struct *mm, unsigned long addr)
>   	unsigned long ap = mmu_get_ap(mmu_virtual_psize);
>   	unsigned long pid, end;
> 
> -
> +	preempt_disable();
>   	pid = mm ? mm->context.id : 0;
>   	if (unlikely(pid == MMU_NO_CONTEXT))
>   		goto no_context;
> @@ -366,6 +366,7 @@ void radix__flush_tlb_collapsed_pmd(struct mm_struct *mm, unsigned long addr)
>   	/* 4k page size, just blow the world */
>   	if (PAGE_SIZE == 0x1000) {
>   		radix__flush_all_mm(mm);
> +		preempt_enable();
>   		return;
>   	}
> 

Can't we do a preempt_disable() before the if (local) check? That way we
don't need that preempt_enable() in the PAGE_SIZE == 0x1000 path. We
already do disable/enable correctly in radix__flush_all_mm(mm).

-aneesh
Nicholas Piggin Oct. 10, 2017, 10:32 a.m. UTC | #2
On Tue, 10 Oct 2017 15:52:02 +0530
"Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com> wrote:

> On 10/10/2017 03:46 PM, Nicholas Piggin wrote:
> > Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> > ---
> >   arch/powerpc/mm/tlb-radix.c | 3 ++-
> >   1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/powerpc/mm/tlb-radix.c b/arch/powerpc/mm/tlb-radix.c
> > index b3e849c4886e..de414460287a 100644
> > --- a/arch/powerpc/mm/tlb-radix.c
> > +++ b/arch/powerpc/mm/tlb-radix.c
> > @@ -358,7 +358,7 @@ void radix__flush_tlb_collapsed_pmd(struct mm_struct *mm, unsigned long addr)
> >   	unsigned long ap = mmu_get_ap(mmu_virtual_psize);
> >   	unsigned long pid, end;
> > 
> > -
> > +	preempt_disable();
> >   	pid = mm ? mm->context.id : 0;
> >   	if (unlikely(pid == MMU_NO_CONTEXT))
> >   		goto no_context;
> > @@ -366,6 +366,7 @@ void radix__flush_tlb_collapsed_pmd(struct mm_struct *mm, unsigned long addr)
> >   	/* 4k page size, just blow the world */
> >   	if (PAGE_SIZE == 0x1000) {
> >   		radix__flush_all_mm(mm);
> > +		preempt_enable();
> >   		return;
> >   	}
> >   
> 
> Can't we do a preempt_disable() before the if (local) check? That way we
> don't need that preempt_enable() in the PAGE_SIZE == 0x1000 path. We
> already do disable/enable correctly in radix__flush_all_mm(mm).

Well this is just to fix the imbalance. Nested preempt doesn't matter
much, and these are all no-ops for !preempt kernel, unless you turn on
debugging.

I already proposed another patch to bring those local tests under
preempt disable but no response yet

https://patchwork.ozlabs.org/patch/811061/

Thanks,
Nick
Aneesh Kumar K.V Oct. 10, 2017, 1:39 p.m. UTC | #3
On 10/10/2017 04:02 PM, Nicholas Piggin wrote:
> On Tue, 10 Oct 2017 15:52:02 +0530
> "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com> wrote:
>
>> On 10/10/2017 03:46 PM, Nicholas Piggin wrote:
>>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>>> ---
>>>    arch/powerpc/mm/tlb-radix.c | 3 ++-
>>>    1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/powerpc/mm/tlb-radix.c b/arch/powerpc/mm/tlb-radix.c
>>> index b3e849c4886e..de414460287a 100644
>>> --- a/arch/powerpc/mm/tlb-radix.c
>>> +++ b/arch/powerpc/mm/tlb-radix.c
>>> @@ -358,7 +358,7 @@ void radix__flush_tlb_collapsed_pmd(struct mm_struct *mm, unsigned long addr)
>>>    	unsigned long ap = mmu_get_ap(mmu_virtual_psize);
>>>    	unsigned long pid, end;
>>>
>>> -
>>> +	preempt_disable();
>>>    	pid = mm ? mm->context.id : 0;
>>>    	if (unlikely(pid == MMU_NO_CONTEXT))
>>>    		goto no_context;
>>> @@ -366,6 +366,7 @@ void radix__flush_tlb_collapsed_pmd(struct mm_struct *mm, unsigned long addr)
>>>    	/* 4k page size, just blow the world */
>>>    	if (PAGE_SIZE == 0x1000) {
>>>    		radix__flush_all_mm(mm);
>>> +		preempt_enable();
>>>    		return;
>>>    	}
>>>    
>> Can't we do a preempt_disable() before the if (local) check? That way we
>> don't need that preempt_enable() in the PAGE_SIZE == 0x1000 path. We
>> already do disable/enable correctly in radix__flush_all_mm(mm).
> Well this is just to fix the imbalance. Nested preempt doesn't matter
> much, and these are all no-ops for !preempt kernel, unless you turn on
> debugging.

But this patch is still doing the mm_is_thread_local() test outside 
preempt_disable() right?


> I already proposed another patch to bring those local tests under
> preempt disable but no response yet
>
> https://patchwork.ozlabs.org/patch/811061/
>

That is a much better patch?

-aneesh
Nicholas Piggin Oct. 10, 2017, 2:09 p.m. UTC | #4
On Tue, 10 Oct 2017 19:09:54 +0530
"Aneesh Kumar K.V" <aneeshkumar.opensource@gmail.com> wrote:

> 
> 
> On 10/10/2017 04:02 PM, Nicholas Piggin wrote:
> > On Tue, 10 Oct 2017 15:52:02 +0530
> > "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com> wrote:
> >
> >> On 10/10/2017 03:46 PM, Nicholas Piggin wrote:
> >>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> >>> ---
> >>>    arch/powerpc/mm/tlb-radix.c | 3 ++-
> >>>    1 file changed, 2 insertions(+), 1 deletion(-)
> >>>
> >>> diff --git a/arch/powerpc/mm/tlb-radix.c b/arch/powerpc/mm/tlb-radix.c
> >>> index b3e849c4886e..de414460287a 100644
> >>> --- a/arch/powerpc/mm/tlb-radix.c
> >>> +++ b/arch/powerpc/mm/tlb-radix.c
> >>> @@ -358,7 +358,7 @@ void radix__flush_tlb_collapsed_pmd(struct mm_struct *mm, unsigned long addr)
> >>>    	unsigned long ap = mmu_get_ap(mmu_virtual_psize);
> >>>    	unsigned long pid, end;
> >>>
> >>> -
> >>> +	preempt_disable();
> >>>    	pid = mm ? mm->context.id : 0;
> >>>    	if (unlikely(pid == MMU_NO_CONTEXT))
> >>>    		goto no_context;
> >>> @@ -366,6 +366,7 @@ void radix__flush_tlb_collapsed_pmd(struct mm_struct *mm, unsigned long addr)
> >>>    	/* 4k page size, just blow the world */
> >>>    	if (PAGE_SIZE == 0x1000) {
> >>>    		radix__flush_all_mm(mm);
> >>> +		preempt_enable();
> >>>    		return;
> >>>    	}
> >>>    
> >> Can't we do a preempt_disable() before the if (local) check? That way we
> >> don't need that preempt_enable() in the PAGE_SIZE == 0x1000 path. We
> >> already do disable/enable correctly in radix__flush_all_mm(mm).
> > Well this is just to fix the imbalance. Nested preempt doesn't matter
> > much, and these are all no-ops for !preempt kernel, unless you turn on
> > debugging.
> 
> But this patch is still doing the mm_is_thread_local() test outside 
> preempt_disable() right?

Yes, as it does in some other places too. It's just a minimal fix for
the imbalance issue, as I said, because that's messing up debugging.

> > I already proposed another patch to bring those local tests under
> > preempt disable but no response yet
> >
> > https://patchwork.ozlabs.org/patch/811061/
> >
> 
> That is a much better patch?

I'm planning to repost the series but have been side-tracked testing it
due to hitting bugs (!preempt ones though, so this has not been top
priority).

Thanks,
Nick

Patch

diff --git a/arch/powerpc/mm/tlb-radix.c b/arch/powerpc/mm/tlb-radix.c
index b3e849c4886e..de414460287a 100644
--- a/arch/powerpc/mm/tlb-radix.c
+++ b/arch/powerpc/mm/tlb-radix.c
@@ -358,7 +358,7 @@  void radix__flush_tlb_collapsed_pmd(struct mm_struct *mm, unsigned long addr)
 	unsigned long ap = mmu_get_ap(mmu_virtual_psize);
 	unsigned long pid, end;
 
-
+	preempt_disable();
 	pid = mm ? mm->context.id : 0;
 	if (unlikely(pid == MMU_NO_CONTEXT))
 		goto no_context;
@@ -366,6 +366,7 @@  void radix__flush_tlb_collapsed_pmd(struct mm_struct *mm, unsigned long addr)
 	/* 4k page size, just blow the world */
 	if (PAGE_SIZE == 0x1000) {
 		radix__flush_all_mm(mm);
+		preempt_enable();
 		return;
 	}