diff mbox series

powerpc/mm/tlbflush: update the mmu_gather page size while iterating address range

Message ID 20180809133659.16230-1-aneesh.kumar@linux.ibm.com (mailing list archive)
State Accepted
Commit 0b6aa1a20add96437c46db77c9bae2d7529dfbc1
Series powerpc/mm/tlbflush: update the mmu_gather page size while iterating address range

Checks

Context Check Description
snowpatch_ozlabs/apply_patch success next/apply_patch Successfully applied
snowpatch_ozlabs/checkpatch warning Test checkpatch on branch next
snowpatch_ozlabs/build-ppc64le success Test build-ppc64le on branch next
snowpatch_ozlabs/build-ppc64be success Test build-ppc64be on branch next
snowpatch_ozlabs/build-ppc64e success Test build-ppc64e on branch next
snowpatch_ozlabs/build-ppc32 success Test build-ppc32 on branch next

Commit Message

Aneesh Kumar K V Aug. 9, 2018, 1:36 p.m. UTC
This patch makes sure we update the mmu_gather page size even if we are
requesting a fullmm flush. This avoids triggering the VM_WARN_ON in code
paths such as __tlb_remove_page_size(), which explicitly check that the page
size of the range being removed matches the mmu_gather page size.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/powerpc/include/asm/tlb.h | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

Comments

Nicholas Piggin Aug. 10, 2018, 12:58 a.m. UTC | #1
On Thu,  9 Aug 2018 19:06:59 +0530
"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> wrote:

> This patch makes sure we update the mmu_gather page size even if we are
> requesting a fullmm flush. This avoids triggering the VM_WARN_ON in code
> paths such as __tlb_remove_page_size(), which explicitly check that the page
> size of the range being removed matches the mmu_gather page size.
> 
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>

Acked-by: Nicholas Piggin <npiggin@gmail.com>

Thanks, sorry about that.

> ---
>  arch/powerpc/include/asm/tlb.h | 6 ++----
>  1 file changed, 2 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/tlb.h b/arch/powerpc/include/asm/tlb.h
> index 97ecef697e1b..f0e571b2dc7c 100644
> --- a/arch/powerpc/include/asm/tlb.h
> +++ b/arch/powerpc/include/asm/tlb.h
> @@ -49,13 +49,11 @@ static inline void __tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep,
>  static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
>  						     unsigned int page_size)
>  {
> -	if (tlb->fullmm)
> -		return;
> -
>  	if (!tlb->page_size)
>  		tlb->page_size = page_size;
>  	else if (tlb->page_size != page_size) {
> -		tlb_flush_mmu(tlb);
> +		if (!tlb->fullmm)
> +			tlb_flush_mmu(tlb);
>  		/*
>  		 * update the page size after flush for the new
>  		 * mmu_gather.
Michael Ellerman Aug. 10, 2018, 7:09 a.m. UTC | #2
"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> writes:

> This patch makes sure we update the mmu_gather page size even if we are
> requesting a fullmm flush. This avoids triggering the VM_WARN_ON in code
> paths such as __tlb_remove_page_size(), which explicitly check that the page
> size of the range being removed matches the mmu_gather page size.

I take it this is a fix for 5a6099346c41 ("powerpc/64s/radix: tlb do not flush on page size when fullmm") ?

cheers

> diff --git a/arch/powerpc/include/asm/tlb.h b/arch/powerpc/include/asm/tlb.h
> index 97ecef697e1b..f0e571b2dc7c 100644
> --- a/arch/powerpc/include/asm/tlb.h
> +++ b/arch/powerpc/include/asm/tlb.h
> @@ -49,13 +49,11 @@ static inline void __tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep,
>  static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
>  						     unsigned int page_size)
>  {
> -	if (tlb->fullmm)
> -		return;
> -
>  	if (!tlb->page_size)
>  		tlb->page_size = page_size;
>  	else if (tlb->page_size != page_size) {
> -		tlb_flush_mmu(tlb);
> +		if (!tlb->fullmm)
> +			tlb_flush_mmu(tlb);
>  		/*
>  		 * update the page size after flush for the new
>  		 * mmu_gather.
> -- 
> 2.17.1
Michael Ellerman Aug. 13, 2018, 11:23 a.m. UTC | #3
On Thu, 2018-08-09 at 13:36:59 UTC, "Aneesh Kumar K.V" wrote:
> This patch makes sure we update the mmu_gather page size even if we are
> requesting a fullmm flush. This avoids triggering the VM_WARN_ON in code
> paths such as __tlb_remove_page_size(), which explicitly check that the page
> size of the range being removed matches the mmu_gather page size.
> 
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> Acked-by: Nicholas Piggin <npiggin@gmail.com>

Applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/0b6aa1a20add96437c46db77c9bae2

cheers

Patch

diff --git a/arch/powerpc/include/asm/tlb.h b/arch/powerpc/include/asm/tlb.h
index 97ecef697e1b..f0e571b2dc7c 100644
--- a/arch/powerpc/include/asm/tlb.h
+++ b/arch/powerpc/include/asm/tlb.h
@@ -49,13 +49,11 @@  static inline void __tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep,
 static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 						     unsigned int page_size)
 {
-	if (tlb->fullmm)
-		return;
-
 	if (!tlb->page_size)
 		tlb->page_size = page_size;
 	else if (tlb->page_size != page_size) {
-		tlb_flush_mmu(tlb);
+		if (!tlb->fullmm)
+			tlb_flush_mmu(tlb);
 		/*
 		 * update the page size after flush for the new
 		 * mmu_gather.