[V3] powerpc/mm/hash64: memset the pagetable pages on allocation.

Message ID 20180213110933.5491-1-aneesh.kumar@linux.vnet.ibm.com (mailing list archive)
State Accepted
Commit fc5c2f4a55a2c258e12013cdf287cf266dbcd2a7

Commit Message

Aneesh Kumar K.V Feb. 13, 2018, 11:09 a.m. UTC
On powerpc we allocate page table pages from slab caches of different sizes. For
now we have a constructor that zeroes out the objects when we allocate them for
the first time. We expect the objects to be zeroed out when we free the object
back to the slab cache, which happens in the unmap path; for hugetlb pages we
call huge_pte_get_and_clear to do that. With the current configuration of page
table sizes, both pud and pgd level tables get allocated from the same slab
cache. At the pud level, we use the second half of the table to store slot
information, but we never clear it when unmapping. When such a freed object gets
allocated at the pgd level, part of the page table page is not initialized
correctly, and this results in a kernel crash.

Simplify this by doing the object initialization after kmem_cache_alloc().

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/book3s/64/pgalloc.h | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)
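
For context, the constructor mentioned in the commit message behaves roughly as
in the sketch below. This is a simplified illustration, not the exact powerpc
code (the real cache setup lives under arch/powerpc/mm; the cache name and
flags here are assumptions). The key slab property is that a constructor runs
only when an object's backing page is first populated, not on every
kmem_cache_alloc(), so an object freed with stale contents is handed out again
as-is:

#include <linux/init.h>
#include <linux/slab.h>
#include <linux/string.h>

/* Runs once per object, when its slab page is first set up,
 * NOT on every kmem_cache_alloc().  An object freed with stale
 * data in it is returned dirty on its next allocation. */
static void pgd_ctor(void *addr)
{
	memset(addr, 0, PGD_TABLE_SIZE);
}

static struct kmem_cache *pgd_cache;

static void __init pgd_cache_init_sketch(void)
{
	/* size == align keeps the tables naturally aligned */
	pgd_cache = kmem_cache_create("pgd-cache", PGD_TABLE_SIZE,
				      PGD_TABLE_SIZE, 0, pgd_ctor);
}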

Comments

Ram Pai Feb. 13, 2018, 7:51 p.m. UTC | #1
On Tue, Feb 13, 2018 at 04:39:33PM +0530, Aneesh Kumar K.V wrote:
> On powerpc we allocate page table pages from slab caches of different sizes. For
> now we have a constructor that zeroes out the objects when we allocate them for
> the first time. We expect the objects to be zeroed out when we free the object
> back to the slab cache, which happens in the unmap path; for hugetlb pages we
> call huge_pte_get_and_clear to do that. With the current configuration of page
> table sizes, both pud and pgd level tables get allocated from the same slab
> cache. At the pud level, we use the second half of the table to store slot
> information, but we never clear it when unmapping. When such a freed object gets
> allocated at the pgd level, part of the page table page is not initialized
> correctly, and this results in a kernel crash.
> 
> Simplify this by doing the object initialization after kmem_cache_alloc().
> 
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/book3s/64/pgalloc.h | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/book3s/64/pgalloc.h b/arch/powerpc/include/asm/book3s/64/pgalloc.h
> index 53df86d3cfce..e4d154a4d114 100644
> --- a/arch/powerpc/include/asm/book3s/64/pgalloc.h
> +++ b/arch/powerpc/include/asm/book3s/64/pgalloc.h
> @@ -73,10 +73,13 @@ static inline void radix__pgd_free(struct mm_struct *mm, pgd_t *pgd)
> 
>  static inline pgd_t *pgd_alloc(struct mm_struct *mm)
>  {
> +	pgd_t *pgd;
>  	if (radix_enabled())
>  		return radix__pgd_alloc(mm);
> -	return kmem_cache_alloc(PGT_CACHE(PGD_INDEX_SIZE),
> -		pgtable_gfp_flags(mm, GFP_KERNEL));

kmem_cache_zalloc() won't work?

RP

> +	pgd = kmem_cache_alloc(PGT_CACHE(PGD_INDEX_SIZE),
> +			       pgtable_gfp_flags(mm, GFP_KERNEL));
> +	memset(pgd, 0, PGD_TABLE_SIZE);
> +	return pgd;
>  }
> 
>  static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
> -- 
> 2.14.3
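
For reference, kmem_cache_zalloc() is a thin wrapper that ORs __GFP_ZERO into
the allocation flags; its definition in include/linux/slab.h is essentially:

static inline void *kmem_cache_zalloc(struct kmem_cache *k, gfp_t flags)
{
	return kmem_cache_alloc(k, flags | __GFP_ZERO);
}

A likely reason to avoid it here (an inference, not confirmed in this thread)
is that the PGT_CACHE caches are created with a constructor, and __GFP_ZERO is
not meant to be combined with constructor-backed slab caches.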
Michael Ellerman Feb. 14, 2018, 5:43 a.m. UTC | #2
On Tue, 2018-02-13 at 11:09:33 UTC, "Aneesh Kumar K.V" wrote:
> On powerpc we allocate page table pages from slab caches of different sizes. For
> now we have a constructor that zeroes out the objects when we allocate them for
> the first time. We expect the objects to be zeroed out when we free the object
> back to the slab cache, which happens in the unmap path; for hugetlb pages we
> call huge_pte_get_and_clear to do that. With the current configuration of page
> table sizes, both pud and pgd level tables get allocated from the same slab
> cache. At the pud level, we use the second half of the table to store slot
> information, but we never clear it when unmapping. When such a freed object gets
> allocated at the pgd level, part of the page table page is not initialized
> correctly, and this results in a kernel crash.
> 
> Simplify this by doing the object initialization after kmem_cache_alloc().
> 
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>

Applied to powerpc fixes, thanks.

https://git.kernel.org/powerpc/c/fc5c2f4a55a2c258e12013cdf287cf266dbcd2a7

cheers

Patch

diff --git a/arch/powerpc/include/asm/book3s/64/pgalloc.h b/arch/powerpc/include/asm/book3s/64/pgalloc.h
index 53df86d3cfce..e4d154a4d114 100644
--- a/arch/powerpc/include/asm/book3s/64/pgalloc.h
+++ b/arch/powerpc/include/asm/book3s/64/pgalloc.h
@@ -73,10 +73,13 @@ static inline void radix__pgd_free(struct mm_struct *mm, pgd_t *pgd)
 
 static inline pgd_t *pgd_alloc(struct mm_struct *mm)
 {
+	pgd_t *pgd;
 	if (radix_enabled())
 		return radix__pgd_alloc(mm);
-	return kmem_cache_alloc(PGT_CACHE(PGD_INDEX_SIZE),
-		pgtable_gfp_flags(mm, GFP_KERNEL));
+	pgd = kmem_cache_alloc(PGT_CACHE(PGD_INDEX_SIZE),
+			       pgtable_gfp_flags(mm, GFP_KERNEL));
+	memset(pgd, 0, PGD_TABLE_SIZE);
+	return pgd;
 }
 
 static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
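
To see why the constructor alone is not enough, the following standalone
userspace sketch (plain C, purely illustrative, no kernel APIs) mimics a cache
whose constructor runs only on first allocation. An object freed with stale
data comes back dirty, which is exactly the pud/pgd reuse problem the patch
fixes:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define OBJ_SIZE 64

struct toy_cache {
	void *free_obj;			/* one-slot free list */
};

/* Allocate from the cache; the "constructor" (the memset) runs only
 * when a brand-new object is created, never on reuse. */
static void *toy_alloc(struct toy_cache *c)
{
	void *obj;

	if (c->free_obj) {		/* reuse without re-initialising */
		obj = c->free_obj;
		c->free_obj = NULL;
		return obj;
	}
	obj = malloc(OBJ_SIZE);
	if (obj)
		memset(obj, 0, OBJ_SIZE);
	return obj;
}

/* Free to the cache; the caller is trusted to have cleared it. */
static void toy_free(struct toy_cache *c, void *obj)
{
	c->free_obj = obj;
}

int main(void)
{
	struct toy_cache cache = { NULL };
	unsigned char *p = toy_alloc(&cache);

	p[OBJ_SIZE / 2] = 0xaa;		/* "slot info" in the second half */
	toy_free(&cache, p);		/* freed without clearing it */

	p = toy_alloc(&cache);		/* reused object still holds 0xaa */
	printf("second half after realloc: %#x\n", p[OBJ_SIZE / 2]);
	free(p);
	return 0;
}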