Patchwork powerpc/mm: Fix potential access to freed pages when using hugetlbfs

Submitter Benjamin Herrenschmidt
Date June 16, 2009, 2:53 a.m.
Message ID <20090616025419.A5581DDD1B@ozlabs.org>
Permalink /patch/28714/
State Accepted, archived
Commit 6c16a74d423f584ed80815ee7b944f5b578dd37a
Delegated to: Benjamin Herrenschmidt
Headers show

Comments

Benjamin Herrenschmidt - June 16, 2009, 2:53 a.m.
When using 64k page sizes, our PTE pages are split in two halves,
the second half containing the "extension" used to keep track of
individual 4k pages when not using HW 64k pages.

However, our page tables used for hugetlb have a slightly different
format and don't carry that "second half".

Our code that batches PTEs to be invalidated unconditionally reads
the "second half" (to put it into the batch), which means that when
called to invalidate hugetlb PTEs, it will access unrelated memory.

It breaks when CONFIG_DEBUG_PAGEALLOC is enabled.

This fixes it by only accessing the second half when the _PAGE_COMBO
bit is set in the first half, which indicates that we are dealing with
a "combo" page representing 16x4k subpages. Anything else shouldn't
have this bit set and thus doesn't require loading from the second half.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---


 arch/powerpc/include/asm/pte-hash64-64k.h |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
Sachin P. Sant - June 17, 2009, 9:18 a.m.
Benjamin Herrenschmidt wrote:
> When using 64k page sizes, our PTE pages are split in two halves,
> [full patch description snipped]
>
> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Thanks for the patch. The machine survived two days of
hugetlbfs testing.


Regards
-Sachin
Benjamin Herrenschmidt - June 17, 2009, 10:11 a.m.
> Thanks for the patch. The machine survived after two days of
> testing with hugetlbfs tests.

Excellent, thanks for testing, I'll merge it with my next batch.

Cheers,
Ben.

Patch

--- linux-work.orig/arch/powerpc/include/asm/pte-hash64-64k.h	2009-06-16 11:27:05.000000000 +1000
+++ linux-work/arch/powerpc/include/asm/pte-hash64-64k.h	2009-06-16 12:03:29.000000000 +1000
@@ -47,7 +47,8 @@ 
  * generic accessors and iterators here
  */
 #define __real_pte(e,p) 	((real_pte_t) { \
-	(e), pte_val(*((p) + PTRS_PER_PTE)) })
+			(e), ((e) & _PAGE_COMBO) ? \
+				(pte_val(*((p) + PTRS_PER_PTE))) : 0 })
 #define __rpte_to_hidx(r,index)	((pte_val((r).pte) & _PAGE_COMBO) ? \
         (((r).hidx >> ((index)<<2)) & 0xf) : ((pte_val((r).pte) >> 12) & 0xf))
 #define __rpte_to_pte(r)	((r).pte)