Patchwork [-V5,04/25] powerpc: Reduce the PTE_INDEX_SIZE

Submitter Aneesh Kumar K.V
Date April 4, 2013, 5:57 a.m.
Message ID <1365055083-31956-5-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
Permalink /patch/233627/
State Changes Requested, archived
Delegated to: Michael Ellerman

Comments

Aneesh Kumar K.V - April 4, 2013, 5:57 a.m.
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

This makes one PMD cover a 16MB range, which allows a simpler implementation of
THP on power: the THP core code uses one pmd entry to track a hugepage, so the
range mapped by a single pmd entry must equal the hugepage size supported by
the hardware.

Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/pgtable-ppc64-64k.h |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
David Gibson - April 11, 2013, 7:10 a.m.
On Thu, Apr 04, 2013 at 11:27:42AM +0530, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> 
> This makes one PMD cover a 16MB range, which allows a simpler implementation of
> THP on power: the THP core code uses one pmd entry to track a hugepage, so the
> range mapped by a single pmd entry must equal the hugepage size supported by
> the hardware.
> 
> Acked-by: Paul Mackerras <paulus@samba.org>
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/pgtable-ppc64-64k.h |    4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/pgtable-ppc64-64k.h b/arch/powerpc/include/asm/pgtable-ppc64-64k.h
> index be4e287..3c529b4 100644
> --- a/arch/powerpc/include/asm/pgtable-ppc64-64k.h
> +++ b/arch/powerpc/include/asm/pgtable-ppc64-64k.h
> @@ -4,10 +4,10 @@
>  #include <asm-generic/pgtable-nopud.h>
>  
>  
> -#define PTE_INDEX_SIZE  12
> +#define PTE_INDEX_SIZE  8
>  #define PMD_INDEX_SIZE  12
>  #define PUD_INDEX_SIZE	0
> -#define PGD_INDEX_SIZE  6
> +#define PGD_INDEX_SIZE  10
>  
>  #ifndef __ASSEMBLY__
>  #define PTE_TABLE_SIZE	(sizeof(real_pte_t) << PTE_INDEX_SIZE)

Actually, I've realised there's a much more serious problem here.
This patch as-is will break existing hugepage support.  With the
previous numbers we had pagetable levels covering 256M and 1TB.  That
meant that at whichever level we split off a hugepd, it would line up
with the slice/segment boundaries.  Now it won't, and that means that
(explicitly) mapping hugepages and normal pages with correctly
constructed alignments will lead to the normal page fault paths
attempting to walk down hugepds, or vice versa, which will cause
crashes.

In fact.. with the new boundaries, we will attempt to put explicit 16M
hugepages in a hugepd of 4096 entries covering a total of 64G.  Which
means any attempt to use explicit hugepages in a 32-bit process will
blow up horribly.

The obvious solution is to make explicit hugepages also use your new
hugepage encoding, as a PMD entry pointing directly to the page data.
That's also a good idea, to avoid yet more variants on the pagetable
encoding.  But this conversion of the explicit hugepage code really
needs to be done before attempting to implement THP.

Patch

diff --git a/arch/powerpc/include/asm/pgtable-ppc64-64k.h b/arch/powerpc/include/asm/pgtable-ppc64-64k.h
index be4e287..3c529b4 100644
--- a/arch/powerpc/include/asm/pgtable-ppc64-64k.h
+++ b/arch/powerpc/include/asm/pgtable-ppc64-64k.h
@@ -4,10 +4,10 @@ 
 #include <asm-generic/pgtable-nopud.h>
 
 
-#define PTE_INDEX_SIZE  12
+#define PTE_INDEX_SIZE  8
 #define PMD_INDEX_SIZE  12
 #define PUD_INDEX_SIZE	0
-#define PGD_INDEX_SIZE  6
+#define PGD_INDEX_SIZE  10
 
 #ifndef __ASSEMBLY__
 #define PTE_TABLE_SIZE	(sizeof(real_pte_t) << PTE_INDEX_SIZE)