Patchwork Reduce hashtable size when using 64kB pages

Submitter Anton Blanchard
Date Feb. 13, 2009, 9:57 p.m.
Message ID <20090213215730.GF23273@kryten>
Permalink /patch/23132/
State Accepted, archived
Commit 13870b657578bcce167978ee93dc02bf54e3beb0
Delegated to: Benjamin Herrenschmidt

Comments

Anton Blanchard - Feb. 13, 2009, 9:57 p.m.
At the moment we size the hashtable based on 4kB pages / 2, even on a
64kB kernel. This results in a hashtable that is much larger than it
needs to be.

Grab the real page size and size the hashtable based on that. Note: this
only works on non-hypervisor machines.

Signed-off-by: Anton Blanchard <anton@samba.org>
---

Patch

diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index 8d5b475..f5bc1b2 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -516,7 +516,7 @@  static int __init htab_dt_scan_pftsize(unsigned long node,
 
 static unsigned long __init htab_get_table_size(void)
 {
-	unsigned long mem_size, rnd_mem_size, pteg_count;
+	unsigned long mem_size, rnd_mem_size, pteg_count, psize;
 
 	/* If hash size isn't already provided by the platform, we try to
 	 * retrieve it from the device-tree. If it's not there neither, we
@@ -534,7 +534,8 @@  static unsigned long __init htab_get_table_size(void)
 		rnd_mem_size <<= 1;
 
 	/* # pages / 2 */
-	pteg_count = max(rnd_mem_size >> (12 + 1), 1UL << 11);
+	psize = mmu_psize_defs[mmu_virtual_psize].shift;
+	pteg_count = max(rnd_mem_size >> (psize + 1), 1UL << 11);
 
 	return pteg_count << 7;
 }