get:
Show a patch.

patch:
Partially update a patch (only the fields supplied are changed).

put:
Update a patch.

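As a sketch of how a client might drive this endpoint, the following builds an authenticated PATCH request with Python's standard library. The token value is a placeholder and nothing is actually sent; a real call would pass the request to `urllib.request.urlopen()` with a valid Patchwork API token:

```python
import json
import urllib.request

# Placeholder values -- substitute a real API token and patch URL.
TOKEN = "0123456789abcdef"
url = "https://patchwork.ozlabs.org/api/patches/1167/"

# PATCH is a partial update: send only the fields being changed.
body = json.dumps({"state": "accepted"}).encode("utf-8")
req = urllib.request.Request(
    url,
    data=body,
    method="PATCH",
    headers={
        "Authorization": f"Token {TOKEN}",
        "Content-Type": "application/json",
    },
)

# urllib.request.urlopen(req) would perform the call; constructing the
# request is enough to show the shape of it here.
print(req.get_method())                  # PATCH
print(req.get_header("Content-type"))    # application/json
```

A PUT would have the same shape but must carry every writable field, not just the ones being changed.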
GET /api/patches/1167/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 1167,
    "url": "http://patchwork.ozlabs.org/api/patches/1167/?format=api",
    "web_url": "http://patchwork.ozlabs.org/project/linuxppc-dev/patch/1222201125-15690-1-git-send-email-becky.bruce@freescale.com/",
    "project": {
        "id": 2,
        "url": "http://patchwork.ozlabs.org/api/projects/2/?format=api",
        "name": "Linux PPC development",
        "link_name": "linuxppc-dev",
        "list_id": "linuxppc-dev.lists.ozlabs.org",
        "list_email": "linuxppc-dev@lists.ozlabs.org",
        "web_url": "https://github.com/linuxppc/wiki/wiki",
        "scm_url": "https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git",
        "webscm_url": "https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/",
        "list_archive_url": "https://lore.kernel.org/linuxppc-dev/",
        "list_archive_url_format": "https://lore.kernel.org/linuxppc-dev/{}/",
        "commit_url_format": "https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/commit/?id={}"
    },
    "msgid": "<1222201125-15690-1-git-send-email-becky.bruce@freescale.com>",
    "list_archive_url": "https://lore.kernel.org/linuxppc-dev/1222201125-15690-1-git-send-email-becky.bruce@freescale.com/",
    "date": "2008-09-23T20:18:45",
    "name": "[v5] POWERPC: Allow 32-bit hashed pgtable code to support 36-bit physical",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "b9c22266347a84730b400c98eb7c4a67adbc7754",
    "submitter": {
        "id": 12,
        "url": "http://patchwork.ozlabs.org/api/people/12/?format=api",
        "name": "Becky Bruce",
        "email": "becky.bruce@freescale.com"
    },
    "delegate": null,
    "mbox": "http://patchwork.ozlabs.org/project/linuxppc-dev/patch/1222201125-15690-1-git-send-email-becky.bruce@freescale.com/mbox/",
    "series": [],
    "comments": "http://patchwork.ozlabs.org/api/patches/1167/comments/",
    "check": "pending",
    "checks": "http://patchwork.ozlabs.org/api/patches/1167/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@ozlabs.org>",
        "X-Original-To": [
            "patchwork-incoming@ozlabs.org",
            "linuxppc-dev@ozlabs.org"
        ],
        "Delivered-To": [
            "patchwork-incoming@ozlabs.org",
            "linuxppc-dev@ozlabs.org"
        ],
        "Received": [
            "from ozlabs.org (localhost [127.0.0.1])\n\tby ozlabs.org (Postfix) with ESMTP id 5F3F6DE7FA\n\tfor <patchwork-incoming@ozlabs.org>;\n\tWed, 24 Sep 2008 06:19:07 +1000 (EST)",
            "from az33egw02.freescale.net (az33egw02.freescale.net\n\t[192.88.158.103])\n\t(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))\n\t(Client CN \"az33egw02.freescale.net\",\n\tIssuer \"Thawte Premium Server CA\" (verified OK))\n\tby ozlabs.org (Postfix) with ESMTPS id 58B51DE227\n\tfor <linuxppc-dev@ozlabs.org>; Wed, 24 Sep 2008 06:18:53 +1000 (EST)",
            "from az33smr01.freescale.net (az33smr01.freescale.net\n\t[10.64.34.199])\n\tby az33egw02.freescale.net (8.12.11/az33egw02) with ESMTP id\n\tm8NKIku8009816\n\tfor <linuxppc-dev@ozlabs.org>; Tue, 23 Sep 2008 13:18:46 -0700 (MST)",
            "from blarg.am.freescale.net (blarg.am.freescale.net [10.82.19.176])\n\tby az33smr01.freescale.net (8.13.1/8.13.0) with ESMTP id\n\tm8NKIjP1006548\n\tfor <linuxppc-dev@ozlabs.org>; Tue, 23 Sep 2008 15:18:46 -0500 (CDT)",
            "from blarg.am.freescale.net (localhost.localdomain [127.0.0.1])\n\tby blarg.am.freescale.net (8.14.2/8.14.2) with ESMTP id\n\tm8NKIjeW015713; Tue, 23 Sep 2008 15:18:45 -0500",
            "(from bgill@localhost)\n\tby blarg.am.freescale.net (8.14.2/8.14.2/Submit) id m8NKIje8015711;\n\tTue, 23 Sep 2008 15:18:45 -0500"
        ],
        "From": "Becky Bruce <becky.bruce@freescale.com>",
        "To": "linuxppc-dev@ozlabs.org",
        "Subject": "[PATCH v5] POWERPC: Allow 32-bit hashed pgtable code to support\n\t36-bit physical",
        "Date": "Tue, 23 Sep 2008 15:18:45 -0500",
        "Message-Id": "<1222201125-15690-1-git-send-email-becky.bruce@freescale.com>",
        "X-Mailer": "git-send-email 1.5.5.1",
        "X-BeenThere": "linuxppc-dev@ozlabs.org",
        "X-Mailman-Version": "2.1.11",
        "Precedence": "list",
        "List-Id": "Linux on PowerPC Developers Mail List <linuxppc-dev.ozlabs.org>",
        "List-Unsubscribe": "<https://ozlabs.org/mailman/options/linuxppc-dev>,\n\t<mailto:linuxppc-dev-request@ozlabs.org?subject=unsubscribe>",
        "List-Archive": "<http://ozlabs.org/pipermail/linuxppc-dev>",
        "List-Post": "<mailto:linuxppc-dev@ozlabs.org>",
        "List-Help": "<mailto:linuxppc-dev-request@ozlabs.org?subject=help>",
        "List-Subscribe": "<https://ozlabs.org/mailman/listinfo/linuxppc-dev>,\n\t<mailto:linuxppc-dev-request@ozlabs.org?subject=subscribe>",
        "MIME-Version": "1.0",
        "Content-Type": "text/plain; charset=\"us-ascii\"",
        "Content-Transfer-Encoding": "7bit",
        "Sender": "linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@ozlabs.org",
        "Errors-To": "linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@ozlabs.org"
    },
    "content": "This rearranges a bit of code, and adds support for\n36-bit physical addressing for configs that use a\nhashed page table.  The 36b physical support is not\nenabled by default on any config - it must be\nexplicitly enabled via the config system.\n\nThis patch *only* expands the page table code to accomodate\nlarge physical addresses on 32-bit systems and enables the\nPHYS_64BIT config option for 86xx.  It does *not*\nallow you to boot a board with more than about 3.5GB of\nRAM - for that, SWIOTLB support is also required (and\ncoming soon).\n\nSigned-off-by: Becky Bruce <becky.bruce@freescale.com>\n---\nThis patch depends on Kumar's recent patch\n[PATCH v9 2/4] powerpc: Fixes for CONFIG_PTE_64BIT for SMP support -\nwe've both made modifications to set_pte_at.\n\n arch/powerpc/include/asm/io.h            |    2 +-\n arch/powerpc/include/asm/page_32.h       |    8 +++-\n arch/powerpc/include/asm/pgtable-ppc32.h |   17 +++++-\n arch/powerpc/kernel/asm-offsets.c        |    1 +\n arch/powerpc/kernel/head_32.S            |    4 +-\n arch/powerpc/kernel/head_fsl_booke.S     |    2 -\n arch/powerpc/kernel/ppc_ksyms.c          |    1 +\n arch/powerpc/mm/hash_low_32.S            |   86 ++++++++++++++++++++++++------\n arch/powerpc/mm/pgtable_32.c             |    4 +-\n arch/powerpc/platforms/Kconfig.cputype   |   17 ++++---\n 10 files changed, 109 insertions(+), 33 deletions(-)",
    "diff": "diff --git a/arch/powerpc/include/asm/io.h b/arch/powerpc/include/asm/io.h\nindex 77c7fa0..08266d2 100644\n--- a/arch/powerpc/include/asm/io.h\n+++ b/arch/powerpc/include/asm/io.h\n@@ -711,7 +711,7 @@ static inline void * phys_to_virt(unsigned long address)\n /*\n  * Change \"struct page\" to physical address.\n  */\n-#define page_to_phys(page)\t(page_to_pfn(page) << PAGE_SHIFT)\n+#define page_to_phys(page)\t((phys_addr_t)page_to_pfn(page) << PAGE_SHIFT)\n \n /* We do NOT want virtual merging, it would put too much pressure on\n  * our iommu allocator. Instead, we want drivers to be smart enough\ndiff --git a/arch/powerpc/include/asm/page_32.h b/arch/powerpc/include/asm/page_32.h\nindex ebfae53..d77072a 100644\n--- a/arch/powerpc/include/asm/page_32.h\n+++ b/arch/powerpc/include/asm/page_32.h\n@@ -13,10 +13,16 @@\n #define ARCH_KMALLOC_MINALIGN\tL1_CACHE_BYTES\n #endif\n \n+#ifdef CONFIG_PTE_64BIT\n+#define PTE_FLAGS_OFFSET\t4\t/* offset of PTE flags, in bytes */\n+#else\n+#define PTE_FLAGS_OFFSET\t0\n+#endif\n+\n #ifndef __ASSEMBLY__\n /*\n  * The basic type of a PTE - 64 bits for those CPUs with > 32 bit\n- * physical addressing.  For now this just the IBM PPC440.\n+ * physical addressing.\n */\n #ifdef CONFIG_PTE_64BIT\n typedef unsigned long long pte_basic_t;\ndiff --git a/arch/powerpc/include/asm/pgtable-ppc32.h b/arch/powerpc/include/asm/pgtable-ppc32.h\nindex c2e58b4..29c83d8 100644\n--- a/arch/powerpc/include/asm/pgtable-ppc32.h\n+++ b/arch/powerpc/include/asm/pgtable-ppc32.h\n@@ -369,7 +369,12 @@ extern int icache_44x_need_flush;\n #define _PAGE_RW\t0x400\t/* software: user write access allowed */\n #define _PAGE_SPECIAL\t0x800\t/* software: Special page */\n \n+#ifdef CONFIG_PTE_64BIT\n+/* We never clear the high word of the pte */\n+#define _PTE_NONE_MASK\t(0xffffffff00000000ULL | _PAGE_HASHPTE)\n+#else\n #define _PTE_NONE_MASK\t_PAGE_HASHPTE\n+#endif\n \n #define _PMD_PRESENT\t0\n #define _PMD_PRESENT_MASK (PAGE_MASK)\n@@ -587,6 +592,10 @@ extern int flush_hash_pages(unsigned context, unsigned long va,\n extern void add_hash_page(unsigned context, unsigned long va,\n \t\t\t  unsigned long pmdval);\n \n+/* Flush an entry from the TLB/hash table */\n+extern void flush_hash_entry(struct mm_struct *mm, pte_t *ptep,\n+\t\t\t     unsigned long address);\n+\n /*\n  * Atomic PTE updates.\n  *\n@@ -665,9 +674,13 @@ static inline unsigned long long pte_update(pte_t *p,\n static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,\n \t\t\t      pte_t *ptep, pte_t pte)\n {\n-#if _PAGE_HASHPTE != 0\n+#if (_PAGE_HASHPTE != 0) && defined(CONFIG_SMP) && !defined(CONFIG_PTE_64BIT)\n \tpte_update(ptep, ~_PAGE_HASHPTE, pte_val(pte) & ~_PAGE_HASHPTE);\n #elif defined(CONFIG_PTE_64BIT) && defined(CONFIG_SMP)\n+#if _PAGE_HASHPTE != 0\n+\tif (pte_val(*ptep) & _PAGE_HASHPTE)\n+\t\tflush_hash_entry(mm, ptep, addr);\n+#endif\n \t__asm__ __volatile__(\"\\\n \t\tstw%U0%X0 %2,%0\\n\\\n \t\teieio\\n\\\n@@ -675,7 +688,7 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,\n \t: \"=m\" (*ptep), \"=m\" (*((unsigned char *)ptep+4))\n \t: \"r\" (pte) : \"memory\");\n #else\n-\t*ptep = pte;\n+\t*ptep = (*ptep & _PAGE_HASHPTE) | (pte & ~_PAGE_HASHPTE);\n #endif\n }\n \ndiff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c\nindex e9c4044..09febc5 100644\n--- a/arch/powerpc/kernel/asm-offsets.c\n+++ b/arch/powerpc/kernel/asm-offsets.c\n@@ -352,6 +352,7 @@ int main(void)\n #endif\n \n \tDEFINE(PGD_TABLE_SIZE, PGD_TABLE_SIZE);\n+\tDEFINE(PTE_SIZE, sizeof(pte_t));\n \n #ifdef CONFIG_KVM\n \tDEFINE(TLBE_BYTES, sizeof(struct tlbe));\ndiff --git a/arch/powerpc/kernel/head_32.S b/arch/powerpc/kernel/head_32.S\nindex 8bb6575..a6de6db 100644\n--- a/arch/powerpc/kernel/head_32.S\n+++ b/arch/powerpc/kernel/head_32.S\n@@ -369,13 +369,13 @@ i##n:\t\t\t\t\t\t\t\t\\\n DataAccess:\n \tEXCEPTION_PROLOG\n \tmfspr\tr10,SPRN_DSISR\n+\tstw\tr10,_DSISR(r11)\n \tandis.\tr0,r10,0xa470\t\t/* weird error? */\n \tbne\t1f\t\t\t/* if not, try to put a PTE */\n \tmfspr\tr4,SPRN_DAR\t\t/* into the hash table */\n \trlwinm\tr3,r10,32-15,21,21\t/* DSISR_STORE -> _PAGE_RW */\n \tbl\thash_page\n-1:\tstw\tr10,_DSISR(r11)\n-\tmr\tr5,r10\n+1:\tlwz\tr5,_DSISR(r11)\t\t/* get DSISR value */\n \tmfspr\tr4,SPRN_DAR\n \tEXC_XFER_EE_LITE(0x300, handle_page_fault)\n \ndiff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S\nindex 377e0c1..18c0093 100644\n--- a/arch/powerpc/kernel/head_fsl_booke.S\n+++ b/arch/powerpc/kernel/head_fsl_booke.S\n@@ -422,7 +422,6 @@ skpinv:\taddi\tr6,r6,1\t\t\t\t/* Increment */\n  *   r12 is pointer to the pte\n  */\n #ifdef CONFIG_PTE_64BIT\n-#define PTE_FLAGS_OFFSET\t4\n #define FIND_PTE\t\\\n \trlwinm\tr12, r10, 13, 19, 29;\t/* Compute pgdir/pmd offset */\t\\\n \tlwzx\tr11, r12, r11;\t\t/* Get pgd/pmd entry */\t\t\\\n@@ -431,7 +430,6 @@ skpinv:\taddi\tr6,r6,1\t\t\t\t/* Increment */\n \trlwimi\tr12, r10, 23, 20, 28;\t/* Compute pte address */\t\\\n \tlwz\tr11, 4(r12);\t\t/* Get pte entry */\n #else\n-#define PTE_FLAGS_OFFSET\t0\n #define FIND_PTE\t\\\n \trlwimi\tr11, r10, 12, 20, 29;\t/* Create L1 (pgdir/pmd) address */\t\\\n \tlwz\tr11, 0(r11);\t\t/* Get L1 entry */\t\t\t\\\ndiff --git a/arch/powerpc/kernel/ppc_ksyms.c b/arch/powerpc/kernel/ppc_ksyms.c\nindex 8edc235..5682edf 100644\n--- a/arch/powerpc/kernel/ppc_ksyms.c\n+++ b/arch/powerpc/kernel/ppc_ksyms.c\n@@ -116,6 +116,7 @@ EXPORT_SYMBOL(giveup_spe);\n \n #ifndef CONFIG_PPC64\n EXPORT_SYMBOL(flush_instruction_cache);\n+EXPORT_SYMBOL(flush_hash_entry);\n EXPORT_SYMBOL(flush_tlb_kernel_range);\n EXPORT_SYMBOL(flush_tlb_page);\n EXPORT_SYMBOL(_tlbie);\ndiff --git a/arch/powerpc/mm/hash_low_32.S b/arch/powerpc/mm/hash_low_32.S\nindex c41d658..7bffb70 100644\n--- a/arch/powerpc/mm/hash_low_32.S\n+++ b/arch/powerpc/mm/hash_low_32.S\n@@ -75,7 +75,7 @@ _GLOBAL(hash_page_sync)\n  * Returns to the caller if the access is illegal or there is no\n  * mapping for the address.  Otherwise it places an appropriate PTE\n  * in the hash table and returns from the exception.\n- * Uses r0, r3 - r8, ctr, lr.\n+ * Uses r0, r3 - r8, r10, ctr, lr.\n  */\n \t.text\n _GLOBAL(hash_page)\n@@ -106,9 +106,15 @@ _GLOBAL(hash_page)\n \taddi\tr5,r5,swapper_pg_dir@l\t/* kernel page table */\n \trlwimi\tr3,r9,32-12,29,29\t/* MSR_PR -> _PAGE_USER */\n 112:\tadd\tr5,r5,r7\t\t/* convert to phys addr */\n+#ifndef CONFIG_PTE_64BIT\n \trlwimi\tr5,r4,12,20,29\t\t/* insert top 10 bits of address */\n \tlwz\tr8,0(r5)\t\t/* get pmd entry */\n \trlwinm.\tr8,r8,0,0,19\t\t/* extract address of pte page */\n+#else\n+\trlwinm\tr8,r4,13,19,29\t\t/* Compute pgdir/pmd offset */\n+\tlwzx\tr8,r8,r5\t\t/* Get L1 entry */\n+\trlwinm.\tr8,r8,0,0,20\t\t/* extract pt base address */\n+#endif\n #ifdef CONFIG_SMP\n \tbeq-\thash_page_out\t\t/* return if no mapping */\n #else\n@@ -118,7 +124,11 @@ _GLOBAL(hash_page)\n \t   to the address following the rfi. */\n \tbeqlr-\n #endif\n+#ifndef CONFIG_PTE_64BIT\n \trlwimi\tr8,r4,22,20,29\t\t/* insert next 10 bits of address */\n+#else\n+\trlwimi\tr8,r4,23,20,28\t\t/* compute pte address */\n+#endif\n \trlwinm\tr0,r3,32-3,24,24\t/* _PAGE_RW access -> _PAGE_DIRTY */\n \tori\tr0,r0,_PAGE_ACCESSED|_PAGE_HASHPTE\n \n@@ -127,9 +137,15 @@ _GLOBAL(hash_page)\n \t * because almost always, there won't be a permission violation\n \t * and there won't already be an HPTE, and thus we will have\n \t * to update the PTE to set _PAGE_HASHPTE.  -- paulus.\n+\t *\n+\t * If PTE_64BIT is set, the low word is the flags word; use that\n+\t * word for locking since it contains all the interesting bits.\n \t */\n+#if (PTE_FLAGS_OFFSET != 0)\n+\taddi\tr8,r8,PTE_FLAGS_OFFSET\n+#endif\n retry:\n-\tlwarx\tr6,0,r8\t\t\t/* get linux-style pte */\n+\tlwarx\tr6,0,r8\t\t\t/* get linux-style pte, flag word */\n \tandc.\tr5,r3,r6\t\t/* check access & ~permission */\n #ifdef CONFIG_SMP\n \tbne-\thash_page_out\t\t/* return if access not permitted */\n@@ -137,6 +153,15 @@ retry:\n \tbnelr-\n #endif\n \tor\tr5,r0,r6\t\t/* set accessed/dirty bits */\n+#ifdef CONFIG_PTE_64BIT\n+#ifdef CONFIG_SMP\n+\tsubf\tr10,r6,r8\t\t/* create false data dependency */\n+\tsubi\tr10,r10,PTE_FLAGS_OFFSET\n+\tlwzx\tr10,r6,r10\t\t/* Get upper PTE word */\n+#else\n+\tlwz\tr10,-PTE_FLAGS_OFFSET(r8)\n+#endif /* CONFIG_SMP */\n+#endif /* CONFIG_PTE_64BIT */\n \tstwcx.\tr5,0,r8\t\t\t/* attempt to update PTE */\n \tbne-\tretry\t\t\t/* retry if someone got there first */\n \n@@ -203,9 +228,9 @@ _GLOBAL(add_hash_page)\n \t * we can't take a hash table miss (assuming the code is\n \t * covered by a BAT).  -- paulus\n \t */\n-\tmfmsr\tr10\n+\tmfmsr\tr9\n \tSYNC\n-\trlwinm\tr0,r10,0,17,15\t\t/* clear bit 16 (MSR_EE) */\n+\trlwinm\tr0,r9,0,17,15\t\t/* clear bit 16 (MSR_EE) */\n \trlwinm\tr0,r0,0,28,26\t\t/* clear MSR_DR */\n \tmtmsr\tr0\n \tSYNC_601\n@@ -214,14 +239,14 @@ _GLOBAL(add_hash_page)\n \ttophys(r7,0)\n \n #ifdef CONFIG_SMP\n-\taddis\tr9,r7,mmu_hash_lock@ha\n-\taddi\tr9,r9,mmu_hash_lock@l\n-10:\tlwarx\tr0,0,r9\t\t\t/* take the mmu_hash_lock */\n+\taddis\tr6,r7,mmu_hash_lock@ha\n+\taddi\tr6,r6,mmu_hash_lock@l\n+10:\tlwarx\tr0,0,r6\t\t\t/* take the mmu_hash_lock */\n \tcmpi\t0,r0,0\n \tbne-\t11f\n-\tstwcx.\tr8,0,r9\n+\tstwcx.\tr8,0,r6\n \tbeq+\t12f\n-11:\tlwz\tr0,0(r9)\n+11:\tlwz\tr0,0(r6)\n \tcmpi\t0,r0,0\n \tbeq\t10b\n \tb\t11b\n@@ -234,10 +259,24 @@ _GLOBAL(add_hash_page)\n \t * HPTE, so we just unlock and return.\n \t */\n \tmr\tr8,r5\n+#ifndef CONFIG_PTE_64BIT\n \trlwimi\tr8,r4,22,20,29\n+#else\n+\trlwimi\tr8,r4,23,20,28\n+\taddi\tr8,r8,PTE_FLAGS_OFFSET\n+#endif\n 1:\tlwarx\tr6,0,r8\n \tandi.\tr0,r6,_PAGE_HASHPTE\n \tbne\t9f\t\t\t/* if HASHPTE already set, done */\n+#ifdef CONFIG_PTE_64BIT\n+#ifdef CONFIG_SMP\n+\tsubf\tr10,r6,r8\t\t/* create false data dependency */\n+\tsubi\tr10,r10,PTE_FLAGS_OFFSET\n+\tlwzx\tr10,r6,r10\t\t/* Get upper PTE word */\n+#else\n+\tlwz\tr10,-PTE_FLAGS_OFFSET(r8)\n+#endif /* CONFIG_SMP */\n+#endif /* CONFIG_PTE_64BIT */\n \tori\tr5,r6,_PAGE_HASHPTE\n \tstwcx.\tr5,0,r8\n \tbne-\t1b\n@@ -246,13 +285,15 @@ _GLOBAL(add_hash_page)\n \n 9:\n #ifdef CONFIG_SMP\n+\taddis\tr6,r7,mmu_hash_lock@ha\n+\taddi\tr6,r6,mmu_hash_lock@l\n \teieio\n \tli\tr0,0\n-\tstw\tr0,0(r9)\t\t/* clear mmu_hash_lock */\n+\tstw\tr0,0(r6)\t\t/* clear mmu_hash_lock */\n #endif\n \n \t/* reenable interrupts and DR */\n-\tmtmsr\tr10\n+\tmtmsr\tr9\n \tSYNC_601\n \tisync\n \n@@ -267,7 +308,8 @@ _GLOBAL(add_hash_page)\n  * r5 contains the linux PTE, r6 contains the old value of the\n  * linux PTE (before setting _PAGE_HASHPTE) and r7 contains the\n  * offset to be added to addresses (0 if the MMU is on,\n- * -KERNELBASE if it is off).\n+ * -KERNELBASE if it is off).  r10 contains the upper half of\n+ * the PTE if CONFIG_PTE_64BIT.\n  * On SMP, the caller should have the mmu_hash_lock held.\n  * We assume that the caller has (or will) set the _PAGE_HASHPTE\n  * bit in the linux PTE in memory.  The value passed in r6 should\n@@ -313,6 +355,11 @@ _GLOBAL(create_hpte)\n BEGIN_FTR_SECTION\n \tori\tr8,r8,_PAGE_COHERENT\t/* set M (coherence required) */\n END_FTR_SECTION_IFSET(CPU_FTR_NEED_COHERENT)\n+#ifdef CONFIG_PTE_64BIT\n+\t/* Put the XPN bits into the PTE */\n+\trlwimi\tr8,r10,8,20,22\n+\trlwimi\tr8,r10,2,29,29\n+#endif\n \n \t/* Construct the high word of the PPC-style PTE (r5) */\n \trlwinm\tr5,r3,7,1,24\t\t/* put VSID in 0x7fffff80 bits */\n@@ -499,14 +546,18 @@ _GLOBAL(flush_hash_pages)\n \tisync\n \n \t/* First find a PTE in the range that has _PAGE_HASHPTE set */\n+#ifndef CONFIG_PTE_64BIT\n \trlwimi\tr5,r4,22,20,29\n-1:\tlwz\tr0,0(r5)\n+#else\n+\trlwimi\tr5,r4,23,20,28\n+#endif\n+1:\tlwz\tr0,PTE_FLAGS_OFFSET(r5)\n \tcmpwi\tcr1,r6,1\n \tandi.\tr0,r0,_PAGE_HASHPTE\n \tbne\t2f\n \tble\tcr1,19f\n \taddi\tr4,r4,0x1000\n-\taddi\tr5,r5,4\n+\taddi\tr5,r5,PTE_SIZE\n \taddi\tr6,r6,-1\n \tb\t1b\n \n@@ -545,7 +596,10 @@ _GLOBAL(flush_hash_pages)\n \t * already clear, we're done (for this pte).  If not,\n \t * clear it (atomically) and proceed.  -- paulus.\n \t */\n-33:\tlwarx\tr8,0,r5\t\t\t/* fetch the pte */\n+#if (PTE_FLAGS_OFFSET != 0)\n+\taddi\tr5,r5,PTE_FLAGS_OFFSET\n+#endif\n+33:\tlwarx\tr8,0,r5\t\t\t/* fetch the pte flags word */\n \tandi.\tr0,r8,_PAGE_HASHPTE\n \tbeq\t8f\t\t\t/* done if HASHPTE is already clear */\n \trlwinm\tr8,r8,0,31,29\t\t/* clear HASHPTE bit */\n@@ -590,7 +644,7 @@ _GLOBAL(flush_hash_patch_B)\n \n 8:\tble\tcr1,9f\t\t\t/* if all ptes checked */\n 81:\taddi\tr6,r6,-1\n-\taddi\tr5,r5,4\t\t\t/* advance to next pte */\n+\taddi\tr5,r5,PTE_SIZE\n \taddi\tr4,r4,0x1000\n \tlwz\tr0,0(r5)\t\t/* check next pte */\n \tcmpwi\tcr1,r6,1\ndiff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c\nindex 2001abd..c31d6d2 100644\n--- a/arch/powerpc/mm/pgtable_32.c\n+++ b/arch/powerpc/mm/pgtable_32.c\n@@ -73,7 +73,7 @@ extern unsigned long p_mapped_by_tlbcam(unsigned long pa);\n #endif /* HAVE_TLBCAM */\n \n #ifdef CONFIG_PTE_64BIT\n-/* 44x uses an 8kB pgdir because it has 8-byte Linux PTEs. */\n+/* Some processors use an 8kB pgdir because they have 8-byte Linux PTEs. */\n #define PGDIR_ORDER\t1\n #else\n #define PGDIR_ORDER\t0\n@@ -288,7 +288,7 @@ int map_page(unsigned long va, phys_addr_t pa, int flags)\n }\n \n /*\n- * Map in all of physical memory starting at KERNELBASE.\n+ * Map in a big chunk of physical memory starting at KERNELBASE.\n  */\n void __init mapin_ram(void)\n {\ndiff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype\nindex 7f65127..439c5ba 100644\n--- a/arch/powerpc/platforms/Kconfig.cputype\n+++ b/arch/powerpc/platforms/Kconfig.cputype\n@@ -50,6 +50,7 @@ config 44x\n \tselect PPC_UDBG_16550\n \tselect 4xx_SOC\n \tselect PPC_PCI_CHOICE\n+\tselect PHYS_64BIT\n \n config E200\n \tbool \"Freescale e200\"\n@@ -128,18 +129,20 @@ config FSL_EMB_PERFMON\n \n config PTE_64BIT\n \tbool\n-\tdepends on 44x || E500\n-\tdefault y if 44x\n-\tdefault y if E500 && PHYS_64BIT\n+\tdepends on 44x || E500 || PPC_86xx\n+\tdefault y if PHYS_64BIT\n \n config PHYS_64BIT\n-\tbool 'Large physical address support' if E500\n-\tdepends on 44x || E500\n+\tbool 'Large physical address support' if E500 || PPC_86xx\n+\tdepends on (44x || E500 || PPC_86xx) && !PPC_83xx && !PPC_82xx\n \tselect RESOURCES_64BIT\n-\tdefault y if 44x\n \t---help---\n \t  This option enables kernel support for larger than 32-bit physical\n-\t  addresses.  This features is not be available on all e500 cores.\n+\t  addresses.  This feature may not be available on all cores.\n+\n+\t  If you have more than 3.5GB of RAM or so, you also need to enable\n+\t  SWIOTLB under Kernel Options for this to work.  The actual number\n+\t  is platform-dependent.\n \n \t  If in doubt, say N here.\n \n",
    "prefixes": [
        "v5"
    ]
}
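The project's `list_archive_url_format` field is a template whose `{}` placeholder takes the bare message ID. As a small sketch, a client could derive the patch's `list_archive_url` from its `msgid` like this, using the values from the response above:

```python
# Template and message ID exactly as returned in the response above.
fmt = "https://lore.kernel.org/linuxppc-dev/{}/"
msgid = "<1222201125-15690-1-git-send-email-becky.bruce@freescale.com>"

# The archive link uses the message ID without its angle brackets.
archive_url = fmt.format(msgid.strip("<>"))
print(archive_url)
# -> https://lore.kernel.org/linuxppc-dev/1222201125-15690-1-git-send-email-becky.bruce@freescale.com/
```

The result matches the `list_archive_url` field the API already computed, which is handy when only the project metadata and a message ID are at hand.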