[RFC,2/2] powerpc/mm: Replace _PAGE_USER with _PAGE_PRIV

Message ID 1456456850-2679-2-git-send-email-aneesh.kumar@linux.vnet.ibm.com (mailing list archive)
State Superseded

Commit Message

Aneesh Kumar K.V Feb. 26, 2016, 3:20 a.m. UTC
_PAGE_PRIV means the page can be accessed only by the kernel. This is done
to keep the pte bits similar to the PowerISA 3.0 radix PTE format. User
pages are now marked by clearing the _PAGE_PRIV bit.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/book3s/64/hash.h | 23 ++++++++++-------------
 arch/powerpc/mm/hash64_4k.c               |  5 +++++
 arch/powerpc/mm/hash64_64k.c              | 10 ++++++++++
 arch/powerpc/mm/hash_utils_64.c           | 15 +++++++++++----
 arch/powerpc/mm/hugepage-hash64.c         |  5 +++++
 arch/powerpc/mm/hugetlbpage-hash64.c      |  5 +++++
 arch/powerpc/mm/hugetlbpage.c             |  2 +-
 arch/powerpc/mm/pgtable-hash64.c          | 12 ++++++++----
 arch/powerpc/mm/pgtable_32.c              |  2 +-
 arch/powerpc/mm/pgtable_64.c              |  2 +-
 arch/powerpc/perf/callchain.c             |  2 +-
 11 files changed, 58 insertions(+), 25 deletions(-)

Comments

Benjamin Herrenschmidt Feb. 29, 2016, 2:36 a.m. UTC | #1
On Fri, 2016-02-26 at 08:50 +0530, Aneesh Kumar K.V wrote:
> _PAGE_PRIV means the page can be accessed only by the kernel. This is
> done to keep the pte bits similar to the PowerISA 3.0 radix PTE format.
> User pages are now marked by clearing the _PAGE_PRIV bit.

Very minor nit: It's not clear that PRIV means PRIVILEGED (rather than
PRIVATE). Can you spell it out completely instead? I like things being
self-describing ;-)

Cheers,
Ben.
Aneesh Kumar K.V Feb. 29, 2016, 7:49 a.m. UTC | #2
Benjamin Herrenschmidt <benh@kernel.crashing.org> writes:

> On Fri, 2016-02-26 at 08:50 +0530, Aneesh Kumar K.V wrote:
>> _PAGE_PRIV means the page can be accessed only by the kernel. This is
>> done to keep the pte bits similar to the PowerISA 3.0 radix PTE format.
>> User pages are now marked by clearing the _PAGE_PRIV bit.
>
> Very minor nit: It's not clear that PRIV means PRIVILEGED (rather than
> PRIVATE). Can you spell it out completely instead? I like things being
> self-describing ;-)
>

Ok, I will update this

Thanks,
-aneesh
Paul Mackerras March 2, 2016, 1:07 a.m. UTC | #3
On Fri, Feb 26, 2016 at 08:50:50AM +0530, Aneesh Kumar K.V wrote:
> _PAGE_PRIV means the page can be accessed only by the kernel. This is
> done to keep the pte bits similar to the PowerISA 3.0 radix PTE format.
> User pages are now marked by clearing the _PAGE_PRIV bit.

Mostly looks good, but some comments below...

> @@ -174,15 +174,12 @@
>   * Note due to the way vm flags are laid out, the bits are XWR
>   */
>  #define PAGE_NONE	__pgprot(_PAGE_BASE)
> -#define PAGE_SHARED	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW)
> -#define PAGE_SHARED_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW | \
> -				 _PAGE_EXEC)
> -#define PAGE_COPY	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_READ )
> -#define PAGE_COPY_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_READ| \
> -				 _PAGE_EXEC)
> -#define PAGE_READONLY	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_READ)
> -#define PAGE_READONLY_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_READ| \
> -				 _PAGE_EXEC)
> +#define PAGE_SHARED	__pgprot(_PAGE_BASE | _PAGE_RW)
> +#define PAGE_SHARED_X	__pgprot(_PAGE_BASE | _PAGE_RW | _PAGE_EXEC)

These need _PAGE_READ as far as I can see.
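
For illustration, the fixed defines would presumably look something like
this (a sketch only, keeping _PAGE_READ alongside _PAGE_RW just as the
PAGE_COPY and PAGE_READONLY variants already do):

	#define PAGE_SHARED	__pgprot(_PAGE_BASE | _PAGE_READ | _PAGE_RW)
	#define PAGE_SHARED_X	__pgprot(_PAGE_BASE | _PAGE_READ | _PAGE_RW | \
				 _PAGE_EXEC)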

> @@ -40,6 +40,11 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
>  		if (unlikely(access & ~old_pte))
>  			return 1;

This check is going to do a different thing now as far as
_PAGE_USER/_PAGE_PRIV is concerned: previously it would prevent a
non-privileged access to a privileged page from creating a HPTE, now
it prevents a privileged access to a non-privileged page from creating
a HPTE.  A privileged access means an access by the kernel to a high
address, and arguably we would never have a non-privileged PTE at a
high (i.e. kernel) address, but it's still a semantic change that
should have been flagged in the patch description.

In fact it wouldn't really matter if we didn't check the privilege
here and went ahead and created the HPTE - if it's a non-privileged
access to a privileged page, the HPTE will get its permission bits set
so as to prevent non-privileged access anyway.
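
To make the flip concrete, here is a minimal sketch of the mask check in
both schemes (illustrative only; 0x00008 is the bit that _PAGE_USER used
to occupy and _PAGE_PRIV now occupies, the kernel context is elided):

	/* fault if 'access' asks for a bit the pte doesn't have */
	static int check_faults(unsigned long access, unsigned long old_pte)
	{
		return (access & ~old_pte) != 0;
	}

	/* old: a user access carries _PAGE_USER, so
	 *   check_faults(_PAGE_USER, pte_without_USER) == 1
	 *   -> a non-privileged access to a privileged page faults.
	 * new: a kernel access to a kernel address carries _PAGE_PRIV, so
	 *   check_faults(_PAGE_PRIV, pte_without_PRIV) == 1
	 *   -> a privileged access to a non-privileged page faults instead. */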

>  		/*
> +		 * access from user, but pte in _PAGE_PRIV
> +		 */
> +		if (unlikely((access & _PAGE_PRIV) != (old_pte & _PAGE_PRIV)))
> +			return 1;

And this catches both the privileged access to non-privileged page
case (which the previous statement already caught) and the
non-privileged access to privileged page case.
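
Spelled out as a truth table (illustrative):

	access & _PAGE_PRIV   old_pte & _PAGE_PRIV   result
	0                     0                      ok    (user access, user pte)
	0                     1                      fault (user access, privileged pte)
	1                     0                      fault (kernel access, non-privileged pte)
	1                     1                      ok    (kernel access, privileged pte)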

By the way, "pte in _PAGE_PRIV" is not good English.  You could say
"_PAGE_PRIV is set in pte" or "pte has _PAGE_PRIV set".

> +	/*
> +	 * access from user, but pte in _PAGE_PRIV
> +	 */
> +	if ((access & _PAGE_PRIV) != (pte_val(*ptep) & _PAGE_PRIV)) {
> +		DBG_LOW(" no access !\n");
> +		rc = 1;
> +		goto bail;
> +	}

Why do we need this added here?  Haven't you added the same check in
all the lower-level functions that this calls?

> @@ -158,7 +158,7 @@ ioremap_prot(phys_addr_t addr, unsigned long size, unsigned long flags)
>  		flags |= _PAGE_DIRTY | _PAGE_HWWRITE;
>  
>  	/* we don't want to let _PAGE_USER and _PAGE_EXEC leak out */
> -	flags &= ~(_PAGE_USER | _PAGE_EXEC);
> +	flags &= ~(_PAGE_PRIV | _PAGE_EXEC);

Need to fix the comment too.
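
Presumably something like (sketch):

	/* we don't want to let _PAGE_PRIV and _PAGE_EXEC leak out */
	flags &= ~(_PAGE_PRIV | _PAGE_EXEC);

so that the comment matches the new mask.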

> @@ -198,7 +198,7 @@ void __iomem * ioremap_prot(phys_addr_t addr, unsigned long size,
>  		flags |= _PAGE_DIRTY;
>  
>  	/* we don't want to let _PAGE_USER and _PAGE_EXEC leak out */
> -	flags &= ~(_PAGE_USER | _PAGE_EXEC);
> +	flags &= ~(_PAGE_PRIV | _PAGE_EXEC);

ditto

Paul.
Aneesh Kumar K.V March 5, 2016, 8:29 a.m. UTC | #4
Paul Mackerras <paulus@ozlabs.org> writes:

> On Fri, Feb 26, 2016 at 08:50:50AM +0530, Aneesh Kumar K.V wrote:
>> _PAGE_PRIV means the page can be accessed only by the kernel. This is
>> done to keep the pte bits similar to the PowerISA 3.0 radix PTE format.
>> User pages are now marked by clearing the _PAGE_PRIV bit.
>
>

.....

>> @@ -40,6 +40,11 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
>>  		if (unlikely(access & ~old_pte))
>>  			return 1;
>
> This check is going to do a different thing now as far as
> _PAGE_USER/_PAGE_PRIV is concerned: previously it would prevent a
> non-privileged access to a privileged page from creating a HPTE, now
> it prevents a privileged access to a non-privileged page from creating
> a HPTE.  A privileged access means an access by the kernel to a high
> address, and arguably we would never have a non-privileged PTE at a
> high (i.e. kernel) address, but it's still a semantic change that
> should have been flagged in the patch description.


We don't set _PAGE_PRIVILEGED when we have a privileged access to a
non-privileged page. We set it as below (with updated comments):

	/*
	 * We set _PAGE_PRIVILEGED only when
	 * kernel mode accesses kernel space.
	 *
	 * _PAGE_PRIVILEGED is NOT set:
	 * 1) when kernel mode accesses user space
	 * 2) when user space accesses kernel space.
	 */
	if (!(msr & MSR_PR) && !(REGION_ID(ea) == USER_REGION_ID))
		access |= _PAGE_PRIVILEGED;
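
For the four combinations that conditional sees (illustrative):

	MSR_PR   region    _PAGE_PRIVILEGED set in access?
	clear    kernel    yes (kernel mode touching kernel space)
	clear    user      no  (kernel mode touching user space)
	set      user      no  (ordinary user access)
	set      kernel    no  (user mode touching kernel space)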



>
> In fact it wouldn't really matter if we didn't check the privilege
> here and went ahead and created the HPTE - if it's a non-privileged
> access to a privileged page, the HPTE will get its permission bits set
> so as to prevent non-privileged access anyway.
>
>>  		/*
>> +		 * access from user, but pte in _PAGE_PRIV
>> +		 */
>> +		if (unlikely((access & _PAGE_PRIV) != (old_pte & _PAGE_PRIV)))
>> +			return 1;
>
> And this catches both the privileged access to non-privileged page
> case (which the previous statement already caught) and the
> non-privileged access to privileged page case.

The actual _PAGE_PRIVILEGED check is done by the conditional above.


-aneesh
Paul Mackerras March 5, 2016, 10:41 a.m. UTC | #5
On Sat, Mar 05, 2016 at 01:59:02PM +0530, Aneesh Kumar K.V wrote:
> Paul Mackerras <paulus@ozlabs.org> writes:
> 
> > On Fri, Feb 26, 2016 at 08:50:50AM +0530, Aneesh Kumar K.V wrote:
> >> _PAGE_PRIV means the page can be accessed only by the kernel. This is
> >> done to keep the pte bits similar to the PowerISA 3.0 radix PTE format.
> >> User pages are now marked by clearing the _PAGE_PRIV bit.
> >
> >
> 
> .....
> 
> >> @@ -40,6 +40,11 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
> >>  		if (unlikely(access & ~old_pte))
> >>  			return 1;
> >
> > This check is going to do a different thing now as far as
> > _PAGE_USER/_PAGE_PRIV is concerned: previously it would prevent a
> > non-privileged access to a privileged page from creating a HPTE, now
> > it prevents a privileged access to a non-privileged page from creating
> > a HPTE.  A privileged access means an access by the kernel to a high
> > address, and arguably we would never have a non-privileged PTE at a
> > high (i.e. kernel) address, but it's still a semantic change that
> > should have been flagged in the patch description.
> 
> 
> We don't set _PAGE_PRIVILEGED when we have a privileged access to a
> non-privileged page. We set it as below (with updated comments):
> 
> 	/*
> 	 * We set _PAGE_PRIVILEGED only when
> 	 * kernel mode accesses kernel space.
> 	 *
> 	 * _PAGE_PRIVILEGED is NOT set:
> 	 * 1) when kernel mode accesses user space
> 	 * 2) when user space accesses kernel space.
> 	 */
> 	if (!(msr & MSR_PR) && !(REGION_ID(ea) == USER_REGION_ID))
> 		access |= _PAGE_PRIVILEGED;

You're confusing a page in the user part of the address space with a
non-privileged page.  Now, we would certainly expect that all pages in
the user part of the address space would be non-privileged pages and
all pages in the kernel part would be privileged pages, but nothing
actually enforces that.  The semantic change is that if we did somehow
happen to have a non-privileged page (one without _PAGE_PRIVILEGED
set) in the kernel part of the address space, we can no longer access
it from the kernel.  Now you can argue that we never have
non-privileged pages in the kernel part of the address space, and so
the change doesn't matter.  That is probably a good argument, but you
do need to mention the change and make that argument in your patch
description.
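
Concretely (a hypothetical case, since nothing in the tree creates such a
pte today): given a pte at a kernel address with _PAGE_PRIVILEGED clear,

	access = _PAGE_PRIVILEGED;	/* kernel access, kernel region */
	access & ~pte_val(*ptep)	/* nonzero -> hash_page_mm bails */

whereas before the patch a kernel access demanded no _PAGE_USER bit and
would have gone ahead.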

Paul.

Patch

diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
index 244f2c322c43..2a38883c9187 100644
--- a/arch/powerpc/include/asm/book3s/64/hash.h
+++ b/arch/powerpc/include/asm/book3s/64/hash.h
@@ -22,7 +22,7 @@ 
 #define _PAGE_RW		0x00002 /* read & write access allowed */
 #define _PAGE_READ		0x00004	/* read access allowed */
 #define _PAGE_RWX		(_PAGE_READ | _PAGE_RW | _PAGE_EXEC)
-#define _PAGE_USER		0x00008 /* page may be accessed by userspace */
+#define _PAGE_PRIV		0x00008 /* page can only be accessed by kernel*/
 #define _PAGE_GUARDED		0x00010 /* G: guarded (side-effect) page */
 /* M (memory coherence) is always set in the HPTE, so we don't need it here */
 #define _PAGE_COHERENT		0x0
@@ -117,9 +117,9 @@ 
 #endif /* CONFIG_PPC_MM_SLICES */
 
 /* No separate kernel read-only */
-#define _PAGE_KERNEL_RW		(_PAGE_RW | _PAGE_DIRTY) /* user access blocked by key */
+#define _PAGE_KERNEL_RW		(_PAGE_PRIV | _PAGE_RW | _PAGE_DIRTY) /* user access blocked by key */
 #define _PAGE_KERNEL_RO		 _PAGE_KERNEL_RW
-#define _PAGE_KERNEL_RWX	(_PAGE_DIRTY | _PAGE_RW | _PAGE_EXEC)
+#define _PAGE_KERNEL_RWX	(_PAGE_PRIV | _PAGE_DIRTY | _PAGE_RW | _PAGE_EXEC)
 
 /* Strong Access Ordering */
 #define _PAGE_SAO		(_PAGE_WRITETHRU | _PAGE_NO_CACHE | _PAGE_COHERENT)
@@ -151,7 +151,7 @@ 
  */
 #define PAGE_PROT_BITS	(_PAGE_GUARDED | _PAGE_COHERENT | _PAGE_NO_CACHE | \
 			 _PAGE_WRITETHRU | _PAGE_4K_PFN | \
-			 _PAGE_USER | _PAGE_ACCESSED |  \
+			 _PAGE_PRIV| _PAGE_ACCESSED |  \
 			 _PAGE_RW |  _PAGE_DIRTY | _PAGE_EXEC | \
 			 _PAGE_SOFT_DIRTY)
 /*
@@ -174,15 +174,12 @@ 
  * Note due to the way vm flags are laid out, the bits are XWR
  */
 #define PAGE_NONE	__pgprot(_PAGE_BASE)
-#define PAGE_SHARED	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW)
-#define PAGE_SHARED_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW | \
-				 _PAGE_EXEC)
-#define PAGE_COPY	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_READ )
-#define PAGE_COPY_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_READ| \
-				 _PAGE_EXEC)
-#define PAGE_READONLY	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_READ)
-#define PAGE_READONLY_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_READ| \
-				 _PAGE_EXEC)
+#define PAGE_SHARED	__pgprot(_PAGE_BASE | _PAGE_RW)
+#define PAGE_SHARED_X	__pgprot(_PAGE_BASE | _PAGE_RW | _PAGE_EXEC)
+#define PAGE_COPY	__pgprot(_PAGE_BASE | _PAGE_READ )
+#define PAGE_COPY_X	__pgprot(_PAGE_BASE | _PAGE_READ| _PAGE_EXEC)
+#define PAGE_READONLY	__pgprot(_PAGE_BASE | _PAGE_READ)
+#define PAGE_READONLY_X	__pgprot(_PAGE_BASE | _PAGE_READ| _PAGE_EXEC)
 
 #define __P000	PAGE_NONE
 #define __P001	PAGE_READONLY
diff --git a/arch/powerpc/mm/hash64_4k.c b/arch/powerpc/mm/hash64_4k.c
index 1a862eb6fef1..8c83be3f67ef 100644
--- a/arch/powerpc/mm/hash64_4k.c
+++ b/arch/powerpc/mm/hash64_4k.c
@@ -40,6 +40,11 @@  int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
 		if (unlikely(access & ~old_pte))
 			return 1;
 		/*
+		 * access from user, but pte in _PAGE_PRIV
+		 */
+		if (unlikely((access & _PAGE_PRIV) != (old_pte & _PAGE_PRIV)))
+			return 1;
+		/*
 		 * Try to lock the PTE, add ACCESSED and DIRTY if it was
 		 * a write access. Since this is 4K insert of 64K page size
 		 * also add _PAGE_COMBO
diff --git a/arch/powerpc/mm/hash64_64k.c b/arch/powerpc/mm/hash64_64k.c
index 976eb5f6e492..3465b5d44223 100644
--- a/arch/powerpc/mm/hash64_64k.c
+++ b/arch/powerpc/mm/hash64_64k.c
@@ -72,6 +72,11 @@  int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
 		if (unlikely(access & ~old_pte))
 			return 1;
 		/*
+		 * access from user, but pte in _PAGE_PRIV
+		 */
+		if (unlikely((access & _PAGE_PRIV) != (old_pte & _PAGE_PRIV)))
+			return 1;
+		/*
 		 * Try to lock the PTE, add ACCESSED and DIRTY if it was
 		 * a write access. Since this is 4K insert of 64K page size
 		 * also add _PAGE_COMBO
@@ -242,6 +247,11 @@  int __hash_page_64K(unsigned long ea, unsigned long access,
 		if (unlikely(access & ~old_pte))
 			return 1;
 		/*
+		 * access from user, but pte in _PAGE_PRIV
+		 */
+		if (unlikely((access & _PAGE_PRIV) != (old_pte & _PAGE_PRIV)))
+			return 1;
+		/*
 		 * Check if PTE has the cache-inhibit bit set
 		 * If so, bail out and refault as a 4k page
 		 */
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index 79b81cd0d254..ec4f76891a8a 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -175,7 +175,7 @@  unsigned long htab_convert_pte_flags(unsigned long pteflags)
 	 * or PP=0x3 for read-only (including writeable but clean pages).
 	 * We also map user prot none as with PP=00.
 	 */
-	if (pteflags & _PAGE_USER) {
+	if (!(pteflags & _PAGE_PRIV)) {
 		if ((pteflags & _PAGE_READ) || (pteflags & _PAGE_RW))
 			rflags |= 0x2;
 		if (!((pteflags & _PAGE_RW) && (pteflags & _PAGE_DIRTY)))
@@ -1109,7 +1109,14 @@  int hash_page_mm(struct mm_struct *mm, unsigned long ea,
 		rc = 1;
 		goto bail;
 	}
-
+	/*
+	 * access from user, but pte in _PAGE_PRIV
+	 */
+	if ((access & _PAGE_PRIV) != (pte_val(*ptep) & _PAGE_PRIV)) {
+		DBG_LOW(" no access !\n");
+		rc = 1;
+		goto bail;
+	}
 	if (hugeshift) {
 		if (is_thp)
 			rc = __hash_page_thp(ea, access, vsid, (pmd_t *)ptep,
@@ -1246,8 +1253,8 @@  int __hash_page(unsigned long ea, unsigned long msr, unsigned long trap,
 	 * accessing a userspace segment (even from the kernel). We assume
 	 * kernel addresses always have the high bit set.
 	 */
-	if ((msr & MSR_PR) || (REGION_ID(ea) == USER_REGION_ID))
-		access |= _PAGE_USER;
+	if (!(msr & MSR_PR) && !(REGION_ID(ea) == USER_REGION_ID))
+		access |= _PAGE_PRIV;
 
 	if (trap == 0x400)
 		access |= _PAGE_EXEC;
diff --git a/arch/powerpc/mm/hugepage-hash64.c b/arch/powerpc/mm/hugepage-hash64.c
index d9675f1d606e..f71600e98ff6 100644
--- a/arch/powerpc/mm/hugepage-hash64.c
+++ b/arch/powerpc/mm/hugepage-hash64.c
@@ -43,6 +43,11 @@  int __hash_page_thp(unsigned long ea, unsigned long access, unsigned long vsid,
 		if (unlikely(access & ~old_pmd))
 			return 1;
 		/*
+		 * access from user, but pmd in _PAGE_PRIV
+		 */
+		if (unlikely((access & _PAGE_PRIV) != (old_pmd & _PAGE_PRIV)))
+			return 1;
+		/*
 		 * Try to lock the PTE, add ACCESSED and DIRTY if it was
 		 * a write access
 		 */
diff --git a/arch/powerpc/mm/hugetlbpage-hash64.c b/arch/powerpc/mm/hugetlbpage-hash64.c
index 0a1f87ffde8c..45ab33705f03 100644
--- a/arch/powerpc/mm/hugetlbpage-hash64.c
+++ b/arch/powerpc/mm/hugetlbpage-hash64.c
@@ -63,6 +63,11 @@  int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
 		/* If PTE permissions don't match, take page fault */
 		if (unlikely(access & ~old_pte))
 			return 1;
+		/*
+		 * access from user, but pte in _PAGE_PRIV
+		 */
+		if (unlikely((access & _PAGE_PRIV) != (old_pte & _PAGE_PRIV)))
+			return 1;
 		/* Try to lock the PTE, add ACCESSED and DIRTY if it was
 		 * a write access */
 		new_pte = old_pte | _PAGE_BUSY | _PAGE_ACCESSED;
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 17ca4827dd87..a63580c11518 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -604,7 +604,7 @@  int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 		end = pte_end;
 
 	pte = READ_ONCE(*ptep);
-	mask = _PAGE_PRESENT | _PAGE_USER | _PAGE_READ;
+	mask = _PAGE_PRESENT | _PAGE_READ;
 	if (write)
 		mask |= _PAGE_RW;
 
diff --git a/arch/powerpc/mm/pgtable-hash64.c b/arch/powerpc/mm/pgtable-hash64.c
index 341bbf7d8891..a453830c8710 100644
--- a/arch/powerpc/mm/pgtable-hash64.c
+++ b/arch/powerpc/mm/pgtable-hash64.c
@@ -180,9 +180,13 @@  int map_kernel_page(unsigned long ea, unsigned long pa, unsigned long flags)
  */
 static inline int pte_looks_normal(pte_t pte)
 {
-	return (pte_val(pte) &
-	    (_PAGE_PRESENT | _PAGE_SPECIAL | _PAGE_NO_CACHE | _PAGE_USER)) ==
-	    (_PAGE_PRESENT | _PAGE_USER);
+	if ((pte_val(pte) &
+	     (_PAGE_PRESENT | _PAGE_SPECIAL | _PAGE_NO_CACHE )) ==
+	    _PAGE_PRESENT) {
+		if (!(pte_val(pte) & _PAGE_PRIV))
+			return 1;
+	}
+	return 0;
 }
 
 static struct page *maybe_pte_to_page(pte_t pte)
@@ -420,7 +424,7 @@  void pmdp_huge_split_prepare(struct vm_area_struct *vma,
 	 * the translation is still valid, because we will withdraw
 	 * pgtable_t after this.
 	 */
-	pmd_hugepage_update(vma->vm_mm, address, pmdp, _PAGE_USER, 0);
+	pmd_hugepage_update(vma->vm_mm, address, pmdp, 0, _PAGE_PRIV);
 }
 
 
diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index 7692d1bb1bc6..16ea4c497590 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -158,7 +158,7 @@  ioremap_prot(phys_addr_t addr, unsigned long size, unsigned long flags)
 		flags |= _PAGE_DIRTY | _PAGE_HWWRITE;
 
 	/* we don't want to let _PAGE_USER and _PAGE_EXEC leak out */
-	flags &= ~(_PAGE_USER | _PAGE_EXEC);
+	flags &= ~(_PAGE_PRIV | _PAGE_EXEC);
 
 #ifdef _PAGE_BAP_SR
 	/* _PAGE_USER contains _PAGE_BAP_SR on BookE using the new PTE format
diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
index 4d93ba050775..1ff49d2c978a 100644
--- a/arch/powerpc/mm/pgtable_64.c
+++ b/arch/powerpc/mm/pgtable_64.c
@@ -198,7 +198,7 @@  void __iomem * ioremap_prot(phys_addr_t addr, unsigned long size,
 		flags |= _PAGE_DIRTY;
 
 	/* we don't want to let _PAGE_USER and _PAGE_EXEC leak out */
-	flags &= ~(_PAGE_USER | _PAGE_EXEC);
+	flags &= ~(_PAGE_PRIV | _PAGE_EXEC);
 
 #ifdef _PAGE_BAP_SR
 	/* _PAGE_USER contains _PAGE_BAP_SR on BookE using the new PTE format
diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
index e04a6752b399..fe9ef6217df5 100644
--- a/arch/powerpc/perf/callchain.c
+++ b/arch/powerpc/perf/callchain.c
@@ -137,7 +137,7 @@  static int read_user_stack_slow(void __user *ptr, void *buf, int nb)
 	offset = addr & ((1UL << shift) - 1);
 
 	pte = READ_ONCE(*ptep);
-	if (!pte_present(pte) || !(pte_val(pte) & _PAGE_USER))
+	if (!pte_present(pte) || (pte_val(pte) & _PAGE_PRIV))
 		goto err_out;
 	pfn = pte_pfn(pte);
 	if (!page_is_ram(pfn))