
powerpc/spufs: Fix hash faults for kernel regions

Message ID 1495608599-16819-1-git-send-email-jk@ozlabs.org (mailing list archive)
State Accepted
Commit d75e4919cc0b6fbcbc8d6654ef66d87a9dbf1526
Headers show

Commit Message

Jeremy Kerr May 24, 2017, 6:49 a.m. UTC
Change ac29c64089b7 swapped _PAGE_USER for _PAGE_PRIVILEGED, and
introduced check_pte_access() which denied kernel access to
non-_PAGE_PRIVILEGED pages.

However, it didn't add _PAGE_PRIVILEGED to the hash fault handler for
spufs' kernel accesses, so the DMAs required to establish SPE memory
no longer work.

This change adds _PAGE_PRIVILEGED to the hash fault handler for
kernel accesses.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Reported-by: Sombat Tragolgosol <sombat3960@gmail.com>
CC: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/platforms/cell/spu_base.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

Comments

Aneesh Kumar K.V May 24, 2017, 8:58 a.m. UTC | #1
Jeremy Kerr <jk@ozlabs.org> writes:

> Change ac29c64089b7 swapped _PAGE_USER for _PAGE_PRIVILEGED, and
> introduced check_pte_access() which denied kernel access to
> non-_PAGE_PRIVILEGED pages.
>
> However, it didn't add _PAGE_PRIVILEGED to the hash fault handler for
> spufs' kernel accesses, so the DMAs required to establish SPE memory
> no longer work.
>
> This change adds _PAGE_PRIVILEGED to the hash fault handler for
> kernel accesses.
>
> Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
> Reported-by: Sombat Tragolgosol <sombat3960@gmail.com>
> CC: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>

> ---
>  arch/powerpc/platforms/cell/spu_base.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/platforms/cell/spu_base.c b/arch/powerpc/platforms/cell/spu_base.c
> index 96c2b8a..0c45cdb 100644
> --- a/arch/powerpc/platforms/cell/spu_base.c
> +++ b/arch/powerpc/platforms/cell/spu_base.c
> @@ -197,7 +197,9 @@ static int __spu_trap_data_map(struct spu *spu, unsigned long ea, u64 dsisr)
>  	    (REGION_ID(ea) != USER_REGION_ID)) {
>
>  		spin_unlock(&spu->register_lock);
> -		ret = hash_page(ea, _PAGE_PRESENT | _PAGE_READ, 0x300, dsisr);
> +		ret = hash_page(ea,
> +				_PAGE_PRESENT | _PAGE_READ | _PAGE_PRIVILEGED,
> +				0x300, dsisr);
>  		spin_lock(&spu->register_lock);
>
>  		if (!ret) {
> -- 
> 2.7.4
Michael Ellerman May 25, 2017, 1:22 p.m. UTC | #2
On Wed, 2017-05-24 at 06:49:59 UTC, Jeremy Kerr wrote:
> Change ac29c64089b7 swapped _PAGE_USER for _PAGE_PRIVILEGED, and
> introduced check_pte_access() which denied kernel access to
> non-_PAGE_PRIVILEGED pages.
> 
> However, it didn't add _PAGE_PRIVILEGED to the hash fault handler for
> spufs' kernel accesses, so the DMAs required to establish SPE memory
> no longer work.
> 
> This change adds _PAGE_PRIVILEGED to the hash fault handler for
> kernel accesses.
> 
> Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
> Reported-by: Sombat Tragolgosol <sombat3960@gmail.com>
> CC: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>

Applied to powerpc fixes, thanks.

https://git.kernel.org/powerpc/c/d75e4919cc0b6fbcbc8d6654ef66d8

cheers

Patch

diff --git a/arch/powerpc/platforms/cell/spu_base.c b/arch/powerpc/platforms/cell/spu_base.c
index 96c2b8a..0c45cdb 100644
--- a/arch/powerpc/platforms/cell/spu_base.c
+++ b/arch/powerpc/platforms/cell/spu_base.c
@@ -197,7 +197,9 @@ static int __spu_trap_data_map(struct spu *spu, unsigned long ea, u64 dsisr)
 	    (REGION_ID(ea) != USER_REGION_ID)) {
 
 		spin_unlock(&spu->register_lock);
-		ret = hash_page(ea, _PAGE_PRESENT | _PAGE_READ, 0x300, dsisr);
+		ret = hash_page(ea,
+				_PAGE_PRESENT | _PAGE_READ | _PAGE_PRIVILEGED,
+				0x300, dsisr);
 		spin_lock(&spu->register_lock);
 
 		if (!ret) {