
[SRU,F,1/1] powerpc/kasan: Fix addr error caused by page alignment

Message ID 20240422170742.19770-3-bethany.jamison@canonical.com
State: New
Series: [SRU,F,1/1] powerpc/kasan: Fix addr error caused by page alignment

Commit Message

Bethany Jamison April 22, 2024, 5:07 p.m. UTC
From: Jiangfeng Xiao <xiaojiangfeng@huawei.com>

In kasan_init_region, when k_start is not page aligned, then at the
beginning of the for loop, k_cur = k_start & PAGE_MASK is less than
k_start, so `va = block + k_cur - k_start` points below block. That
address va is invalid, because the memory from va up to block was not
allocated by memblock_alloc and therefore will not be reserved by
memblock_reserve later; it can be handed out to other users.

As a result, memory overwriting occurs.

For example:
int __init __weak kasan_init_region(void *start, size_t size)
{
[...]
	/* say block(dcd97000) k_start(feef7400) k_end(feeff3fe) */
	block = memblock_alloc(k_end - k_start, PAGE_SIZE);
	[...]
	for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
		/* at the beginning of the for loop
		 * block(dcd97000) va(dcd96c00) k_cur(feef7000) k_start(feef7400)
		 * va(dcd96c00) is less than block(dcd97000), va is invalid
		 */
		void *va = block + k_cur - k_start;
		[...]
	}
[...]
}

Therefore, page alignment is performed on k_start before
memblock_alloc() to ensure the validity of the VA address.

Fixes: 663c0c9496a6 ("powerpc/kasan: Fix shadow area set up for modules.")
Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/1705974359-43790-1-git-send-email-xiaojiangfeng@huawei.com
(backported from commit 4a7aee96200ad281a5cc4cf5c7a2e2a49d2b97b0)
[bjamison: context conflict - added k_start realignment to appropriate spot in code]
CVE-2024-26712
Signed-off-by: Bethany Jamison <bethany.jamison@canonical.com>
---
 arch/powerpc/mm/kasan/kasan_init_32.c | 1 +
 1 file changed, 1 insertion(+)
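
To make the alignment arithmetic concrete: with 4 KiB pages, PAGE_MASK
is ~0xfff, so k_start = 0xfeef7400 rounds down to 0xfeef7000. The small
user-space sketch below (not part of the patch; PAGE_SIZE and PAGE_MASK
are defined locally, since kernel headers are not usable here)
reproduces the example addresses from the commit message and shows how
aligning k_start before the allocation keeps va >= block:

#include <stdio.h>

#define PAGE_SIZE 0x1000UL
#define PAGE_MASK (~(PAGE_SIZE - 1))

int main(void)
{
	/* Example values from the commit message. */
	unsigned long block   = 0xdcd97000UL;
	unsigned long k_start = 0xfeef7400UL;

	/* First loop iteration: k_cur is k_start rounded down, so
	 * k_cur - k_start is negative and va lands below block. */
	unsigned long k_cur = k_start & PAGE_MASK;          /* 0xfeef7000 */
	unsigned long va    = block + k_cur - k_start;      /* 0xdcd96c00 */

	/* With k_start aligned up front, the offset is never negative. */
	unsigned long k_start_fixed = k_start & PAGE_MASK;  /* 0xfeef7000 */
	unsigned long va_fixed = block + k_cur - k_start_fixed; /* == block */

	printf("buggy va=%#lx, fixed va=%#lx\n", va, va_fixed);
	return 0;
}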

Comments

Jacob Martin April 23, 2024, 1:56 p.m. UTC | #1
On 4/22/24 12:07 PM, Bethany Jamison wrote:
> From: Jiangfeng Xiao <xiaojiangfeng@huawei.com>
> 
> [...]
> 
> diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/kasan_init_32.c
> index 3f78007a72822..8a294f94a7ca3 100644
> --- a/arch/powerpc/mm/kasan/kasan_init_32.c
> +++ b/arch/powerpc/mm/kasan/kasan_init_32.c
> @@ -91,6 +91,7 @@ static int __ref kasan_init_region(void *start, size_t size)
>   		return ret;
>   
>   	if (!slab_is_available())
> +		k_start = k_start & PAGE_MASK;
>   		block = memblock_alloc(k_end - k_start, PAGE_SIZE);

Curly braces are needed around the above `if` block, since a second
line is being added to it; as written, only the k_start assignment is
conditional and the memblock_alloc() call now runs unconditionally.

Jacob

>   
>   	for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
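
For reference, this is what the guarded block would look like with the
braces Jacob asks for (a sketch of the fixed-up backport, reconstructed
from the hunk above, not a tested replacement patch):

	if (!slab_is_available()) {
		k_start = k_start & PAGE_MASK;
		block = memblock_alloc(k_end - k_start, PAGE_SIZE);
	}

With the braces, the k_start alignment and the allocation stay together
on the !slab_is_available() path, matching upstream commit
4a7aee96200a, where the two statements execute in sequence.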

Patch

diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/kasan_init_32.c
index 3f78007a72822..8a294f94a7ca3 100644
--- a/arch/powerpc/mm/kasan/kasan_init_32.c
+++ b/arch/powerpc/mm/kasan/kasan_init_32.c
@@ -91,6 +91,7 @@ static int __ref kasan_init_region(void *start, size_t size)
 		return ret;
 
 	if (!slab_is_available())
+		k_start = k_start & PAGE_MASK;
 		block = memblock_alloc(k_end - k_start, PAGE_SIZE);
 
 	for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {