[RFC,1/4] powerpc/64s/radix: Fix memory hotplug section page table creation
diff mbox series

Message ID 20190722174700.11483-2-npiggin@gmail.com
State Superseded
Series
  • assorted virtual / physical address fixes

Checks

Context Check Description
snowpatch_ozlabs/checkpatch success total: 0 errors, 0 warnings, 0 checks, 8 lines checked
snowpatch_ozlabs/apply_patch success Successfully applied on branch next (f3365d1a959d5c6527efe3d38276acc9b58e3f3f)

Commit Message

Nicholas Piggin July 22, 2019, 5:46 p.m. UTC
create_physical_mapping expects physical addresses, but creating and
splitting these mappings after boot is supplying virtual (effective)
addresses. This can be hit by booting with limited memory then probing
new physical memory sections.

Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
Fixes: 6cc27341b21a8 ("powerpc/mm: add radix__create_section_mapping()")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/mm/book3s64/radix_pgtable.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Michael Ellerman July 23, 2019, 10:52 a.m. UTC | #1
Nicholas Piggin <npiggin@gmail.com> writes:
> create_physical_mapping expects physical addresses, but creating and
> splitting these mappings after boot is supplying virtual (effective)
> addresses. This can be hit by booting with limited memory then probing
> new physical memory sections.
>
> Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
> Fixes: 6cc27341b21a8 ("powerpc/mm: add radix__create_section_mapping()")
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>

This is not catastrophic because create_physical_mapping() just uses
start/end to construct virtual addresses anyway, and __va(__va(x)) == __va(x) ?

Although we do pass those through as region_start/end which then go to
memblock_alloc_try_nid(). But I guess that doesn't happen after boot,
which is the case you're talking about.

So I think it looks good, change log could use a bit more detail though :)

cheers

> diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
> index b4ca9e95e678..c5cc16ab1954 100644
> --- a/arch/powerpc/mm/book3s64/radix_pgtable.c
> +++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
> @@ -902,7 +902,7 @@ int __meminit radix__create_section_mapping(unsigned long start, unsigned long e
>  		return -1;
>  	}
>  
> -	return create_physical_mapping(start, end, nid);
> +	return create_physical_mapping(__pa(start), __pa(end), nid);
>  }
>  
>  int __meminit radix__remove_section_mapping(unsigned long start, unsigned long end)
> -- 
> 2.20.1
Nicholas Piggin July 24, 2019, 1:18 a.m. UTC | #2
Michael Ellerman wrote on July 23, 2019 8:52 pm:
> Nicholas Piggin <npiggin@gmail.com> writes:
>> create_physical_mapping expects physical addresses, but creating and
>> splitting these mappings after boot is supplying virtual (effective)
>> addresses. This can be hit by booting with limited memory then probing
>> new physical memory sections.
>>
>> Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
>> Fixes: 6cc27341b21a8 ("powerpc/mm: add radix__create_section_mapping()")
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> 
> This is not catastrophic because create_physical_mapping() just uses
> start/end to construct virtual addresses anyway, and __va(__va(x)) == __va(x) ?

It's a bit more subtle: it also calls __map_kernel_page() with the pa.
pfn_pte() ends up masking the top 0xc bits out with PTE_RPN_MASK,
which is what saves us.

Hmm, so we really should also have a VM_BUG_ON in pfn_pte if it's
given a pfn with the top PAGE_SHIFT bit or PTE_RPN_MASK bits set.
I'll add that as a patch 5.
 
> Although we do pass those through as region_start/end which then go to
> memblock_alloc_try_nid(). But I guess that doesn't happen after boot,
> which is the case you're talking about.
> 
> So I think it looks good, change log could use a bit more detail though :)

Thanks for taking a look. I'll resend after a bit more testing and
some changelog improvement.

Thanks,
Nick

Patch
diff mbox series

diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index b4ca9e95e678..c5cc16ab1954 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -902,7 +902,7 @@ int __meminit radix__create_section_mapping(unsigned long start, unsigned long e
 		return -1;
 	}
 
-	return create_physical_mapping(start, end, nid);
+	return create_physical_mapping(__pa(start), __pa(end), nid);
 }
 
 int __meminit radix__remove_section_mapping(unsigned long start, unsigned long end)