Message ID | 6499f8eeb2a36330e5c9fc1cee9a79374875bd54.1589866984.git.christophe.leroy@csgroup.eu (mailing list archive)
---|---
State | Accepted
Commit | 4b19f96a81bceaf0bcf44d79c0855c61158065ec
Series | Use hugepages to map kernel mem on 8xx
Christophe Leroy <christophe.leroy@csgroup.eu> writes:
> Mapping RO data as ROX is not an issue since that data
> cannot be modified to introduce an exploit.

Being pedantic: it is still an issue, in that it means there are more targets for a code-reuse attack. But given the entire kernel text is also available for code-reuse attacks, the RO data is unlikely to contain any useful sequences that aren't also in the kernel text.

> PPC64 accepts to have RO data mapped ROX, as a trade off
> between kernel size and strictness of protection.
>
> On PPC32, kernel size is even more critical as amount of
> memory is usually small.

Yep, I think it's a reasonable trade-off to make.

cheers

> Depending on the number of available IBATs, the last IBATs
> might overflow the end of text. Only warn if it crosses
> the end of RO data.
>
> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
> ---
>  arch/powerpc/mm/book3s32/mmu.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/mm/book3s32/mmu.c b/arch/powerpc/mm/book3s32/mmu.c
> index 39ba53ca5bb5..a9b2cbc74797 100644
> --- a/arch/powerpc/mm/book3s32/mmu.c
> +++ b/arch/powerpc/mm/book3s32/mmu.c
> @@ -187,6 +187,7 @@ void mmu_mark_initmem_nx(void)
>  	int i;
>  	unsigned long base = (unsigned long)_stext - PAGE_OFFSET;
>  	unsigned long top = (unsigned long)_etext - PAGE_OFFSET;
> +	unsigned long border = (unsigned long)__init_begin - PAGE_OFFSET;
>  	unsigned long size;
>
>  	if (IS_ENABLED(CONFIG_PPC_BOOK3S_601))
> @@ -201,9 +202,10 @@ void mmu_mark_initmem_nx(void)
>  	size = block_size(base, top);
>  	size = max(size, 128UL << 10);
>  	if ((top - base) > size) {
> -		if (strict_kernel_rwx_enabled())
> -			pr_warn("Kernel _etext not properly aligned\n");
>  		size <<= 1;
> +		if (strict_kernel_rwx_enabled() && base + size > border)
> +			pr_warn("Some RW data is getting mapped X. "
> +				"Adjust CONFIG_DATA_SHIFT to avoid that.\n");
>  	}
>  	setibat(i++, PAGE_OFFSET + base, base, size, PAGE_KERNEL_TEXT);
>  	base += size;
> --
> 2.25.0
diff --git a/arch/powerpc/mm/book3s32/mmu.c b/arch/powerpc/mm/book3s32/mmu.c
index 39ba53ca5bb5..a9b2cbc74797 100644
--- a/arch/powerpc/mm/book3s32/mmu.c
+++ b/arch/powerpc/mm/book3s32/mmu.c
@@ -187,6 +187,7 @@ void mmu_mark_initmem_nx(void)
 	int i;
 	unsigned long base = (unsigned long)_stext - PAGE_OFFSET;
 	unsigned long top = (unsigned long)_etext - PAGE_OFFSET;
+	unsigned long border = (unsigned long)__init_begin - PAGE_OFFSET;
 	unsigned long size;

 	if (IS_ENABLED(CONFIG_PPC_BOOK3S_601))
@@ -201,9 +202,10 @@ void mmu_mark_initmem_nx(void)
 	size = block_size(base, top);
 	size = max(size, 128UL << 10);
 	if ((top - base) > size) {
-		if (strict_kernel_rwx_enabled())
-			pr_warn("Kernel _etext not properly aligned\n");
 		size <<= 1;
+		if (strict_kernel_rwx_enabled() && base + size > border)
+			pr_warn("Some RW data is getting mapped X. "
+				"Adjust CONFIG_DATA_SHIFT to avoid that.\n");
 	}
 	setibat(i++, PAGE_OFFSET + base, base, size, PAGE_KERNEL_TEXT);
 	base += size;
Mapping RO data as ROX is not an issue since that data
cannot be modified to introduce an exploit.

PPC64 accepts to have RO data mapped ROX, as a trade off
between kernel size and strictness of protection.

On PPC32, kernel size is even more critical as amount of
memory is usually small.

Depending on the number of available IBATs, the last IBATs
might overflow the end of text. Only warn if it crosses
the end of RO data.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
 arch/powerpc/mm/book3s32/mmu.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)