Message ID | 97d45054364142af48b8767f9f9e115504d7568b.1492778567.git.christophe.leroy@c-s.fr
---|---
State | Superseded
Excerpts from Christophe Leroy's message of April 21, 2017 18:32:
> This patch implements CONFIG_DEBUG_RODATA on PPC32.
>
> As for CONFIG_DEBUG_PAGEALLOC, it deactivates BAT and LTLB mappings
> in order to allow page protection setup at the level of each page.
>
> As BAT/LTLB mappings are deactivated, there might be a performance
> impact. For this reason, we keep it OFF by default.
>
> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>

[snip]

> diff --git a/arch/powerpc/kernel/ftrace.c b/arch/powerpc/kernel/ftrace.c
> index 32509de6ce4c..06d2ac53f471 100644
> --- a/arch/powerpc/kernel/ftrace.c
> +++ b/arch/powerpc/kernel/ftrace.c
> @@ -46,6 +46,7 @@ static int
>  ftrace_modify_code(unsigned long ip, unsigned int old, unsigned int new)
>  {
>  	unsigned int replaced;
> +	int err;
>
>  	/*
>  	 * Note:
> @@ -67,10 +68,11 @@ ftrace_modify_code(unsigned long ip, unsigned int old, unsigned int new)
>  	}
>
>  	/* replace the text with the new text */
> -	if (patch_instruction((unsigned int *)ip, new))
> -		return -EPERM;
> +	set_kernel_text_rw(ip);
> +	err = patch_instruction((unsigned int *)ip, new);
> +	set_kernel_text_ro(ip);

Is there a reason to not put those inside patch_instruction()?

- Naveen
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> writes:
> Excerpts from Christophe Leroy's message of April 21, 2017 18:32:
>> diff --git a/arch/powerpc/kernel/ftrace.c b/arch/powerpc/kernel/ftrace.c
>> index 32509de6ce4c..06d2ac53f471 100644
>> --- a/arch/powerpc/kernel/ftrace.c
>> +++ b/arch/powerpc/kernel/ftrace.c
>> @@ -46,6 +46,7 @@ static int
>> @@ -67,10 +68,11 @@ ftrace_modify_code(unsigned long ip, unsigned int old, unsigned int new)
>>  	}
>>
>>  	/* replace the text with the new text */
>> -	if (patch_instruction((unsigned int *)ip, new))
>> -		return -EPERM;
>> +	set_kernel_text_rw(ip);
>> +	err = patch_instruction((unsigned int *)ip, new);
>> +	set_kernel_text_ro(ip);
>
> Is there a reason to not put those inside patch_instruction()?

Yes and no.

patch_instruction() is called quite early from apply_feature_fixups(), I
haven't looked closely but I suspect the set_kernel_text_rx() routines
won't work that early.

But on the other hand patch_instruction() is used by things other than
ftrace, like jump labels, so we probably want the rw/ro setting in there
so that we don't have to go and fixup jump labels etc.

So probably we need a raw_patch_instruction() which does just the
patching (what patch_instruction() does now), and can be used early in
boot. And then patch_instruction() would have the rw/ro change in it, so
that all users of it are OK.

eg ~=:

int raw_patch_instruction(unsigned int *addr, unsigned int instr)
{
	...
}

int patch_instruction(unsigned int *addr, unsigned int instr)
{
	int err;

	set_kernel_text_rw((unsigned long)addr);
	err = raw_patch_instruction(addr, instr);
	set_kernel_text_ro((unsigned long)addr);

	return err;
}

cheers
On 22/04/2017 at 08:08, Michael Ellerman wrote:
> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> writes:
>> Excerpts from Christophe Leroy's message of April 21, 2017 18:32:
>>> [snip]
>>>  	/* replace the text with the new text */
>>> -	if (patch_instruction((unsigned int *)ip, new))
>>> -		return -EPERM;
>>> +	set_kernel_text_rw(ip);
>>> +	err = patch_instruction((unsigned int *)ip, new);
>>> +	set_kernel_text_ro(ip);
>>
>> Is there a reason to not put those inside patch_instruction()?
>
> Yes and no.
>
> patch_instruction() is called quite early from apply_feature_fixups(), I
> haven't looked closely but I suspect the set_kernel_text_rx() routines
> won't work that early.
>
> But on the other hand patch_instruction() is used by things other than
> ftrace, like jump labels, so we probably want the rw/ro setting in there
> so that we don't have to go and fixup jump labels etc.
>
> So probably we need a raw_patch_instruction() which does just the
> patching (what patch_instruction() does now), and can be used early in
> boot. And then patch_instruction() would have the rw/ro change in it, so
> that all users of it are OK.
>
> eg ~=:
>
> int raw_patch_instruction(unsigned int *addr, unsigned int instr)
> {
> 	...
> }
>
> int patch_instruction(unsigned int *addr, unsigned int instr)
> {
> 	int err;
>
> 	set_kernel_text_rw((unsigned long)addr);
> 	err = raw_patch_instruction(addr, instr);
> 	set_kernel_text_ro((unsigned long)addr);
>
> 	return err;
> }

Shouldn't we then also have some kind of protection against parallel use
of patch_instruction(), like a spin_lock_irqsave(), or is it guaranteed
not to happen for other reasons?

Otherwise, we might end up with one instance setting the kernel text
back to RO while the other one has just put it RW and is about to patch
the instruction.

Christophe
christophe leroy <christophe.leroy@c-s.fr> writes:
> On 22/04/2017 at 08:08, Michael Ellerman wrote:
>> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> writes:
>>> Excerpts from Christophe Leroy's message of April 21, 2017 18:32:
>>>> [snip]
>>>>  	/* replace the text with the new text */
>>>> -	if (patch_instruction((unsigned int *)ip, new))
>>>> -		return -EPERM;
>>>> +	set_kernel_text_rw(ip);
>>>> +	err = patch_instruction((unsigned int *)ip, new);
>>>> +	set_kernel_text_ro(ip);
>>>
>>> Is there a reason to not put those inside patch_instruction()?
>>
>> Yes and no.
>>
>> patch_instruction() is called quite early from apply_feature_fixups(), I
>> haven't looked closely but I suspect the set_kernel_text_rx() routines
>> won't work that early.
>>
>> But on the other hand patch_instruction() is used by things other than
>> ftrace, like jump labels, so we probably want the rw/ro setting in there
>> so that we don't have to go and fixup jump labels etc.
>>
>> So probably we need a raw_patch_instruction() which does just the
>> patching (what patch_instruction() does now), and can be used early in
>> boot. And then patch_instruction() would have the rw/ro change in it, so
>> that all users of it are OK.
>>
>> eg ~=:
>>
>> int raw_patch_instruction(unsigned int *addr, unsigned int instr)
>> {
>> 	...
>> }
>>
>> int patch_instruction(unsigned int *addr, unsigned int instr)
>> {
>> 	int err;
>>
>> 	set_kernel_text_rw((unsigned long)addr);
>> 	err = raw_patch_instruction(addr, instr);
>> 	set_kernel_text_ro((unsigned long)addr);
>>
>> 	return err;
>> }
>
> Shouldn't we then also have some kind of protection against parallel use
> of patch_instruction(), like a spin_lock_irqsave(), or is it guaranteed
> not to happen for other reasons?
>
> Otherwise, we might end up with one instance setting the kernel text
> back to RO while the other one has just put it RW and is about to patch
> the instruction.

Yes it'd need some locking for sure.

"Locking left as an exercise for the reader." ;)

cheers
On 23/04/2017 at 12:26, Michael Ellerman wrote:
> christophe leroy <christophe.leroy@c-s.fr> writes:
>> On 22/04/2017 at 08:08, Michael Ellerman wrote:
>>> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> writes:
>>>> Excerpts from Christophe Leroy's message of April 21, 2017 18:32:
>>>>> [snip]
>>>>>  	/* replace the text with the new text */
>>>>> -	if (patch_instruction((unsigned int *)ip, new))
>>>>> -		return -EPERM;
>>>>> +	set_kernel_text_rw(ip);
>>>>> +	err = patch_instruction((unsigned int *)ip, new);
>>>>> +	set_kernel_text_ro(ip);
>>>>
>>>> Is there a reason to not put those inside patch_instruction()?
>>>
>>> Yes and no.
>>>
>>> patch_instruction() is called quite early from apply_feature_fixups(), I
>>> haven't looked closely but I suspect the set_kernel_text_rx() routines
>>> won't work that early.
>>>
>>> But on the other hand patch_instruction() is used by things other than
>>> ftrace, like jump labels, so we probably want the rw/ro setting in there
>>> so that we don't have to go and fixup jump labels etc.
>>>
>>> So probably we need a raw_patch_instruction() which does just the
>>> patching (what patch_instruction() does now), and can be used early in
>>> boot. And then patch_instruction() would have the rw/ro change in it, so
>>> that all users of it are OK.
>>>
>>> eg ~=:
>>>
>>> int raw_patch_instruction(unsigned int *addr, unsigned int instr)
>>> {
>>> 	...
>>> }
>>>
>>> int patch_instruction(unsigned int *addr, unsigned int instr)
>>> {
>>> 	int err;
>>>
>>> 	set_kernel_text_rw((unsigned long)addr);
>>> 	err = raw_patch_instruction(addr, instr);
>>> 	set_kernel_text_ro((unsigned long)addr);
>>>
>>> 	return err;
>>> }
>>
>> Shouldn't we then also have some kind of protection against parallel use
>> of patch_instruction(), like a spin_lock_irqsave(), or is it guaranteed
>> not to happen for other reasons?
>>
>> Otherwise, we might end up with one instance setting the kernel text
>> back to RO while the other one has just put it RW and is about to patch
>> the instruction.
>
> Yes it'd need some locking for sure.
>
> "Locking left as an exercise for the reader." ;)
>
> cheers

Not so easy indeed, as patch_instruction() is called from many higher
level functions like patch_branch(), which are themselves called from
other functions like do_features_fixup(), which are called during init
but also when loading a module for instance. It is therefore not easy to
implement it via a raw_patch_instruction() as proposed.

So I took another approach, taken from x86: a static bool tells whether
the kernel text has been put in RO yet or not. Until then,
set_kernel_text_ro/rw() return without doing anything.

As for the locking, I put a spin_lock_irqsave() as I was not sure
whether patch_instruction() can be called during interrupts or not.

Christophe
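The scheme Christophe describes above (a static flag that keeps the helpers as no-ops until mark_rodata_ro() has run, plus a lock serialising the RW/patch/RO sequence) can be sketched as a userspace analogue. This is purely illustrative, not the actual kernel code: a pthread mutex stands in for spin_lock_irqsave(), plain booleans stand in for the real page-table state, and the function names merely mirror the kernel ones.

```c
#include <pthread.h>
#include <stdbool.h>

/* Until mark_rodata_ro() has run, the text is still writable and the
 * set_kernel_text_rw/ro() helpers are no-ops, which keeps early-boot
 * callers (e.g. feature fixups) safe without special-casing them. */
static bool kernel_text_is_ro = false;
static bool text_writable = true;   /* stands in for the page-table state */
static pthread_mutex_t patch_lock = PTHREAD_MUTEX_INITIALIZER;

static void set_kernel_text_rw(void)
{
	if (kernel_text_is_ro)
		text_writable = true;
}

static void set_kernel_text_ro(void)
{
	if (kernel_text_is_ro)
		text_writable = false;
}

static unsigned int text[1];        /* stands in for patched kernel text */

static int raw_patch_instruction(unsigned int *addr, unsigned int instr)
{
	if (!text_writable)
		return -1;          /* the kernel would fault here instead */
	*addr = instr;
	return 0;
}

/* The lock serialises the rw -> patch -> ro sequence, so one caller
 * cannot flip the text back to RO while another is mid-patch. */
static int patch_instruction(unsigned int *addr, unsigned int instr)
{
	int err;

	pthread_mutex_lock(&patch_lock);
	set_kernel_text_rw();
	err = raw_patch_instruction(addr, instr);
	set_kernel_text_ro();
	pthread_mutex_unlock(&patch_lock);
	return err;
}

static void mark_rodata_ro(void)
{
	kernel_text_is_ro = true;
	text_writable = false;
}
```

Note that patching works both before mark_rodata_ro() (the guards do nothing, the text is still RW) and after it (the guards temporarily open a write window), which is exactly the property the static-flag approach is after.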
diff --git a/arch/powerpc/Kconfig.debug b/arch/powerpc/Kconfig.debug
index c86df246339e..047f91564e52 100644
--- a/arch/powerpc/Kconfig.debug
+++ b/arch/powerpc/Kconfig.debug
@@ -369,4 +369,15 @@ config PPC_HTDUMP
 	def_bool y
 	depends on PPC_PTDUMP && PPC_BOOK3S
 
+config DEBUG_RODATA
+	bool "Write protect kernel read-only data structures"
+	depends on DEBUG_KERNEL && PPC32
+	default n
+	help
+	  Mark the kernel read-only data as write-protected in the pagetables,
+	  in order to catch accidental (and incorrect) writes to such const
+	  data. This option may have a performance impact because block
+	  mapping via BATs etc... will be disabled.
+	  If in doubt, say "N".
+
 endmenu
diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index dd01212935ac..142337f3b745 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -80,6 +80,14 @@ unsigned long vmalloc_to_phys(void *vmalloc_addr);
 void pgtable_cache_add(unsigned shift, void (*ctor)(void *));
 void pgtable_cache_init(void);
+
+#ifdef CONFIG_DEBUG_RODATA
+void set_kernel_text_rw(unsigned long addr);
+void set_kernel_text_ro(unsigned long addr);
+#else
+static inline void set_kernel_text_rw(unsigned long addr) {}
+static inline void set_kernel_text_ro(unsigned long addr) {}
+#endif
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_POWERPC_PGTABLE_H */
diff --git a/arch/powerpc/kernel/ftrace.c b/arch/powerpc/kernel/ftrace.c
index 32509de6ce4c..06d2ac53f471 100644
--- a/arch/powerpc/kernel/ftrace.c
+++ b/arch/powerpc/kernel/ftrace.c
@@ -46,6 +46,7 @@ static int
 ftrace_modify_code(unsigned long ip, unsigned int old, unsigned int new)
 {
 	unsigned int replaced;
+	int err;
 
 	/*
 	 * Note:
@@ -67,10 +68,11 @@ ftrace_modify_code(unsigned long ip, unsigned int old, unsigned int new)
 	}
 
 	/* replace the text with the new text */
-	if (patch_instruction((unsigned int *)ip, new))
-		return -EPERM;
+	set_kernel_text_rw(ip);
+	err = patch_instruction((unsigned int *)ip, new);
+	set_kernel_text_ro(ip);
 
-	return 0;
+	return err ? -EPERM : 0;
 }
 
 /*
diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c
index 8a7c38b8d335..e39c812b97ca 100644
--- a/arch/powerpc/mm/init_32.c
+++ b/arch/powerpc/mm/init_32.c
@@ -109,7 +109,8 @@ void __init MMU_setup(void)
 	if (strstr(boot_command_line, "noltlbs")) {
 		__map_without_ltlbs = 1;
 	}
-	if (debug_pagealloc_enabled()) {
+	if (debug_pagealloc_enabled() ||
+	    IS_ENABLED(CONFIG_DEBUG_RODATA)) {
 		__map_without_bats = 1;
 		__map_without_ltlbs = 1;
 	}
diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index 31728f3cdd20..972effec1bb2 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -34,6 +34,7 @@
 #include <asm/fixmap.h>
 #include <asm/io.h>
 #include <asm/setup.h>
+#include <asm/sections.h>
 
 #include "mmu_decl.h"
 
@@ -375,6 +376,41 @@ void remap_init_ram(void)
 	change_page_attr(page, numpages, PAGE_KERNEL);
 }
 
+#ifdef CONFIG_DEBUG_RODATA
+void set_kernel_text_rw(unsigned long addr)
+{
+	if (core_kernel_text(addr))
+		change_page_attr(virt_to_page(addr), 1, PAGE_KERNEL_X);
+}
+
+void set_kernel_text_ro(unsigned long addr)
+{
+	if (core_kernel_text(addr))
+		change_page_attr(virt_to_page(addr), 1, PAGE_KERNEL_ROX);
+}
+
+void mark_rodata_ro(void)
+{
+	struct page *page;
+	unsigned long numpages;
+
+	page = virt_to_page(_stext);
+	numpages = PFN_UP((unsigned long)_etext) -
+		   PFN_DOWN((unsigned long)_stext);
+
+	change_page_attr(page, numpages, PAGE_KERNEL_ROX);
+	/*
+	 * mark .rodata as read only. Use __init_begin rather than __end_rodata
+	 * to cover NOTES and EXCEPTION_TABLE.
+	 */
+	page = virt_to_page(__start_rodata);
+	numpages = PFN_UP((unsigned long)__init_begin) -
+		   PFN_DOWN((unsigned long)__start_rodata);
+
+	change_page_attr(page, numpages, PAGE_KERNEL_RO);
+}
+#endif
+
 #ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
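A detail worth noting in mark_rodata_ro() is the PFN_UP/PFN_DOWN arithmetic: the end address is rounded up and the start address rounded down to page frame numbers, so a range that only partially covers its first or last page still counts those pages. A minimal standalone sketch of that computation (PAGE_SHIFT and the two macros are redefined locally here, assuming 4K pages, purely for illustration):

```c
#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

/* Local stand-ins for the kernel's rounding helpers. */
#define PFN_UP(x)	(((x) + PAGE_SIZE - 1) >> PAGE_SHIFT)
#define PFN_DOWN(x)	((x) >> PAGE_SHIFT)

/* Number of pages change_page_attr() must cover for [start, end). */
static unsigned long pages_spanned(unsigned long start, unsigned long end)
{
	return PFN_UP(end) - PFN_DOWN(start);
}
```

With this rounding, a range such as [0x0fff, 0x2000) spans two pages even though it is barely more than one page long, which is exactly what protecting the _stext.._etext region requires.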
This patch implements CONFIG_DEBUG_RODATA on PPC32.

As for CONFIG_DEBUG_PAGEALLOC, it deactivates BAT and LTLB mappings
in order to allow page protection setup at the level of each page.

As BAT/LTLB mappings are deactivated, there might be a performance
impact. For this reason, we keep it OFF by default.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
v2: For ftrace, only change the attributes of the page to be modified

 arch/powerpc/Kconfig.debug         | 11 +++++++++++
 arch/powerpc/include/asm/pgtable.h |  8 ++++++++
 arch/powerpc/kernel/ftrace.c       |  8 +++++---
 arch/powerpc/mm/init_32.c          |  3 ++-
 arch/powerpc/mm/pgtable_32.c       | 36 ++++++++++++++++++++++++++++++++++++
 5 files changed, 62 insertions(+), 4 deletions(-)