[1/2] powerpc: kprobes: blacklist exception handlers

Submitted by Naveen N. Rao on April 19, 2017, 3:29 p.m.

Details

Message ID 1ec8a69d73b38e365347f0c4b6329407d8861afa.1492614257.git.naveen.n.rao@linux.vnet.ibm.com
State Accepted
Commit 7aa5b018bf36f733345f8814393b48011110b555

Commit Message

Naveen N. Rao April 19, 2017, 3:29 p.m.
Introduce __head_end to mark the end of the early fixed sections, and use
it to blacklist all exception handlers from kprobes.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/sections.h | 1 +
 arch/powerpc/kernel/kprobes.c       | 9 +++++++++
 arch/powerpc/kernel/vmlinux.lds.S   | 2 ++
 3 files changed, 12 insertions(+)

Comments

Michael Ellerman April 20, 2017, 6:33 a.m.
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> writes:

> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> index 71286dfd76a0..59159337a097 100644
> --- a/arch/powerpc/kernel/kprobes.c
> +++ b/arch/powerpc/kernel/kprobes.c
> @@ -112,6 +113,14 @@ kprobe_opcode_t *kprobe_lookup_name(const char *name, unsigned int offset)
>  	return addr;
>  }
>  
> +bool arch_within_kprobe_blacklist(unsigned long addr)
> +{
> +	return  (addr >= (unsigned long)__kprobes_text_start &&
> +		 addr < (unsigned long)__kprobes_text_end) ||
> +		(addr >= (unsigned long)_stext &&
> +		 addr < (unsigned long)__head_end);
> +}

This isn't quite right when the kernel is relocated.

_stext and __head_end will be updated to point to the relocated copy of
the kernel, eg:

# grep -e _stext /proc/kallsyms 
c000000002000000 T _stext

So you probably also want something like:

  if (_stext != PAGE_OFFSET &&
      addr >= PAGE_OFFSET &&
      addr < (PAGE_OFFSET + (__head_end - _stext)))
      return true;

But that's entirely untested :)

You can test the relocatable case by enabling CONFIG_RELOCATABLE_TEST.

cheers
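
Spelled out as a complete function, the suggested check might look like the
sketch below. It is purely illustrative (and, like the suggestion above,
untested); the casts are needed because the section symbols are declared as
char arrays:

	/*
	 * Illustrative sketch only. The extra check covers the copy of
	 * the early fixed sections that sits at PAGE_OFFSET when the
	 * kernel is running relocated.
	 */
	bool arch_within_kprobe_blacklist(unsigned long addr)
	{
		if (addr >= (unsigned long)__kprobes_text_start &&
		    addr < (unsigned long)__kprobes_text_end)
			return true;

		if (addr >= (unsigned long)_stext &&
		    addr < (unsigned long)__head_end)
			return true;

		/* Relocated: also blacklist the copied-down vectors. */
		if ((unsigned long)_stext != PAGE_OFFSET &&
		    addr >= PAGE_OFFSET &&
		    addr < PAGE_OFFSET + ((unsigned long)__head_end -
					  (unsigned long)_stext))
			return true;

		return false;
	}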
Naveen N. Rao April 20, 2017, 7:04 a.m.
Excerpts from Michael Ellerman's message of April 20, 2017 12:03:
> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> writes:
> 
>> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
>> index 71286dfd76a0..59159337a097 100644
>> --- a/arch/powerpc/kernel/kprobes.c
>> +++ b/arch/powerpc/kernel/kprobes.c
>> @@ -112,6 +113,14 @@ kprobe_opcode_t *kprobe_lookup_name(const char *name, unsigned int offset)
>>  	return addr;
>>  }
>>  
>> +bool arch_within_kprobe_blacklist(unsigned long addr)
>> +{
>> +	return  (addr >= (unsigned long)__kprobes_text_start &&
>> +		 addr < (unsigned long)__kprobes_text_end) ||
>> +		(addr >= (unsigned long)_stext &&
>> +		 addr < (unsigned long)__head_end);
>> +}
> 
> This isn't quite right when the kernel is relocated.
> 
> _stext and __head_end will be updated to point to the relocated copy of
> the kernel, eg:
> 
> # grep -e _stext /proc/kallsyms 
> c000000002000000 T _stext
> 
> So you probably also want something like:
> 
>   if (_stext != PAGE_OFFSET &&
>       addr >= PAGE_OFFSET &&
>       addr < (PAGE_OFFSET + (__head_end - _stext)))
>       return true;

Ah, so that's for ensuring we don't allow probing at the real exception 
vectors, which get copied down from _stext. In that case, we are covered 
by the test for kernel_text_address() in check_kprobe_address_safe(). We 
only allow probing from _stext to _etext.
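
For reference, the generic check Naveen refers to lives in
check_kprobe_address_safe() in kernel/kprobes.c; around this time it reads
roughly as follows (approximate excerpt, not part of this patch):

	/*
	 * The probe address must be a kernel text address and must not
	 * be blacklisted, so anything outside the (relocated)
	 * _stext.._etext range is rejected before arch code is consulted.
	 */
	if (!kernel_text_address((unsigned long) p->addr) ||
	    within_kprobe_blacklist((unsigned long) p->addr) ||
	    jump_label_text_reserved(p->addr, p->addr)) {
		ret = -EINVAL;
		goto out;
	}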

> 
> But that's entirely untested :)
> 
> You can test the relocatable case by enabling CONFIG_RELOCATABLE_TEST.

Done, thanks. This is working as expected (without the need for the 
changes above). I am not allowed to probe at the real exception vectors 
(and at PAGE_OFFSET), nor between _stext and __head_end.

Thanks,
Naveen
Michael Ellerman April 20, 2017, 12:30 p.m.
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> writes:

> Excerpts from Michael Ellerman's message of April 20, 2017 12:03:
>> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> writes:
>> 
>>> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
>>> index 71286dfd76a0..59159337a097 100644
>>> --- a/arch/powerpc/kernel/kprobes.c
>>> +++ b/arch/powerpc/kernel/kprobes.c
>>> @@ -112,6 +113,14 @@ kprobe_opcode_t *kprobe_lookup_name(const char *name, unsigned int offset)
>>>  	return addr;
>>>  }
>>>  
>>> +bool arch_within_kprobe_blacklist(unsigned long addr)
>>> +{
>>> +	return  (addr >= (unsigned long)__kprobes_text_start &&
>>> +		 addr < (unsigned long)__kprobes_text_end) ||
>>> +		(addr >= (unsigned long)_stext &&
>>> +		 addr < (unsigned long)__head_end);
>>> +}
>> 
>> This isn't quite right when the kernel is relocated.
>> 
>> _stext and __head_end will be updated to point to the relocated copy of
>> the kernel, eg:
>> 
>> # grep -e _stext /proc/kallsyms 
>> c000000002000000 T _stext
>> 
>> So you probably also want something like:
>> 
>>   if (_stext != PAGE_OFFSET &&
>>       addr >= PAGE_OFFSET &&
>>       addr < (PAGE_OFFSET + (__head_end - _stext)))
>>       return true;
>
> Ah, so that's for ensuring we don't allow probing at the real exception 
> vectors, which get copied down from _stext. In that case, we are covered 
> by the test for kernel_text_address() in check_kprobe_address_safe(). We 
> only allow probing from _stext to _etext.

OK good. I was thinking of is_kernel_addr(), which just checks that the
address is above PAGE_OFFSET, but of course it needs to be a text address
also.
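
For context, is_kernel_addr() is just an address-range test with no notion
of text vs. data; roughly (64-bit case, from arch/powerpc/include/asm/page.h):

	/* Approximate definition, quoted for illustration only:
	 * a pure bounds check against the start of the kernel's
	 * linear mapping.
	 */
	#define is_kernel_addr(x)	((x) >= PAGE_OFFSET)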

>> You can test the relocatable case by enabling CONFIG_RELOCATABLE_TEST.
>
> Done, thanks. This is working as expected (without the need for the 
> changes above). I am not allowed to probe at the real exception vectors 
> (and at PAGE_OFFSET), nor between _stext and __head_end.

Great.

cheers
Michael Ellerman April 23, 2017, 11:53 a.m.
On Wed, 2017-04-19 at 15:29:51 UTC, "Naveen N. Rao" wrote:
> Introduce __head_end to mark the end of the early fixed sections, and use
> it to blacklist all exception handlers from kprobes.
> 
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>

Applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/7aa5b018bf36f733345f8814393b48

cheers

Patch

diff --git a/arch/powerpc/include/asm/sections.h b/arch/powerpc/include/asm/sections.h
index 7dc006b58369..5d0c5b103302 100644
--- a/arch/powerpc/include/asm/sections.h
+++ b/arch/powerpc/include/asm/sections.h
@@ -10,6 +10,7 @@ 
 
 extern char __start_interrupts[];
 extern char __end_interrupts[];
+extern char __head_end[];
 
 extern char __prom_init_toc_start[];
 extern char __prom_init_toc_end[];
diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 71286dfd76a0..59159337a097 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -35,6 +35,7 @@ 
 #include <asm/code-patching.h>
 #include <asm/cacheflush.h>
 #include <asm/sstep.h>
+#include <asm/sections.h>
 #include <linux/uaccess.h>
 
 DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
@@ -112,6 +113,14 @@  kprobe_opcode_t *kprobe_lookup_name(const char *name, unsigned int offset)
 	return addr;
 }
 
+bool arch_within_kprobe_blacklist(unsigned long addr)
+{
+	return  (addr >= (unsigned long)__kprobes_text_start &&
+		 addr < (unsigned long)__kprobes_text_end) ||
+		(addr >= (unsigned long)_stext &&
+		 addr < (unsigned long)__head_end);
+}
+
 int arch_prepare_kprobe(struct kprobe *p)
 {
 	int ret = 0;
diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
index 7394b770ae1f..f6eee507e0b9 100644
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -77,6 +77,8 @@  SECTIONS
 #endif
 	} :kernel
 
+	__head_end = .;
+
 	/*
 	 * If the build dies here, it's likely code in head_64.S is referencing
 	 * labels it can't reach, and the linker inserting stubs without the