
[v2] powerpc: make stack walking KASAN-safe

Message ID 20210614120907.1952321-1-dja@axtens.net (mailing list archive)
State Accepted
Series [v2] powerpc: make stack walking KASAN-safe

Checks

Context Check Description
snowpatch_ozlabs/apply_patch success Successfully applied on branch powerpc/merge (c53db722ec7ab3ebf29ecf61e922820f31e5284b)
snowpatch_ozlabs/build-ppc64le success Build succeeded
snowpatch_ozlabs/build-ppc64be success Build succeeded
snowpatch_ozlabs/build-ppc64e success Build succeeded
snowpatch_ozlabs/build-pmac32 success Build succeeded
snowpatch_ozlabs/checkpatch success total: 0 errors, 0 warnings, 0 checks, 39 lines checked
snowpatch_ozlabs/needsstable success Patch has no Fixes tags

Commit Message

Daniel Axtens June 14, 2021, 12:09 p.m. UTC
Make our stack-walking code KASAN-safe by using __no_sanitize_address.
Generic code, arm64, s390 and x86 all make these accesses unchecked for
the same reason: when unwinding a stack, we might touch memory that KASAN
has marked as out-of-bounds. During ppc64 KASAN development, I sometimes
hit this when checking for an exception frame, because that check reads
an arbitrary offset into the stack frame.

See commit 20955746320e ("s390/kasan: avoid false positives during stack
unwind"), commit bcaf669b4bdb ("arm64: disable kasan when accessing
frame->fp in unwind_frame"), commit 91e08ab0c851 ("x86/dumpstack:
Prevent KASAN false positive warnings") and commit 6e22c8366416
("tracing, kasan: Silence Kasan warning in check_stack of stack_tracer").

Cc: Naveen N. Rao <naveen.n.rao@linux.ibm.com>
Signed-off-by: Daniel Axtens <dja@axtens.net>

---

v2: Use __no_sanitize_address, thanks Naveen
---
 arch/powerpc/kernel/process.c    | 5 +++--
 arch/powerpc/kernel/stacktrace.c | 8 ++++----
 arch/powerpc/perf/callchain.c    | 2 +-
 3 files changed, 8 insertions(+), 7 deletions(-)
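
For readers skimming the diff, the hazard the commit message describes looks
roughly like the sketch below. This is an illustrative fragment, not the
kernel's actual unwinder: FRAME_MARKER_OFFSET and REGS_MARKER are made-up
stand-ins for the real powerpc stack-frame layout constants.

#include <linux/kernel.h>	/* pr_info(), __no_sanitize_address */

#define FRAME_MARKER_OFFSET	12		/* longs into the frame (illustrative) */
#define REGS_MARKER		0x72656773UL	/* "regs" (illustrative) */

/* Probe a fixed offset into what we believe is a stack frame. */
static void __no_sanitize_address check_for_exception_frame(unsigned long sp)
{
	unsigned long *stack = (unsigned long *)sp;

	/*
	 * If the real frame is smaller than expected, this load lands in
	 * memory KASAN has poisoned and would trigger a false positive;
	 * __no_sanitize_address turns instrumentation off for just this
	 * function so the speculative read goes unchecked.
	 */
	if (stack[FRAME_MARKER_OFFSET] == REGS_MARKER)
		pr_info("exception frame at %lx\n", sp);
}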

Comments

Michael Ellerman June 18, 2021, 3:51 a.m. UTC | #1
On Mon, 14 Jun 2021 22:09:07 +1000, Daniel Axtens wrote:
> Make our stack-walking code KASAN-safe by using __no_sanitize_address.
> Generic code, arm64, s390 and x86 all make these accesses unchecked for
> the same reason: when unwinding a stack, we might touch memory that KASAN
> has marked as out-of-bounds. During ppc64 KASAN development, I sometimes
> hit this when checking for an exception frame, because that check reads
> an arbitrary offset into the stack frame.
> 
> [...]

Applied to powerpc/next.

[1/1] powerpc: make stack walking KASAN-safe
      https://git.kernel.org/powerpc/c/b112fb913b5b5705db22efa90ec60f42518934af

cheers

Patch

diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 89e34aa273e2..3464064a0b8b 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -2121,8 +2121,9 @@ unsigned long get_wchan(struct task_struct *p)
 
 static int kstack_depth_to_print = CONFIG_PRINT_STACK_DEPTH;
 
-void show_stack(struct task_struct *tsk, unsigned long *stack,
-		const char *loglvl)
+void __no_sanitize_address show_stack(struct task_struct *tsk,
+				      unsigned long *stack,
+				      const char *loglvl)
 {
 	unsigned long sp, ip, lr, newsp;
 	int count = 0;
diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c
index 1deb1bf331dd..1961e6d5e33b 100644
--- a/arch/powerpc/kernel/stacktrace.c
+++ b/arch/powerpc/kernel/stacktrace.c
@@ -23,8 +23,8 @@
 
 #include <asm/paca.h>
 
-void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
-		     struct task_struct *task, struct pt_regs *regs)
+void __no_sanitize_address arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
+					   struct task_struct *task, struct pt_regs *regs)
 {
 	unsigned long sp;
 
@@ -61,8 +61,8 @@ void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
  *
  * If the task is not 'current', the caller *must* ensure the task is inactive.
  */
-int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
-			     void *cookie, struct task_struct *task)
+int __no_sanitize_address arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
+						   void *cookie, struct task_struct *task)
 {
 	unsigned long sp;
 	unsigned long newsp;
diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
index 6c028ee513c0..082f6d0308a4 100644
--- a/arch/powerpc/perf/callchain.c
+++ b/arch/powerpc/perf/callchain.c
@@ -40,7 +40,7 @@ static int valid_next_sp(unsigned long sp, unsigned long prev_sp)
 	return 0;
 }
 
-void
+void __no_sanitize_address
 perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
 {
 	unsigned long sp, next_sp;
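
For background, __no_sanitize_address is the kernel's per-function opt-out
from KASAN instrumentation. Simplified from include/linux/compiler_attributes.h
(the real header carries a few more compiler-version guards):

#if __has_attribute(__no_sanitize_address__)
# define __no_sanitize_address __attribute__((no_sanitize_address))
#else
# define __no_sanitize_address
#endif

Because it is a function attribute, it suppresses instrumentation only within
the annotated function; anything that function calls is still checked, which
is why each stack walker in the patch is annotated individually rather than
disabling KASAN for the whole file.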