diff mbox series

[2/2] powerpc/64: poison __per_cpu_offset to catch use-before-init

Message ID 20220711030653.150950-2-npiggin@gmail.com (mailing list archive)
State Changes Requested
Headers show
Series [1/2] powerpc/mce: mce_init use early_cpu_to_node | expand

Checks

Context                                       Check    Description
snowpatch_ozlabs/github-powerpc_selftests     success  Successfully ran 10 jobs.
snowpatch_ozlabs/github-powerpc_ppctests      success  Successfully ran 10 jobs.
snowpatch_ozlabs/github-powerpc_clang         success  Successfully ran 7 jobs.
snowpatch_ozlabs/github-powerpc_sparse        success  Successfully ran 4 jobs.
snowpatch_ozlabs/github-powerpc_kernel_qemu   success  Successfully ran 23 jobs.

Commit Message

Nicholas Piggin July 11, 2022, 3:06 a.m. UTC
If the boot CPU tries to access per-cpu data of other CPUs before
per cpu areas are set up, it will unexpectedly use offset 0.

Try to catch such accesses by poisoning the __per_cpu_offset array.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/percpu.h | 1 +
 arch/powerpc/kernel/paca.c        | 2 +-
 arch/powerpc/kernel/setup_64.c    | 2 +-
 3 files changed, 3 insertions(+), 2 deletions(-)
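
For context, a minimal sketch (not part of the patch) of how a cross-CPU
per-cpu access resolves through __per_cpu_offset with asm-generic style
accessors; example_per_cpu_ptr() is a hypothetical name used only for
illustration:

/*
 * Hypothetical illustration: a cross-CPU per-cpu access adds the target
 * CPU's entry in __per_cpu_offset[] to the per-cpu variable's address.
 */
#define example_per_cpu_ptr(ptr, cpu) \
	((typeof(ptr))((unsigned long)(ptr) + __per_cpu_offset[(cpu)]))

/*
 * Before setup_per_cpu_areas() runs, a poisoned entry makes this compute
 * an address around 0xfeeeeeeeeeeeeeee, which should fault (or checkstop
 * on bare metal) rather than silently aliasing the boot CPU's data via
 * offset 0.
 */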

Comments

Michael Ellerman Aug. 1, 2022, 12:02 p.m. UTC | #1
Nicholas Piggin <npiggin@gmail.com> writes:
> If the boot CPU tries to access per-cpu data of other CPUs before
> per cpu areas are set up, it will unexpectedly use offset 0.
>
> Try to catch such accesses by poisoning the __per_cpu_offset array.

I wasn't sure about this.

On bare metal it's just an instant checkstop, which is very user-hostile.

I worry it's going to cause crashes for folks running unusual
configurations or options, like booting with page_poison=1 did a while
back.

Can we put it behind a debug option? Maybe CONFIG_DEBUG_VM?

cheers
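
A minimal sketch of the kind of gating being asked about here, assuming the
idea is to only poison under a debug Kconfig option such as CONFIG_DEBUG_VM
(hypothetical, not something posted in this thread):

#ifdef CONFIG_DEBUG_VM
#define PER_CPU_OFFSET_POISON	0xfeeeeeeeeeeeeeeeULL
#else
#define PER_CPU_OFFSET_POISON	0ULL	/* pre-patch behaviour: offset 0 */
#endif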

Patch

diff --git a/arch/powerpc/include/asm/percpu.h b/arch/powerpc/include/asm/percpu.h
index 8e5b7d0b851c..6ca1a9fc5725 100644
--- a/arch/powerpc/include/asm/percpu.h
+++ b/arch/powerpc/include/asm/percpu.h
@@ -7,6 +7,7 @@ 
  * Same as asm-generic/percpu.h, except that we store the per cpu offset
  * in the paca. Based on the x86-64 implementation.
  */
+#define PER_CPU_OFFSET_POISON 0xfeeeeeeeeeeeeeeeULL
 
 #ifdef CONFIG_SMP
 
diff --git a/arch/powerpc/kernel/paca.c b/arch/powerpc/kernel/paca.c
index ba593fd60124..914d27c8b84a 100644
--- a/arch/powerpc/kernel/paca.c
+++ b/arch/powerpc/kernel/paca.c
@@ -223,7 +223,7 @@  void __init initialise_paca(struct paca_struct *new_paca, int cpu)
 	new_paca->hw_cpu_id = 0xffff;
 	new_paca->kexec_state = KEXEC_STATE_NONE;
 	new_paca->__current = &init_task;
-	new_paca->data_offset = 0xfeeeeeeeeeeeeeeeULL;
+	new_paca->data_offset = PER_CPU_OFFSET_POISON;
 #ifdef CONFIG_PPC_64S_HASH_MMU
 	new_paca->slb_shadow_ptr = NULL;
 #endif
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 5761f08dae95..60f0d1258526 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -812,7 +812,7 @@  static __init int pcpu_cpu_to_node(int cpu)
 	return early_cpu_to_node(cpu);
 }
 
-unsigned long __per_cpu_offset[NR_CPUS] __read_mostly;
+unsigned long __per_cpu_offset[NR_CPUS] __read_mostly = { [0 ... NR_CPUS-1] = PER_CPU_OFFSET_POISON };
 EXPORT_SYMBOL(__per_cpu_offset);
 
 void __init setup_per_cpu_areas(void)
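
For context, a rough paraphrase (not verbatim kernel code) of what
setup_per_cpu_areas() ends up doing once the per-cpu chunks exist, so the
poison is only ever visible to accesses that happen too early:

	unsigned long delta = (unsigned long)pcpu_base_addr -
			      (unsigned long)__per_cpu_start;
	int cpu;

	for_each_possible_cpu(cpu) {
		__per_cpu_offset[cpu] = delta + pcpu_unit_offsets[cpu];
		/* The paca copy backs the local-CPU accessors. */
		paca_ptrs[cpu]->data_offset = __per_cpu_offset[cpu];
	}
	/*
	 * Until this loop runs, every entry still holds
	 * PER_CPU_OFFSET_POISON, so an early cross-CPU per-cpu access
	 * computes a wild address and blows up instead of quietly using
	 * offset 0.
	 */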