[v4] powerpc: Avoid code patching freed init sections

Message ID 20180914011411.3184-1-mikey@neuling.org (mailing list archive)
State Accepted
Commit 51c3c62b58b357e8d35e4cc32f7b4ec907426fe3
Headers show
Series [v4] powerpc: Avoid code patching freed init sections | expand

Checks

Context Check Description
snowpatch_ozlabs/apply_patch success next/apply_patch Successfully applied
snowpatch_ozlabs/checkpatch warning Test checkpatch on branch next
snowpatch_ozlabs/build-ppc64le success Test build-ppc64le on branch next
snowpatch_ozlabs/build-ppc64be success Test build-ppc64be on branch next
snowpatch_ozlabs/build-ppc64e success Test build-ppc64e on branch next
snowpatch_ozlabs/build-ppc32 success Test build-ppc32 on branch next

Commit Message

Michael Neuling Sept. 14, 2018, 1:14 a.m. UTC
This stops us from doing code patching in init sections after they've
been freed.

In this chain:
  kvm_guest_init() ->
    kvm_use_magic_page() ->
      fault_in_pages_readable() ->
        __get_user() ->
          __get_user_nocheck() ->
            barrier_nospec();

barrier_nospec() is a code patching location, and kvm_guest_init() is
an init function. This whole chain gets inlined, so when we free the
init section (and hence kvm_guest_init()), this code goes away and
must no longer be patched.

We've seen this as userspace memory corruption when using a memory
checker while doing partition migration testing on PowerVM (this
starts the code patching post-migration via
/sys/kernel/mobility/migration). In theory, it could also happen when
using /sys/kernel/debug/powerpc/barrier_nospec.

cc: stable@vger.kernel.org # 4.13+
Signed-off-by: Michael Neuling <mikey@neuling.org>

---
For stable I've marked this as v4.13+ since that's when we refactored
code-patching.c, but it could go back even further than that. In
reality, though, I think we can only hit this since the first
Spectre/Meltdown changes.

v4:
 Feedback from Christophe Leroy:
   - init_mem_free -> init_mem_is_free
   - prlog %lx -> %px

v3:
 Add init_mem_free flag to avoid potential race.
 Feedback from Christophe Leroy:
   - use init_section_contains()
   - change order of init test for performance
   - use pr_debug()
   - remove blank line

v2:
  Print when we skip an address
---
 arch/powerpc/include/asm/setup.h | 1 +
 arch/powerpc/lib/code-patching.c | 6 ++++++
 arch/powerpc/mm/mem.c            | 2 ++
 3 files changed, 9 insertions(+)
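
(For context: init_section_contains() is the generic helper from
include/asm-generic/sections.h that the fix relies on. A minimal
sketch of the idea follows -- not the kernel's exact implementation,
which is built on memory_contains():)

	/* Sketch: does [virt, virt + size) lie entirely inside the
	 * kernel's init section? __init_begin/__init_end are the
	 * linker-script bounds of that section. */
	extern char __init_begin[], __init_end[];

	static bool init_section_contains_sketch(void *virt, size_t size)
	{
		return (char *)virt >= __init_begin &&
		       (char *)virt + size <= __init_end;
	}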

Comments

Nicholas Piggin Sept. 14, 2018, 4:22 a.m. UTC | #1
On Fri, 14 Sep 2018 11:14:11 +1000
Michael Neuling <mikey@neuling.org> wrote:

> This stops us from doing code patching in init sections after they've
> been freed.
[...]
> 
> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
> index 850f3b8f4d..6ae2777c22 100644
> --- a/arch/powerpc/lib/code-patching.c
> +++ b/arch/powerpc/lib/code-patching.c
> @@ -28,6 +28,12 @@ static int __patch_instruction(unsigned int *exec_addr, unsigned int instr,
>  {
>  	int err;
>  
> +	/* Make sure we aren't patching a freed init section */
> +	if (init_mem_is_free && init_section_contains(exec_addr, 4)) {
> +		pr_debug("Skipping init section patching addr: 0x%px\n", exec_addr);
> +		return 0;
> +	}

What we should do is a whitelist, make sure it's only patching the
sections we want it to.

That's a bigger job when you consider modules and things too though,
so this looks good for now. Thanks,

Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
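
(Editorial sketch of the whitelist idea Nick raises -- hypothetical,
not what was merged. As I understand it, kernel_text_address() covers
core kernel text and loaded modules, and stops reporting init text
once init memory is freed, so a guard built on it might look like:)

	/* Hypothetical whitelist-style guard: refuse to patch anything
	 * that is not currently live kernel or module text. */
	static bool patch_target_is_text(unsigned int *exec_addr)
	{
		return kernel_text_address((unsigned long)exec_addr);
	}
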
Christophe Leroy Sept. 14, 2018, 5:32 a.m. UTC | #2
On 14/09/2018 at 03:14, Michael Neuling wrote:
> This stops us from doing code patching in init sections after they've
> been freed.
[...]

Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
Christophe Leroy Sept. 18, 2018, 8:52 a.m. UTC | #3
On 14/09/2018 at 06:22, Nicholas Piggin wrote:
> On Fri, 14 Sep 2018 11:14:11 +1000
> Michael Neuling <mikey@neuling.org> wrote:
> 
>> This stops us from doing code patching in init sections after they've
>> been freed.
[...]
>> +	/* Make sure we aren't patching a freed init section */
>> +	if (init_mem_is_free && init_section_contains(exec_addr, 4)) {
>> +		pr_debug("Skipping init section patching addr: 0x%px\n", exec_addr);
>> +		return 0;
>> +	}
> 
> What we should do is a whitelist, make sure it's only patching the
> sections we want it to.
> 
> That's a bigger job when you consider modules and things too though,
> so this looks good for now. Thanks,

What about using kernel_text_address() for it then? It also handles
modules; is it more complicated than that?

Christophe
Michal Suchánek Sept. 18, 2018, 11:35 a.m. UTC | #4
On Tue, 18 Sep 2018 10:52:09 +0200
Christophe LEROY <christophe.leroy@c-s.fr> wrote:

> On 14/09/2018 at 06:22, Nicholas Piggin wrote:
> > On Fri, 14 Sep 2018 11:14:11 +1000
> > Michael Neuling <mikey@neuling.org> wrote:
> > 
> >> This stops us from doing code patching in init sections after
> >> they've been freed.
[...]
> > 
> > What we should do is a whitelist, make sure it's only patching the
> > sections we want it to.
> > 
> > That's a bigger job when you consider modules and things too though,
> > so this looks good for now. Thanks,
> 
> What about using kernel_text_address() for it then ? It also handles 
> modules, is it more complicated than that ?

Modules are patched separately, so they should not need to be excluded here.

There is a different problem with modules: when the mitigation type
changes the modules are not re-patched with the new settings.

Thanks

Michal
Michael Ellerman Sept. 21, 2018, 11:59 a.m. UTC | #5
On Fri, 2018-09-14 at 01:14:11 UTC, Michael Neuling wrote:
> This stops us from doing code patching in init sections after they've
> been freed.
[...]

Applied to powerpc fixes, thanks.

https://git.kernel.org/powerpc/c/51c3c62b58b357e8d35e4cc32f7b4ec907426fe3

cheers
Christophe Leroy Oct. 1, 2018, 11:25 a.m. UTC | #6
On 21/09/2018 at 13:59, Michael Ellerman wrote:
> On Fri, 2018-09-14 at 01:14:11 UTC, Michael Neuling wrote:
>> This stops us from doing code patching in init sections after they've
>> been freed.
[...]
> 
> Applied to powerpc fixes, thanks.
> 
> https://git.kernel.org/powerpc/c/51c3c62b58b357e8d35e4cc32f7b4ec907426fe3
> 

This patch breaks booting on my MPC83xx board (book3s/32) very early
(before the console is active), causing a restart. U-Boot reports a
checkstop reset after the restart.

Reverting this commit fixes the issue.

The following patch fixes the issue as well, but I think it is not the
best solution. I still think the test should be in patch_instruction()
instead of __patch_instruction(); see my comment on v2.

Christophe

diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index 6ae2777..6192fda 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -29,7 +29,7 @@ static int __patch_instruction(unsigned int *exec_addr, unsigned int instr,
 	int err;
 
 	/* Make sure we aren't patching a freed init section */
-	if (init_mem_is_free && init_section_contains(exec_addr, 4)) {
+	if (*PTRRELOC(&init_mem_is_free) && init_section_contains(exec_addr, 4)) {
 		pr_debug("Skipping init section patching addr: 0x%px\n", exec_addr);
 		return 0;
 	}


Christophe
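
(Why PTRRELOC() matters here -- an editorial sketch, assuming the
powerpc definition in asm/setup.h built on add_reloc_offset(): early
in boot, before the kernel runs at the address it was linked at, a
bare load of a global like init_mem_is_free reads the wrong memory.
PTRRELOC() adds the boot-time relocation offset first, roughly:)

	/* Sketch only: reloc_offset() is the powerpc helper returning
	 * the delta between the running and linked addresses; it is
	 * zero once the kernel runs at its linked address. */
	extern unsigned long reloc_offset(void);

	#define PTRRELOC_SKETCH(x) \
		((typeof(x))((unsigned long)(x) + reloc_offset()))

	/* So the guarded test reads the flag through the adjusted
	 * pointer: if (*PTRRELOC(&init_mem_is_free) && ...) */
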
Michael Neuling Oct. 1, 2018, 10:57 p.m. UTC | #7
On Mon, 2018-10-01 at 13:25 +0200, Christophe LEROY wrote:
> On 21/09/2018 at 13:59, Michael Ellerman wrote:
[...]
> 
> This patch breaks booting on my MPC83xx board (book3s/32) very early
> (before the console is active), causing a restart. U-Boot reports a
> checkstop reset after the restart.
> 
> Reverting this commit fixes the issue.
> 
> The following patch fixes the issue as well, but I think it is not the
> best solution. I still think the test should be in patch_instruction()
> instead of __patch_instruction(); see my comment on v2.

Arrh, sorry.

Can you write this up as a real patch with a Signed-off-by so mpe can take it?

Mikey


Patch

diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
index 1a951b0046..1fffbba8d6 100644
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -9,6 +9,7 @@ extern void ppc_printk_progress(char *s, unsigned short hex);
 
 extern unsigned int rtas_data;
 extern unsigned long long memory_limit;
+extern bool init_mem_is_free;
 extern unsigned long klimit;
 extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
 
diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index 850f3b8f4d..6ae2777c22 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -28,6 +28,12 @@ static int __patch_instruction(unsigned int *exec_addr, unsigned int instr,
 {
 	int err;
 
+	/* Make sure we aren't patching a freed init section */
+	if (init_mem_is_free && init_section_contains(exec_addr, 4)) {
+		pr_debug("Skipping init section patching addr: 0x%px\n", exec_addr);
+		return 0;
+	}
+
 	__put_user_size(instr, patch_addr, 4, err);
 	if (err)
 		return err;
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 5c8530d0c6..04ccb274a6 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -63,6 +63,7 @@
 #endif
 
 unsigned long long memory_limit;
+bool init_mem_is_free;
 
 #ifdef CONFIG_HIGHMEM
 pte_t *kmap_pte;
@@ -396,6 +397,7 @@ void free_initmem(void)
 {
 	ppc_md.progress = ppc_printk_progress;
 	mark_initmem_nx();
+	init_mem_is_free = true;
 	free_initmem_default(POISON_FREE_INITMEM);
 }
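
(A design note on the mem.c hunk, as an editorial aside: the flag is
set before free_initmem_default(), and the v3 changelog says the flag
was added to avoid a potential race. In practice the post-boot
patching paths named in the commit message -- partition migration and
the barrier_nospec debugfs file -- run long after free_initmem(). A
rough timeline of what the ordering buys, in comment form:)

	/*
	 * free_initmem()                 __patch_instruction()
	 * ----------------               ---------------------
	 * mark_initmem_nx();
	 * init_mem_is_free = true;
	 *                                if (init_mem_is_free &&
	 *                                    init_section_contains(...))
	 *                                        return 0;   // skip, no write
	 * free_initmem_default(...);
	 *
	 * A patcher that observes the flag skips freed (or about-to-be-
	 * freed) init text instead of writing through a stale mapping.
	 */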