[RFC,00/10] implement KASLR for powerpc/fsl_booke/32

Message ID 20190717080621.40424-1-yanaijie@huawei.com (mailing list archive)

Message

Jason Yan July 17, 2019, 8:06 a.m. UTC
This series implements KASLR for powerpc/fsl_booke/32, as a security
feature that deters exploit attempts relying on knowledge of the location
of kernel internals.

Since CONFIG_RELOCATABLE is already supported, all we need to do is map
or copy the kernel to a suitable place and relocate it. Freescale Book-E
parts expect lowmem to be mapped by fixed TLB entries (TLB1). The TLB1
entries are not suitable for mapping the kernel directly into a
randomized region, so we choose to copy the kernel to a suitable place
and restart in order to relocate.

Entropy is derived from the banner and the timer base, which change on
every build and boot. This is not particularly strong on its own, so the
bootloader may additionally pass entropy via the /chosen/kaslr-seed node
in the device tree.
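
To illustrate the idea, here is a minimal sketch of how such a seed
could be mixed (the function and its inputs are hypothetical and only
stand in for the real logic in kaslr_booke.c):

    /* Illustrative only: hash the build banner, then mix in the
     * per-boot timer base and an optional seed from the device tree. */
    static unsigned long mix_kaslr_entropy(const char *banner,
                                           unsigned long timer_base,
                                           unsigned long dt_seed)
    {
            unsigned long hash = 5381;

            while (*banner)
                    hash = hash * 33 + *banner++;   /* djb2-style hash */

            hash ^= timer_base;     /* changes every boot */
            hash ^= dt_seed;        /* /chosen/kaslr-seed, if provided */
            return hash;
    }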

We use the first 512M of low memory to randomize the kernel image. This
memory is split into 64M zones. The lower 8 bits of the entropy select
the index of the 64M zone, and a 16K-aligned offset inside that zone is
then chosen as the place to put the kernel.

    KERNELBASE

        |-->   64M   <--|
        |               |
        +---------------+    +----------------+---------------+
        |               |....|    |kernel|    |               |
        +---------------+    +----------------+---------------+
        |                         |
        |----->   offset    <-----|

                              kimage_vaddr
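
In code form, the offset selection could look roughly like the sketch
below (a simplified illustration assuming 8 zones of 64M within the
first 512M and 16K alignment; all names are made up for this example):

    #define ZONE_SIZE       0x4000000UL     /* 64M        */
    #define NR_ZONES        8               /* 512M / 64M */
    #define KASLR_ALIGN     0x4000UL        /* 16K        */

    /* Illustrative only: pick a zone from the low 8 bits of the seed,
     * then a 16K-aligned offset that keeps the kernel inside the zone.
     * Assumes kernel_size < ZONE_SIZE. */
    static unsigned long pick_kernel_base(unsigned long seed,
                                          unsigned long kernel_size)
    {
            unsigned long zone = (seed & 0xff) % NR_ZONES;
            unsigned long slots = (ZONE_SIZE - kernel_size) / KASLR_ALIGN;
            unsigned long offset = ((seed >> 8) % slots) * KASLR_ALIGN;

            return zone * ZONE_SIZE + offset;
    }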

We also check for overlap with reserved areas such as the dtb, the
initrd and the crashkernel region. If no suitable area can be found,
KASLR is disabled and the kernel boots from its original location.
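
A candidate region is only accepted if it avoids those reserved ranges;
conceptually the test is a simple interval overlap check (again just a
sketch with hypothetical names, not the code from the series):

    /* Illustrative only: true if [s1, e1) and [s2, e2) intersect. */
    static bool regions_overlap(unsigned long s1, unsigned long e1,
                                unsigned long s2, unsigned long e2)
    {
            return s1 < e2 && s2 < e1;
    }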

Jason Yan (10):
  powerpc: unify definition of M_IF_NEEDED
  powerpc: move memstart_addr and kernstart_addr to init-common.c
  powerpc: introduce kimage_vaddr to store the kernel base
  powerpc/fsl_booke/32: introduce create_tlb_entry() helper
  powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper
  powerpc/fsl_booke/32: implement KASLR infrastructure
  powerpc/fsl_booke/32: randomize the kernel image offset
  powerpc/fsl_booke/kaslr: clear the original kernel if randomized
  powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter
  powerpc/fsl_booke/kaslr: dump out kernel offset information on panic

 arch/powerpc/Kconfig                          |  11 +
 arch/powerpc/include/asm/nohash/mmu-book3e.h  |  10 +
 arch/powerpc/include/asm/page.h               |   7 +
 arch/powerpc/kernel/Makefile                  |   1 +
 arch/powerpc/kernel/early_32.c                |   2 +-
 arch/powerpc/kernel/exceptions-64e.S          |  10 -
 arch/powerpc/kernel/fsl_booke_entry_mapping.S |  23 +-
 arch/powerpc/kernel/head_fsl_booke.S          |  61 ++-
 arch/powerpc/kernel/kaslr_booke.c             | 439 ++++++++++++++++++
 arch/powerpc/kernel/machine_kexec.c           |   1 +
 arch/powerpc/kernel/misc_64.S                 |   5 -
 arch/powerpc/kernel/setup-common.c            |  23 +
 arch/powerpc/mm/init-common.c                 |   7 +
 arch/powerpc/mm/init_32.c                     |   5 -
 arch/powerpc/mm/init_64.c                     |   5 -
 arch/powerpc/mm/mmu_decl.h                    |  10 +
 arch/powerpc/mm/nohash/fsl_booke.c            |   8 +-
 17 files changed, 580 insertions(+), 48 deletions(-)
 create mode 100644 arch/powerpc/kernel/kaslr_booke.c

Comments

Jason Yan July 25, 2019, 7:16 a.m. UTC | #1
Hi all, any comments?


Kees Cook July 25, 2019, 7:58 p.m. UTC | #2
On Thu, Jul 25, 2019 at 03:16:28PM +0800, Jason Yan wrote:
> Hi all, any comments?

I'm a fan of it, but I don't know ppc internals well enough to sanely
review the code. :) Some comments below on design...

> 
> 
> On 2019/7/17 16:06, Jason Yan wrote:
> > This series implements KASLR for powerpc/fsl_booke/32, as a security
> > feature that deters exploit attempts relying on knowledge of the location
> > of kernel internals.
> > 
> > Since CONFIG_RELOCATABLE is already supported, all we need to do is
> > map or copy the kernel to a suitable place and relocate it. Freescale
> > Book-E parts expect lowmem to be mapped by fixed TLB entries (TLB1).
> > The TLB1 entries are not suitable for mapping the kernel directly
> > into a randomized region, so we choose to copy the kernel to a
> > suitable place and restart in order to relocate.
> > 
> > Entropy is derived from the banner and the timer base, which change
> > on every build and boot. This is not particularly strong on its own,
> > so the bootloader may additionally pass entropy via the
> > /chosen/kaslr-seed node in the device tree.

Good: adding kaslr-seed is a good step here. Are there any x86-like
RDRAND or RDTSC to use? (Or maybe timer base here is similar to x86
RDTSC here?)

> > 
> > We use the first 512M of low memory to randomize the kernel image.
> > This memory is split into 64M zones. The lower 8 bits of the entropy
> > select the index of the 64M zone, and a 16K-aligned offset inside
> > that zone is then chosen as the place to put the kernel.

Does this 16K granularity have any page table performance impact? My
understanding was that x86 needed to have 2M granularity due to its page
table layouts.

Why the 64M zones instead of just 16K granularity across the entire low
512M?

> > 
> >      KERNELBASE
> > 
> >          |-->   64M   <--|
> >          |               |
> >          +---------------+    +----------------+---------------+
> >          |               |....|    |kernel|    |               |
> >          +---------------+    +----------------+---------------+
> >          |                         |
> >          |----->   offset    <-----|
> > 
> >                                kimage_vaddr
> > 
> > We also check for overlap with reserved areas such as the dtb, the
> > initrd and the crashkernel region. If no suitable area can be found,
> > KASLR is disabled and the kernel boots from its original location.
> > 
> > Jason Yan (10):
> >    powerpc: unify definition of M_IF_NEEDED
> >    powerpc: move memstart_addr and kernstart_addr to init-common.c
> >    powerpc: introduce kimage_vaddr to store the kernel base
> >    powerpc/fsl_booke/32: introduce create_tlb_entry() helper
> >    powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper
> >    powerpc/fsl_booke/32: implement KASLR infrastructure
> >    powerpc/fsl_booke/32: randomize the kernel image offset
> >    powerpc/fsl_booke/kaslr: clear the original kernel if randomized
> >    powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter
> >    powerpc/fsl_booke/kaslr: dump out kernel offset information on panic

Is there anything planned for other fixed-location things, like x86's
CONFIG_RANDOMIZE_MEMORY?
Diana Craciun July 26, 2019, 7:04 a.m. UTC | #3
Hi Jason,

I briefly tested this yesterday on a P4080 board and did not see any
issues. I do not have much expertise in KASLR, but I will take a look
over the code.

Regards,
Diana

Jason Yan July 26, 2019, 7:20 a.m. UTC | #4
On 2019/7/26 3:58, Kees Cook wrote:
> On Thu, Jul 25, 2019 at 03:16:28PM +0800, Jason Yan wrote:
>> Hi all, any comments?
> 
> I'm a fan of it, but I don't know ppc internals well enough to sanely
> review the code. :) Some comments below on design...
> 

Hi Kees, Thanks for your comments.

>>
>>
>> On 2019/7/17 16:06, Jason Yan wrote:
>>> This series implements KASLR for powerpc/fsl_booke/32, as a security
>>> feature that deters exploit attempts relying on knowledge of the location
>>> of kernel internals.
>>>
>>> Since CONFIG_RELOCATABLE is already supported, all we need to do is
>>> map or copy the kernel to a suitable place and relocate it. Freescale
>>> Book-E parts expect lowmem to be mapped by fixed TLB entries (TLB1).
>>> The TLB1 entries are not suitable for mapping the kernel directly
>>> into a randomized region, so we choose to copy the kernel to a
>>> suitable place and restart in order to relocate.
>>>
>>> Entropy is derived from the banner and the timer base, which change
>>> on every build and boot. This is not particularly strong on its own,
>>> so the bootloader may additionally pass entropy via the
>>> /chosen/kaslr-seed node in the device tree.
> 
> Good: adding kaslr-seed is a good step here. Are there any x86-like
> RDRAND or RDTSC to use? (Or maybe timer base here is similar to x86
> RDTSC here?)
> 

Yes, time base is similar to RDTSC here.

>>>
>>> We use the first 512M of low memory to randomize the kernel image.
>>> This memory is split into 64M zones. The lower 8 bits of the entropy
>>> select the index of the 64M zone, and a 16K-aligned offset inside
>>> that zone is then chosen as the place to put the kernel.
> 
> Does this 16K granularity have any page table performance impact? My
> understanding was that x86 needed to have 2M granularity due to its page
> table layouts.
> 

The fsl_booke TLB1 covers the whole of low memory. AFAIK, there is no
page table performance impact, but if anyone knows of any regressions,
please let me know.

> Why the 64M zones instead of just 16K granularity across the entire low
> 512M?
> 

The boot code only maps one 64M zone at early start. If the kernel
crossed two 64M zones, we would need to map both of them. Keeping the
kernel within one 64M zone saves a lot of complex code.

>>>
>>>       KERNELBASE
>>>
>>>           |-->   64M   <--|
>>>           |               |
>>>           +---------------+    +----------------+---------------+
>>>           |               |....|    |kernel|    |               |
>>>           +---------------+    +----------------+---------------+
>>>           |                         |
>>>           |----->   offset    <-----|
>>>
>>>                                 kimage_vaddr
>>>
>>> We also check for overlap with reserved areas such as the dtb, the
>>> initrd and the crashkernel region. If no suitable area can be found,
>>> KASLR is disabled and the kernel boots from its original location.
>>>
>>> Jason Yan (10):
>>>     powerpc: unify definition of M_IF_NEEDED
>>>     powerpc: move memstart_addr and kernstart_addr to init-common.c
>>>     powerpc: introduce kimage_vaddr to store the kernel base
>>>     powerpc/fsl_booke/32: introduce create_tlb_entry() helper
>>>     powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper
>>>     powerpc/fsl_booke/32: implement KASLR infrastructure
>>>     powerpc/fsl_booke/32: randomize the kernel image offset
>>>     powerpc/fsl_booke/kaslr: clear the original kernel if randomized
>>>     powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter
>>>     powerpc/fsl_booke/kaslr: dump out kernel offset information on panic
> 
> Is there anything planned for other fixed-location things, like x86's
> CONFIG_RANDOMIZE_MEMORY?
> 

Yes, if this feature can be accepted, I will start to work on powerpc64
KASLR and other things like CONFIG_RANDOMIZE_MEMORY.
Jason Yan July 26, 2019, 7:26 a.m. UTC | #5
On 2019/7/26 15:04, Diana Madalina Craciun wrote:
> Hi Jason,
> 
> I have briefly tested yesterday on a P4080 board and did not see any
> issues. I do not have much expertise on KASLR, but I will take a look
> over the code.
> 

Hi Diana, thanks. Looking forward to your suggestions.

Kees Cook July 26, 2019, 4:15 p.m. UTC | #6
On Fri, Jul 26, 2019 at 03:20:26PM +0800, Jason Yan wrote:
> The boot code only maps one 64M zone at early start. If the kernel
> crossed two 64M zones, we would need to map both of them. Keeping the
> kernel within one 64M zone saves a lot of complex code.

Ah-ha. Gotcha. Thanks for the clarification.

> Yes, if this feature can be accepted, I will start to work on powerpc64
> KASLR and other things like CONFIG_RANDOMIZE_MEMORY.

Awesome. :)
Diana Craciun July 29, 2019, 2:30 p.m. UTC | #7
Reviewed-by: Diana Craciun <diana.craciun@nxp.com>
Tested-by: Diana Craciun <diana.craciun@nxp.com>

