Message ID:   1392471397-2158-7-git-send-email-marc.zyngier@arm.com
State:        RFC
Delegated to: Albert ARIBAUD
Hi, Marc:

I am studying ARMv8's u-boot code with the FVP model.

In the do_nonsec_virt_switch() function in bootm.c, it calls the
smp_kick_all_cpus() function, which seems to set GICD_SGIR[24] = 1,
forwarding the interrupt to all CPU interfaces except that of the
processor that requested the interrupt.

So, who generated the interrupt (which would be forwarded to the other
cores)?

Best wishes,
Hi Liu,

On 30/05/14 03:25, TigerLiu@via-alliance.com wrote:
> Hi, Marc:
> I am studying ARMv8's u-boot code with the FVP model.
> In the do_nonsec_virt_switch() function in bootm.c, it calls the
> smp_kick_all_cpus() function, which seems to set GICD_SGIR[24] = 1,
> forwarding the interrupt to all CPU interfaces except that of the
> processor that requested the interrupt.
>
> So, who generated the interrupt (which would be forwarded to the
> other cores)?

I suggest you have a look at the GICv2 architecture document, section
4.3.15, which describes the GICD_SGIR register. Writing to this
register generates the interrupt (SGI number in GICD_SGIR[3:0]), and
GICD_SGIR[25:24] determines who gets it.

In short, if you're setting GICD_SGIR[24] to 1, you're sending SGI0 to
all CPUs but yourself. This seems to match the name of the function,
doesn't it?

	M.
Hi, Marc:

> In short, if you're setting GICD_SGIR[24] to 1, you're sending SGI0 to
> all CPUs but yourself. This seems to match the name of the function,
> doesn't it?

I described my understanding based on the 2014.07-RC2 u-boot source
code (for ARMv8 cores):

1. smp_kick_all_cpus() will send SGI0 to all other cores except the BSP.
   These non-BSP cores handle this SGI0 in gic_wait_for_interrupt(), and
   then switch to EL2/EL1. This code is implemented in lowlevel_init in
   arch/arm/cpu/armv8/start.S.
   Is my understanding right?

2. If running with ATF (ARM Trusted Firmware) + u-boot.bin:
   ATF has put the non-BSP cores into WFI state, so before jumping to
   u-boot's entry point, only the BSP is running. Can
   smp_kick_all_cpus() wake up these non-BSP cores?

Best wishes,
On Tue, Jun 03 2014 at 3:16:19 am BST, "TigerLiu@via-alliance.com"
<TigerLiu@via-alliance.com> wrote:
> Hi, Marc:
>> In short, if you're setting GICD_SGIR[24] to 1, you're sending SGI0 to
>> all CPUs but yourself. This seems to match the name of the function,
>> doesn't it?
>
> I described my understanding based on the 2014.07-RC2 u-boot source
> code (for ARMv8 cores):
>
> 1. smp_kick_all_cpus() will send SGI0 to all other cores except the BSP.
>    These non-BSP cores handle this SGI0 in gic_wait_for_interrupt(), and
>    then switch to EL2/EL1. This code is implemented in lowlevel_init in
>    arch/arm/cpu/armv8/start.S.
>    Is my understanding right?

I can't tell, I haven't read that bit of code. But that seems similar
to what ARMv7 used to do.

> 2. If running with ATF (ARM Trusted Firmware) + u-boot.bin:
>    ATF has put the non-BSP cores into WFI state, so before jumping to
>    u-boot's entry point, only the BSP is running. Can
>    smp_kick_all_cpus() wake up these non-BSP cores?

My understanding is that if you're using the Trusted Firmware, then you
have an implementation of PSCI, and that's what you must use to bring
the CPUs into u-boot. U-Boot will be running non-secure anyway, so it
requires the firmware to perform the S to NS transition on its behalf.

	M.
Hi, Marc:

> My understanding is that if you're using the Trusted Firmware, then you
> have an implementation of PSCI, and that's what you must use to bring
> the CPUs into u-boot. U-Boot will be running non-secure anyway, so it
> requires the firmware to perform the S to NS transition on its behalf.

Do you mean: waking up the non-BSP cores through the PSCI interface, and
then letting them switch to non-secure state through smp_kick_all_cpus()?

And another question:

1. How can we determine that the transition to non-secure succeeded?
   Is there any register that indicates the current state is non-secure?
   After transitioning to non-secure state, I tried to access the SCR
   register, but it caused a system hang.

Best wishes,
On Tue, Jun 03 2014 at 10:41:51 am BST, "TigerLiu@via-alliance.com"
<TigerLiu@via-alliance.com> wrote:
> Hi, Marc:
>> My understanding is that if you're using the Trusted Firmware, then you
>> have an implementation of PSCI, and that's what you must use to bring
>> the CPUs into u-boot. U-Boot will be running non-secure anyway, so it
>> requires the firmware to perform the S to NS transition on its behalf.
>
> Do you mean: waking up the non-BSP cores through the PSCI interface, and
> then letting them switch to non-secure state through smp_kick_all_cpus()?

No. You don't need smp_kick_all_cpus at all. Just call the PSCI
firmware to wake up the secondary CPUs, and they will be directly
placed in non-secure mode.

> And another question:
> 1. How can we determine that the transition to non-secure succeeded?
>    Is there any register that indicates the current state is non-secure?
>    After transitioning to non-secure state, I tried to access the SCR
>    register, but it caused a system hang.

No, there is no architectural way. But if you go from EL3 to EL2,
looking at the mode in PSTATE is pretty easy.

	M.
I am trying to bring up Xen using a u-boot that has this patch.
Unfortunately, as soon as the code tries to call _nonsec_init through
secure_ram_addr in the armv7_init_nonsec function in virt-v7.c, I get
an undefined instruction exception. I suspect CONFIG_ARMV7_SECURE_BASE
needs to be defined to a particular value. What should that be defined
to for omap5432?

Surya

On Saturday, February 15, 2014 at 5:36:30 AM UTC-8, Marc Zyngier wrote:
> The current non-sec switching code suffers from one major issue:
> it cannot run in secure RAM, as a large part of u-boot still needs
> to be run while we're switched to non-secure.
>
> This patch reworks the whole HYP/non-secure strategy by:
> - making sure the secure code is the *last* thing u-boot executes
>   before entering the payload
> - performing an exception return from secure mode directly into
>   the payload
> - allowing the code to be dynamically relocated to secure RAM
>   before switching to non-secure.
>
> This involves quite a bit of horrible code, specially as u-boot
> relocation is quite primitive.
> [full patch diff snipped -- identical to the patch below]
On 18/02/15 17:42, surya.satyavolu@sirabtech.com wrote:
> I am trying to bring up Xen using a u-boot that has this patch.
> Unfortunately, as soon as the code tries to call _nonsec_init through
> secure_ram_addr in the armv7_init_nonsec function in virt-v7.c, I get
> an undefined instruction exception. I suspect
> CONFIG_ARMV7_SECURE_BASE needs to be defined to a particular value.
> What should that be defined to for omap5432?

I'm afraid you're barking up the wrong tree. TI, in its infinite
wisdom, drops directly to *non-secure*. So there is nothing you can
actually do (maybe there's a way to go back to secure mode, but that's
certainly not documented).

The consequence of the above is that you cannot implement PSCI on
OMAP4/5. You'll have to add your own code to promote your CPUs to HYP
(there is a secure call for this).

Good luck,

	M.
diff --git a/arch/arm/cpu/armv7/nonsec_virt.S b/arch/arm/cpu/armv7/nonsec_virt.S
index b5c946f..2a43e3c 100644
--- a/arch/arm/cpu/armv7/nonsec_virt.S
+++ b/arch/arm/cpu/armv7/nonsec_virt.S
@@ -10,10 +10,13 @@
 #include <linux/linkage.h>
 #include <asm/gic.h>
 #include <asm/armv7.h>
+#include <asm/proc-armv/ptrace.h>
 
 .arch_extension sec
 .arch_extension virt
 
+	.pushsection ._secure.text, "ax"
+
 	.align	5
 /* the vector table for secure state and HYP mode */
 _monitor_vectors:
@@ -22,51 +25,86 @@ _monitor_vectors:
 	adr pc, _secure_monitor
 	.word 0
 	.word 0
-	adr pc, _hyp_trap
+	.word 0
 	.word 0
 	.word 0
 
+.macro is_cpu_virt_capable tmp
+	mrc	p15, 0, \tmp, c0, c1, 1		@ read ID_PFR1
+	and	\tmp, \tmp, #CPUID_ARM_VIRT_MASK	@ mask virtualization bits
+	cmp	\tmp, #(1 << CPUID_ARM_VIRT_SHIFT)
+.endm
+
 /*
  * secure monitor handler
  * U-boot calls this "software interrupt" in start.S
  * This is executed on a "smc" instruction, we use a "smc #0" to switch
  * to non-secure state.
- * We use only r0 and r1 here, due to constraints in the caller.
+ * r0, r1, r2: passed to the callee
+ * ip: target PC
  */
 _secure_monitor:
-	mrc	p15, 0, r1, c1, c1, 0		@ read SCR
-	bic	r1, r1, #0x4e			@ clear IRQ, FIQ, EA, nET bits
-	orr	r1, r1, #0x31			@ enable NS, AW, FW bits
+	mrc	p15, 0, r5, c1, c1, 0		@ read SCR
+	bic	r5, r5, #0x4e			@ clear IRQ, FIQ, EA, nET bits
+	orr	r5, r5, #0x31			@ enable NS, AW, FW bits
 
-	mrc	p15, 0, r0, c0, c1, 1		@ read ID_PFR1
-	and	r0, r0, #CPUID_ARM_VIRT_MASK	@ mask virtualization bits
-	cmp	r0, #(1 << CPUID_ARM_VIRT_SHIFT)
+	mov	r6, #SVC_MODE			@ default mode is SVC
+	is_cpu_virt_capable r4
 #ifdef CONFIG_ARMV7_VIRT
-	orreq	r1, r1, #0x100			@ allow HVC instruction
+	orreq	r5, r5, #0x100			@ allow HVC instruction
+	moveq	r6, #HYP_MODE			@ Enter the kernel as HYP
 #endif
 
-	mcr	p15, 0, r1, c1, c1, 0		@ write SCR (with NS bit set)
+	mcr	p15, 0, r5, c1, c1, 0		@ write SCR (with NS bit set)
 	isb
 
-#ifdef CONFIG_ARMV7_VIRT
-	mrceq	p15, 0, r0, c12, c0, 1		@ get MVBAR value
-	mcreq	p15, 4, r0, c12, c0, 0		@ write HVBAR
-#endif
 	bne	1f
 
 	@ Reset CNTVOFF to 0 before leaving monitor mode
-	mrc	p15, 0, r0, c0, c1, 1		@ read ID_PFR1
-	ands	r0, r0, #CPUID_ARM_GENTIMER_MASK	@ test arch timer bits
-	movne	r0, #0
-	mcrrne	p15, 4, r0, r0, c14		@ Reset CNTVOFF to zero
+	mrc	p15, 0, r4, c0, c1, 1		@ read ID_PFR1
+	ands	r4, r4, #CPUID_ARM_GENTIMER_MASK	@ test arch timer bits
+	movne	r4, #0
+	mcrrne	p15, 4, r4, r4, c14		@ Reset CNTVOFF to zero
 1:
-	movs	pc, lr				@ return to non-secure SVC
-
-_hyp_trap:
-	mrs	lr, elr_hyp	@ for older asm: .byte 0x00, 0xe3, 0x0e, 0xe1
-	mov	pc, lr				@ do no switch modes, but
-						@ return to caller
-
+	mov	lr, ip
+	mov	ip, #(F_BIT | I_BIT | A_BIT)	@ Set A, I and F
+	tst	lr, #1				@ Check for Thumb PC
+	orrne	ip, ip, #T_BIT			@ Set T if Thumb
+	orr	ip, ip, r6			@ Slot target mode in
+	msr	spsr_cxfs, ip			@ Set full SPSR
+	movs	pc, lr				@ ERET to non-secure
+
+ENTRY(_do_nonsec_entry)
+	mov	ip, r0
+	mov	r0, r1
+	mov	r1, r2
+	mov	r2, r3
+	smc	#0
+ENDPROC(_do_nonsec_entry)
+
+.macro get_cbar_addr addr
+#ifdef CONFIG_ARM_GIC_BASE_ADDRESS
+	ldr	\addr, =CONFIG_ARM_GIC_BASE_ADDRESS
+#else
+	mrc	p15, 4, \addr, c15, c0, 0	@ read CBAR
+	bfc	\addr, #0, #15			@ clear reserved bits
+#endif
+.endm
+
+.macro get_gicd_addr addr
+	get_cbar_addr	\addr
+	add	\addr, \addr, #GIC_DIST_OFFSET	@ GIC dist i/f offset
+.endm
+
+.macro get_gicc_addr addr, tmp
+	get_cbar_addr	\addr
+	is_cpu_virt_capable \tmp
+	movne	\tmp, #GIC_CPU_OFFSET_A9	@ GIC CPU offset for A9
+	moveq	\tmp, #GIC_CPU_OFFSET_A15	@ GIC CPU offset for A15/A7
+	add	\addr, \addr, \tmp
+.endm
+
+#ifndef CONFIG_ARMV7_PSCI
 /*
  * Secondary CPUs start here and call the code for the core specific parts
  * of the non-secure and HYP mode transition. The GIC distributor specific
@@ -74,31 +112,21 @@ _hyp_trap:
  * Then they go back to wfi and wait to be woken up by the kernel again.
  */
 ENTRY(_smp_pen)
-	mrs	r0, cpsr
-	orr	r0, r0, #0xc0
-	msr	cpsr, r0			@ disable interrupts
-	ldr	r1, =_start
-	mcr	p15, 0, r1, c12, c0, 0		@ set VBAR
+	cpsid	i
+	cpsid	f
 
 	bl	_nonsec_init
-	mov	r12, r0				@ save GICC address
-#ifdef CONFIG_ARMV7_VIRT
-	bl	_switch_to_hyp
-#endif
-
-	ldr	r1, [r12, #GICC_IAR]		@ acknowledge IPI
-	str	r1, [r12, #GICC_EOIR]		@ signal end of interrupt
 
 	adr	r0, _smp_pen			@ do not use this address again
 	b	smp_waitloop			@ wait for IPIs, board specific
 ENDPROC(_smp_pen)
+#endif
 
 /*
  * Switch a core to non-secure state.
  *
  *  1. initialize the GIC per-core interface
  *  2. allow coprocessor access in non-secure modes
- *  3. switch the cpu mode (by calling "smc #0")
  *
  * Called from smp_pen by secondary cores and directly by the BSP.
  * Do not assume that the stack is available and only use registers
@@ -108,38 +136,23 @@ ENDPROC(_smp_pen)
  * though, but we check this in C before calling this function.
  */
 ENTRY(_nonsec_init)
-#ifdef CONFIG_ARM_GIC_BASE_ADDRESS
-	ldr	r2, =CONFIG_ARM_GIC_BASE_ADDRESS
-#else
-	mrc	p15, 4, r2, c15, c0, 0		@ read CBAR
-	bfc	r2, #0, #15			@ clear reserved bits
-#endif
-	add	r3, r2, #GIC_DIST_OFFSET	@ GIC dist i/f offset
+	get_gicd_addr	r3
+
 	mvn	r1, #0				@ all bits to 1
 	str	r1, [r3, #GICD_IGROUPRn]	@ allow private interrupts
 
-	mrc	p15, 0, r0, c0, c0, 0		@ read MIDR
-	ldr	r1, =MIDR_PRIMARY_PART_MASK
-	and	r0, r0, r1			@ mask out variant and revision
+	get_gicc_addr	r3, r1
 
-	ldr	r1, =MIDR_CORTEX_A7_R0P0 & MIDR_PRIMARY_PART_MASK
-	cmp	r0, r1				@ check for Cortex-A7
-
-	ldr	r1, =MIDR_CORTEX_A15_R0P0 & MIDR_PRIMARY_PART_MASK
-	cmpne	r0, r1				@ check for Cortex-A15
-
-	movne	r1, #GIC_CPU_OFFSET_A9		@ GIC CPU offset for A9
-	moveq	r1, #GIC_CPU_OFFSET_A15		@ GIC CPU offset for A15/A7
-	add	r3, r2, r1			@ r3 = GIC CPU i/f addr
-
-	mov	r1, #1				@ set GICC_CTLR[enable]
+	mov	r1, #3				@ Enable both groups
 	str	r1, [r3, #GICC_CTLR]		@ and clear all other bits
 	mov	r1, #0xff
 	str	r1, [r3, #GICC_PMR]		@ set priority mask register
 
+	mrc	p15, 0, r0, c1, c1, 2
 	movw	r1, #0x3fff
-	movt	r1, #0x0006
-	mcr	p15, 0, r1, c1, c1, 2		@ NSACR = all copros to non-sec
+	movt	r1, #0x0004
+	orr	r0, r0, r1
+	mcr	p15, 0, r0, c1, c1, 2		@ NSACR = all copros to non-sec
 
 /* The CNTFRQ register of the generic timer needs to be
  * programmed in secure state. Some primary bootloaders / firmware
@@ -157,21 +170,9 @@ ENTRY(_nonsec_init)
 
 	adr	r1, _monitor_vectors
 	mcr	p15, 0, r1, c12, c0, 1		@ set MVBAR to secure vectors
-
-	mrc	p15, 0, ip, c12, c0, 0		@ save secure copy of VBAR
-
 	isb
-	smc	#0				@ call into MONITOR mode
-
-	mcr	p15, 0, ip, c12, c0, 0		@ write non-secure copy of VBAR
-
-	mov	r1, #1
-	str	r1, [r3, #GICC_CTLR]		@ enable non-secure CPU i/f
-	add	r2, r2, #GIC_DIST_OFFSET
-	str	r1, [r2, #GICD_CTLR]		@ allow private interrupts
 
 	mov	r0, r3				@ return GICC address
-
 	bx	lr
 ENDPROC(_nonsec_init)
 
@@ -183,18 +184,10 @@ ENTRY(smp_waitloop)
 	ldr	r1, [r1]
 	cmp	r0, r1			@ make sure we dont execute this code
 	beq	smp_waitloop		@ again (due to a spurious wakeup)
-	mov	pc, r1
+	mov	r0, r1
+	b	_do_nonsec_entry
 ENDPROC(smp_waitloop)
 .weak smp_waitloop
 #endif
 
-ENTRY(_switch_to_hyp)
-	mov	r0, lr
-	mov	r1, sp		@ save SVC copy of LR and SP
-	isb
-	hvc	#0		@ for older asm: .byte 0x70, 0x00, 0x40, 0xe1
-	mov	sp, r1
-	mov	lr, r0		@ restore SVC copy of LR and SP
-
-	bx	lr
-ENDPROC(_switch_to_hyp)
+	.popsection
diff --git a/arch/arm/cpu/armv7/virt-v7.c b/arch/arm/cpu/armv7/virt-v7.c
index 2cd604f..6500030 100644
--- a/arch/arm/cpu/armv7/virt-v7.c
+++ b/arch/arm/cpu/armv7/virt-v7.c
@@ -13,17 +13,10 @@
 #include <asm/armv7.h>
 #include <asm/gic.h>
 #include <asm/io.h>
+#include <asm/secure.h>
 
 unsigned long gic_dist_addr;
 
-static unsigned int read_cpsr(void)
-{
-	unsigned int reg;
-
-	asm volatile ("mrs %0, cpsr\n" : "=r" (reg));
-	return reg;
-}
-
 static unsigned int read_id_pfr1(void)
 {
 	unsigned int reg;
@@ -72,6 +65,18 @@ static unsigned long get_gicd_base_address(void)
 #endif
 }
 
+static void relocate_secure_section(void)
+{
+#ifdef CONFIG_ARMV7_SECURE_BASE
+	size_t sz = __secure_end - __secure_start;
+
+	memcpy((void *)CONFIG_ARMV7_SECURE_BASE, __secure_start, sz);
+	flush_dcache_range(CONFIG_ARMV7_SECURE_BASE,
+			   CONFIG_ARMV7_SECURE_BASE + sz + 1);
+	invalidate_icache_all();
+#endif
+}
+
 static void kick_secondary_cpus_gic(unsigned long gicdaddr)
 {
 	/* kick all CPUs (except this one) by writing to GICD_SGIR */
@@ -83,35 +88,7 @@ void __weak smp_kick_all_cpus(void)
 	kick_secondary_cpus_gic(gic_dist_addr);
 }
 
-int armv7_switch_hyp(void)
-{
-	unsigned int reg;
-
-	/* check whether we are in HYP mode already */
-	if ((read_cpsr() & 0x1f) == 0x1a) {
-		debug("CPU already in HYP mode\n");
-		return 0;
-	}
-
-	/* check whether the CPU supports the virtualization extensions */
-	reg = read_id_pfr1();
-	if ((reg & CPUID_ARM_VIRT_MASK) != 1 << CPUID_ARM_VIRT_SHIFT) {
-		printf("HYP mode: Virtualization extensions not implemented.\n");
-		return -1;
-	}
-
-	/* call the HYP switching code on this CPU also */
-	_switch_to_hyp();
-
-	if ((read_cpsr() & 0x1F) != 0x1a) {
-		printf("HYP mode: switch not successful.\n");
-		return -1;
-	}
-
-	return 0;
-}
-
-int armv7_switch_nonsec(void)
+int armv7_init_nonsec(void)
 {
 	unsigned int reg;
 	unsigned itlinesnr, i;
@@ -147,11 +124,13 @@ int armv7_switch_nonsec(void)
 	for (i = 1; i <= itlinesnr; i++)
 		writel((unsigned)-1, gic_dist_addr + GICD_IGROUPRn + 4 * i);
 
-	smp_set_core_boot_addr((unsigned long)_smp_pen, -1);
+#ifndef CONFIG_ARMV7_PSCI
+	smp_set_core_boot_addr((unsigned long)secure_ram_addr(_smp_pen), -1);
 	smp_kick_all_cpus();
+#endif
 
 	/* call the non-sec switching code on this CPU also */
-	_nonsec_init();
-
+	relocate_secure_section();
+	secure_ram_addr(_nonsec_init)();
 	return 0;
 }
diff --git a/arch/arm/include/asm/armv7.h b/arch/arm/include/asm/armv7.h
index 395444e..11476dd 100644
--- a/arch/arm/include/asm/armv7.h
+++ b/arch/arm/include/asm/armv7.h
@@ -78,13 +78,17 @@ void v7_outer_cache_inval_range(u32 start, u32 end);
 
 #if defined(CONFIG_ARMV7_NONSEC) || defined(CONFIG_ARMV7_VIRT)
 
-int armv7_switch_nonsec(void);
-int armv7_switch_hyp(void);
+int armv7_init_nonsec(void);
 
 /* defined in assembly file */
 unsigned int _nonsec_init(void);
+void _do_nonsec_entry(void *target_pc, unsigned long r0,
+		      unsigned long r1, unsigned long r2);
 void _smp_pen(void);
-void _switch_to_hyp(void);
+
+extern char __secure_start[];
+extern char __secure_end[];
+
 #endif /* CONFIG_ARMV7_NONSEC || CONFIG_ARMV7_VIRT */
 
 #endif /* ! __ASSEMBLY__ */
diff --git a/arch/arm/include/asm/secure.h b/arch/arm/include/asm/secure.h
new file mode 100644
index 0000000..effdb18
--- /dev/null
+++ b/arch/arm/include/asm/secure.h
@@ -0,0 +1,26 @@
+#ifndef __ASM_SECURE_H
+#define __ASM_SECURE_H
+
+#include <config.h>
+
+#ifdef CONFIG_ARMV7_SECURE_BASE
+/*
+ * Warning, horror ahead.
+ *
+ * The target code lives in our "secure ram", but u-boot doesn't know
+ * that, and has blindly added reloc_off to every relocation
+ * entry. Gahh. Do the opposite conversion. This hack also prevents
+ * GCC from generating code veeners, which u-boot doesn't relocate at
+ * all...
+ */
+#define secure_ram_addr(_fn) ({					\
+			DECLARE_GLOBAL_DATA_PTR;		\
+			void *__fn = _fn;			\
+			typeof(_fn) *__tmp = (__fn - gd->reloc_off); \
+			__tmp;					\
+		})
+#else
+#define secure_ram_addr(_fn)	(_fn)
+#endif
+
+#endif
diff --git a/arch/arm/lib/bootm.c b/arch/arm/lib/bootm.c
index 68554c8..ecc25f9 100644
--- a/arch/arm/lib/bootm.c
+++ b/arch/arm/lib/bootm.c
@@ -20,6 +20,7 @@
 #include <libfdt.h>
 #include <fdt_support.h>
 #include <asm/bootm.h>
+#include <asm/secure.h>
 #include <linux/compiler.h>
 
 #if defined(CONFIG_ARMV7_NONSEC) || defined(CONFIG_ARMV7_VIRT)
@@ -185,26 +186,16 @@ static void setup_end_tag(bd_t *bd)
 
 __weak void setup_board_tags(struct tag **in_params) {}
 
+#ifdef CONFIG_ARM64
 static void do_nonsec_virt_switch(void)
 {
-#if defined(CONFIG_ARMV7_NONSEC) || defined(CONFIG_ARMV7_VIRT)
-	if (armv7_switch_nonsec() == 0)
-#ifdef CONFIG_ARMV7_VIRT
-		if (armv7_switch_hyp() == 0)
-			debug("entered HYP mode\n");
-#else
-		debug("entered non-secure state\n");
-#endif
-#endif
-
-#ifdef CONFIG_ARM64
 	smp_kick_all_cpus();
 	armv8_switch_to_el2();
 #ifdef CONFIG_ARMV8_SWITCH_TO_EL1
 	armv8_switch_to_el1();
 #endif
-#endif
 }
+#endif
 
 /* Subcommand: PREP */
 static void boot_prep_linux(bootm_headers_t *images)
@@ -287,8 +278,13 @@ static void boot_jump_linux(bootm_headers_t *images, int flag)
 		r2 = gd->bd->bi_boot_params;
 
 	if (!fake) {
-		do_nonsec_virt_switch();
+#if defined(CONFIG_ARMV7_NONSEC) || defined(CONFIG_ARMV7_VIRT)
+		armv7_init_nonsec();
+		secure_ram_addr(_do_nonsec_entry)(kernel_entry,
+						  0, machid, r2);
+#else
 		kernel_entry(0, machid, r2);
+#endif
 	}
 #endif
 }
The current non-sec switching code suffers from one major issue:
it cannot run in secure RAM, as a large part of u-boot still needs
to be run while we're switched to non-secure.

This patch reworks the whole HYP/non-secure strategy by:
- making sure the secure code is the *last* thing u-boot executes
  before entering the payload
- performing an exception return from secure mode directly into
  the payload
- allowing the code to be dynamically relocated to secure RAM
  before switching to non-secure.

This involves quite a bit of horrible code, specially as u-boot
relocation is quite primitive.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/cpu/armv7/nonsec_virt.S | 161 +++++++++++++++++++--------------------
 arch/arm/cpu/armv7/virt-v7.c     |  59 +++++---------
 arch/arm/include/asm/armv7.h     |  10 ++-
 arch/arm/include/asm/secure.h    |  26 +++++++
 arch/arm/lib/bootm.c             |  22 +++---
 5 files changed, 138 insertions(+), 140 deletions(-)
 create mode 100644 arch/arm/include/asm/secure.h