
[v9,01/26] target-arm: extend async excp masking

Message ID 1415229793-3278-2-git-send-email-greg.bellows@linaro.org
State New

Commit Message

Greg Bellows Nov. 5, 2014, 11:22 p.m. UTC
This patch extends arm_excp_unmasked() to use lookup tables for determining
whether IRQ and FIQ exceptions are masked.  The lookup tables are based on the
ARMv8 and ARMv7 specification physical interrupt masking tables.

If EL3 is using AArch64, IRQ/FIQ masking is ignored in all exception levels
other than EL3 when SCR.{FIQ|IRQ} is set to 1 (routed to EL3).

Signed-off-by: Greg Bellows <greg.bellows@linaro.org>

---

v8 -> v9
- Undo the use of tables for exception masking and instead go with simplified
  logic based on the target EL lookup.
- Remove the masking tables

v7 -> v8
- Add IRQ and FIQ exception masking lookup tables.
- Rewrite patch to use lookup tables for determining whether an exception is
  masked or not.

v5 -> v6
- Globally change Aarch# to AArch#
- Fixed comment termination

v4 -> v5
- Merge with v4 patch 10
---
 target-arm/cpu.h | 79 +++++++++++++++++++++++++++++++++++++-------------------
 1 file changed, 53 insertions(+), 26 deletions(-)
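
As a rough sketch of the masking rule described in the commit message (the
helper name and parameters below are made up for illustration and are not
QEMU code; the real logic is in arm_excp_unmasked() in the patch): with an
AArch64 EL3, SCR_EL3.{FIQ,IRQ} == 1 routes the interrupt to EL3 and the
corresponding PSTATE.{F,I} bit no longer masks it below EL3.

#include <stdbool.h>

/* Hypothetical helper, for illustration only -- not QEMU code.  With an
 * AArch64 EL3, SCR_EL3.FIQ == 1 routes physical FIQs to EL3, so PSTATE.F
 * no longer masks them at EL0/EL1/EL2. */
static bool fiq_pstate_mask_ignored(bool el3_is_aa64, bool scr_fiq_set,
                                    unsigned int cur_el)
{
    return el3_is_aa64 && scr_fiq_set && cur_el < 3;
}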

Comments

Peter Maydell Nov. 5, 2014, 11:37 p.m. UTC | #1
On 5 November 2014 23:22, Greg Bellows <greg.bellows@linaro.org> wrote:
> This patch extends arm_excp_unmasked() to use lookup tables for determining
> whether IRQ and FIQ exceptions are masked.  The lookup tables are based on the
> ARMv8 and ARMv7 specification physical interrupt masking tables.
>
> If EL3 is using AArch64, IRQ/FIQ masking is ignored in all exception levels
> other than EL3 when SCR.{FIQ|IRQ} is set to 1 (routed to EL3).
>
> Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
>
> ---
>
> v8 -> v9
> - Undo the use of tables for exception masking and instead go with simplified
>   logic based on the target EL lookup.
> - Remove the masking tables
>
> v7 -> v8
> - Add IRQ and FIQ exception masking lookup tables.
> - Rewrite patch to use lookup tables for determining whether an exception is
>   masked or not.
>
> v5 -> v6
> - Globally change Aarch# to AArch#
> - Fixed comment termination
>
> v4 -> v5
> - Merge with v4 patch 10
> ---
>  target-arm/cpu.h | 79 +++++++++++++++++++++++++++++++++++++-------------------
>  1 file changed, 53 insertions(+), 26 deletions(-)
>
> diff --git a/target-arm/cpu.h b/target-arm/cpu.h
> index cb6ec5c..0ea8602 100644
> --- a/target-arm/cpu.h
> +++ b/target-arm/cpu.h
> @@ -1247,39 +1247,51 @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx)
>      CPUARMState *env = cs->env_ptr;
>      unsigned int cur_el = arm_current_el(env);
>      unsigned int target_el = arm_excp_target_el(cs, excp_idx);
> -    /* FIXME: Use actual secure state.  */
> -    bool secure = false;
> -    /* If in EL1/0, Physical IRQ routing to EL2 only happens from NS state.  */
> -    bool irq_can_hyp = !secure && cur_el < 2 && target_el == 2;
> -    /* ARMv7-M interrupt return works by loading a magic value
> -     * into the PC.  On real hardware the load causes the
> -     * return to occur.  The qemu implementation performs the
> -     * jump normally, then does the exception return when the
> -     * CPU tries to execute code at the magic address.
> -     * This will cause the magic PC value to be pushed to
> -     * the stack if an interrupt occurred at the wrong time.
> -     * We avoid this by disabling interrupts when
> -     * pc contains a magic address.

I did suggest you base this on the M profile patches;
you'll find this doesn't apply to current master, I think.

thanks
-- PMM
Greg Bellows Nov. 6, 2014, 1:29 a.m. UTC | #2
Yeah, I wanted to get out what I had after all the patches.  I am planning
to rebase tomorrow.

Greg
On Nov 5, 2014 5:37 PM, "Peter Maydell" <peter.maydell@linaro.org> wrote:

> I did suggest you base this on the M profile patches;
> you'll find this doesn't apply to current master, I think.
>
> thanks
> -- PMM
>

Patch

diff --git a/target-arm/cpu.h b/target-arm/cpu.h
index cb6ec5c..0ea8602 100644
--- a/target-arm/cpu.h
+++ b/target-arm/cpu.h
@@ -1247,39 +1247,51 @@  static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx)
     CPUARMState *env = cs->env_ptr;
     unsigned int cur_el = arm_current_el(env);
     unsigned int target_el = arm_excp_target_el(cs, excp_idx);
-    /* FIXME: Use actual secure state.  */
-    bool secure = false;
-    /* If in EL1/0, Physical IRQ routing to EL2 only happens from NS state.  */
-    bool irq_can_hyp = !secure && cur_el < 2 && target_el == 2;
-    /* ARMv7-M interrupt return works by loading a magic value
-     * into the PC.  On real hardware the load causes the
-     * return to occur.  The qemu implementation performs the
-     * jump normally, then does the exception return when the
-     * CPU tries to execute code at the magic address.
-     * This will cause the magic PC value to be pushed to
-     * the stack if an interrupt occurred at the wrong time.
-     * We avoid this by disabling interrupts when
-     * pc contains a magic address.
+    bool secure = arm_is_secure(env);
+    uint32_t scr;
+    uint32_t hcr;
+    bool pstate_unmasked;
+    int8_t unmasked = 0;
+    bool is_aa64 = arm_el_is_aa64(env, 3);
+
+    /* Don't take exceptions if they target a lower EL.
+     * This check should catch any exceptions that would not be taken but left
+     * pending.
      */
-    bool irq_unmasked = !(env->daif & PSTATE_I)
-                        && (!IS_M(env) || env->regs[15] < 0xfffffff0);
-
-    /* Don't take exceptions if they target a lower EL.  */
     if (cur_el > target_el) {
         return false;
     }
 
     switch (excp_idx) {
     case EXCP_FIQ:
-        if (irq_can_hyp && (env->cp15.hcr_el2 & HCR_FMO)) {
-            return true;
-        }
-        return !(env->daif & PSTATE_F);
+        /* If FIQs are routed to EL3 or EL2 then there are cases where we
+         * override the CPSR.F in determining if the exception is masked or
+         * not.  If neither of these is set then we fall back to the CPSR.F
+         * setting; otherwise we further assess the state below.
+         */
+        hcr = (env->cp15.hcr_el2 & HCR_FMO);
+        scr = (env->cp15.scr_el3 & SCR_FIQ);
+
+        /* When EL3 is 32-bit, the SCR.FW bit controls whether the CPSR.F bit
+         * masks FIQ interrupts when taken in non-secure state.  If SCR.FW is
+         * set then FIQs can be masked by CPSR.F when non-secure but only
+         * when FIQs are only routed to EL3.
+         */
+        scr &= is_aa64 || !((env->cp15.scr_el3 & SCR_FW) && !hcr);
+        pstate_unmasked = !(env->daif & PSTATE_F);
+        break;
+
     case EXCP_IRQ:
-        if (irq_can_hyp && (env->cp15.hcr_el2 & HCR_IMO)) {
-            return true;
-        }
-        return irq_unmasked;
+        /* When EL3 execution state is 32-bit, if HCR.IMO is set then we may
+         * override the CPSR.I masking when in non-secure state.  The SCR.IRQ
+         * setting has already been taken into consideration when setting the
+         * target EL, so it does not have a further effect here.
+         */
+        hcr = is_aa64 || (env->cp15.hcr_el2 & HCR_IMO);
+        scr = false;
+        pstate_unmasked = !(env->daif & PSTATE_I);
+        break;
+
     case EXCP_VFIQ:
         if (!secure && !(env->cp15.hcr_el2 & HCR_FMO)) {
             /* VFIQs are only taken when hypervized and non-secure.  */
@@ -1291,10 +1303,25 @@  static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx)
             /* VIRQs are only taken when hypervized and non-secure.  */
             return false;
         }
-        return irq_unmasked;
+        return !(env->daif & PSTATE_I) &&
+               (!IS_M(env) || env->regs[15] < 0xfffffff0);
     default:
         g_assert_not_reached();
     }
+
+    /* Use the target EL, current execution state and SCR/HCR settings to
+     * determine whether the corresponding CPSR bit is used to mask the
+     * interrupt.
+     */
+    if ((target_el > cur_el) && (target_el != 1) && (scr || hcr) &&
+        (is_aa64 || !secure)) {
+        unmasked = 1;
+    }
+
+    /* The PSTATE bits only mask the interrupt if we have not overridden the
+     * ability above.
+     */
+    return unmasked || pstate_unmasked;
 }
 
 static inline CPUARMState *cpu_init(const char *cpu_model)
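
For readers skimming the diff, this is the shape of the combined decision the
new code makes at the end of arm_excp_unmasked(), pulled out as a standalone
sketch; the function and parameter names here are invented for illustration
and do not appear in the patch.

#include <stdbool.h>

/* Sketch only: routing to a strictly higher, non-EL1 target EL via SCR/HCR
 * overrides the PSTATE mask, except in AArch32 secure state; otherwise the
 * relevant PSTATE.{F,I} bit decides. */
static bool excp_unmasked_sketch(unsigned int cur_el, unsigned int target_el,
                                 bool routed_by_scr_or_hcr, bool el3_is_aa64,
                                 bool secure, bool pstate_unmasked)
{
    bool override = (target_el > cur_el) && (target_el != 1) &&
                    routed_by_scr_or_hcr && (el3_is_aa64 || !secure);

    return override || pstate_unmasked;
}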