
[v8,01/27] target-arm: extend async excp masking

Message ID 1414704538-17103-2-git-send-email-greg.bellows@linaro.org
State New

Commit Message

Greg Bellows Oct. 30, 2014, 9:28 p.m. UTC
This patch extends arm_excp_unmasked() to use lookup tables for determining
whether IRQ and FIQ exceptions are masked.  The lookup tables are based on the
ARMv8 and ARMv7 specification physical interrupt masking tables.

If EL3 is using AArch64, IRQ/FIQ masking is ignored in all exception levels
other than EL3 when SCR.{FIQ|IRQ} is set to 1 (routed to EL3).

Signed-off-by: Greg Bellows <greg.bellows@linaro.org>

---

v7 -> v8
- Add IRQ and FIQ exception masking lookup tables.
- Rewrite patch to use lookup tables for determining whether an exception is
  masked.

v5 -> v6
- Globally change Aarch# to AArch#
- Fixed comment termination

v4 -> v5
- Merge with v4 patch 10
---
 target-arm/cpu.h | 218 ++++++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 190 insertions(+), 28 deletions(-)

Comments

Peter Maydell Oct. 31, 2014, 7 p.m. UTC | #1
On 30 October 2014 21:28, Greg Bellows <greg.bellows@linaro.org> wrote:
> This patch extends arm_excp_unmasked() to use lookup tables for determining
> whether IRQ and FIQ exceptions are masked.  The lookup tables are based on the
> ARMv8 and ARMv7 specification physical interrupt masking tables.
>
> If EL3 is using AArch64, IRQ/FIQ masking is ignored in all exception levels
> other than EL3 when SCR.{FIQ|IRQ} is set to 1 (routed to EL3).
>
> Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
>
> ---
>
> v7 -> v8
> - Add IRQ and FIQ exception masking lookup tables.
> - Rewrite patch to use lookup tables for determining whether an exception is
>   masked.
>
> v5 -> v6
> - Globally change Aarch# to AArch#
> - Fixed comment termination
>
> v4 -> v5
> - Merge with v4 patch 10
> ---
>  target-arm/cpu.h | 218 ++++++++++++++++++++++++++++++++++++++++++++++++-------
>  1 file changed, 190 insertions(+), 28 deletions(-)

Having got through the rest of the series I'm coming back
to this one, which I skipped because it needed checking
against the ARM ARM.

I definitely don't like the use of tables here -- it is in almost
all of the cases simply repeating the routing calculations.
All you actually need here is:

    if (target_el < current_el) {
        /* Don't take exception */
        return false;
    }
    if (target_el > current_el && target_el != 1) {
        if (target_el == 3 && !arm_el_is_aa64(env, 3)) {
            /* In a 32 bit EL3 there are some awkward special cases where we
             * must honour the PSTATE/CPSR mask bits even when taking the
             * exception to EL3.
             */
            if (arm_is_secure(env)) {
                goto honour_masking;
            }
            /* (We know at this point that SCR.FIQ/IRQ must be set.) */
            if (excp_idx == EXCP_FIQ && HCR.FMO == 0 && SCR.FW == 1) {
                goto honour_masking;
            }
            /* If we supported SError then the async external abort routing
             * would have a similar case for SCR.AW here. There is no SCR.IW
             * for IRQs.
             */
            if (excp_idx == EXCP_IRQ && HCR.IMO == 0) {
                goto honour_masking;
            }
        }
        /* Take the exception unconditionally. */
        return true;
    }
honour_masking:
    /* If we get here then we check the PSTATE flags. */

(I think the 'gotos' here are clearer than turning the if statement inside out
to avoid them...)
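
For concreteness, a rough C expansion of that sketch using the register fields
this patch already touches (env->cp15.scr_el3 / hcr_el2 and the SCR_FW,
HCR_FMO, HCR_IMO bit masks) could look like the following. This is only an
illustration of the suggested structure, with a hypothetical helper name; it is
not the code that actually went in:

/* Illustrative sketch only: expands the pseudocode above using QEMU's
 * existing SCR_*/HCR_* bit definitions; handles FIQ/IRQ only to keep it short.
 */
static bool arm_excp_unmasked_sketch(CPUState *cs, unsigned int excp_idx)
{
    CPUARMState *env = cs->env_ptr;
    unsigned int cur_el = arm_current_el(env);
    unsigned int target_el = arm_excp_target_el(cs, excp_idx);

    if (target_el < cur_el) {
        /* Don't take the exception at all. */
        return false;
    }
    if (target_el > cur_el && target_el != 1) {
        if (target_el == 3 && !arm_el_is_aa64(env, 3)) {
            /* 32-bit EL3: honour CPSR masking in the awkward special cases. */
            if (arm_is_secure(env)) {
                goto honour_masking;
            }
            if (excp_idx == EXCP_FIQ &&
                !(env->cp15.hcr_el2 & HCR_FMO) &&
                (env->cp15.scr_el3 & SCR_FW)) {
                goto honour_masking;
            }
            if (excp_idx == EXCP_IRQ &&
                !(env->cp15.hcr_el2 & HCR_IMO)) {
                goto honour_masking;
            }
        }
        /* Take the exception unconditionally. */
        return true;
    }
honour_masking:
    /* Fall back to the normal PSTATE.F/PSTATE.I checks. */
    return !(env->daif & (excp_idx == EXCP_FIQ ? PSTATE_F : PSTATE_I));
}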

> -    unsigned int target_el = arm_excp_target_el(cs, excp_idx);
> -    /* FIXME: Use actual secure state.  */
> -    bool secure = false;
> -    /* If in EL1/0, Physical IRQ routing to EL2 only happens from NS state.  */
> -    bool irq_can_hyp = !secure && cur_el < 2 && target_el == 2;
> -    /* ARMv7-M interrupt return works by loading a magic value
> -     * into the PC.  On real hardware the load causes the
> -     * return to occur.  The qemu implementation performs the
> -     * jump normally, then does the exception return when the
> -     * CPU tries to execute code at the magic address.
> -     * This will cause the magic PC value to be pushed to
> -     * the stack if an interrupt occurred at the wrong time.
> -     * We avoid this by disabling interrupts when
> -     * pc contains a magic address.

You'll probably find it easier to base on top of the patches
I sent out that split out the M profile code from this
function (I'm planning to put those into target-arm.next soon
so they get in before hardfreeze).

thanks
-- PMM
Greg Bellows Nov. 5, 2014, 9:12 p.m. UTC | #2
Actually, it is possible to make this simpler and avoid the gotos altogether;
with the changes I have made, the following conditional is enough:

+    /* Use the target EL, current execution state and SCR/HCR settings to
+     * determine whether the corresponding CPSR bit is used to mask the
+     * interrupt.
+     */
+    if ((tar_el > cur_el) && (tar_el != 1) && (scr || hcr)) {
+        if (arm_el_is_aa64(env, 3) || !secure) {
+            unmasked = 1;
+        }
+    }
+
+    /* The PSTATE bits only mask the interrupt if we have not overridden
+     * that masking above.
+     */
+    return unmasked || pstate_unmasked;

scr and hcr are set depending on the exception type, so both EXCP_IRQ and
EXCP_FIQ fall through to this check.  The conditional weeds out all the
cases where we can ignore/override the CPSR masking bits.

Removed table and reworked function in v9.
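
For reference, the per-exception setup inside arm_excp_unmasked() that feeds
that conditional (mirroring the v8 patch below and the snippet above -- an
illustrative sketch, not the exact v9 code) is roughly:

    /* Sketch only: shows how scr/hcr/pstate_unmasked are derived per
     * exception type before applying the conditional above.
     */
    unsigned int cur_el = arm_current_el(env);
    unsigned int tar_el = arm_excp_target_el(cs, excp_idx);
    bool secure = arm_is_secure(env);
    bool scr, hcr, pstate_unmasked;
    bool unmasked = false;

    switch (excp_idx) {
    case EXCP_FIQ:
        /* Routing controls for physical FIQ: SCR.FIQ and HCR.FMO. */
        scr = !!(env->cp15.scr_el3 & SCR_FIQ);
        hcr = !!(env->cp15.hcr_el2 & HCR_FMO);
        pstate_unmasked = !(env->daif & PSTATE_F);
        break;
    case EXCP_IRQ:
        /* Routing controls for physical IRQ: SCR.IRQ and HCR.IMO. */
        scr = !!(env->cp15.scr_el3 & SCR_IRQ);
        hcr = !!(env->cp15.hcr_el2 & HCR_IMO);
        pstate_unmasked = !(env->daif & PSTATE_I);
        break;
    default:
        g_assert_not_reached();
    }

    /* Ignore the PSTATE/CPSR mask bits when the exception targets a higher,
     * non-EL1 level and is routed there, unless a 32-bit secure EL3 requires
     * the masking to be honoured.
     */
    if ((tar_el > cur_el) && (tar_el != 1) && (scr || hcr)) {
        if (arm_el_is_aa64(env, 3) || !secure) {
            unmasked = true;
        }
    }

    return unmasked || pstate_unmasked;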


Patch

diff --git a/target-arm/cpu.h b/target-arm/cpu.h
index cb6ec5c..be5d022 100644
--- a/target-arm/cpu.h
+++ b/target-arm/cpu.h
@@ -1242,44 +1242,200 @@  bool write_cpustate_to_list(ARMCPU *cpu);
 #  define TARGET_VIRT_ADDR_SPACE_BITS 32
 #endif
 
+/* Physical FIQ exception mask lookup table
+ *
+ * [ From ARM ARMv7 B1.8.6 Async exception masking (table B1-12) ]
+ * [ From ARM ARMv8 G1.11.3 Async exception masking (table G1-18) ]
+ *
+ * The multi-dimensional table below is used to look up the masking
+ * behavior for the given state conditions.  The table values determine
+ * whether interrupt masking is controlled by the PSTATE.AIF/CPSR.AIF
+ * bits.
+ *
+ *    Dimensions:
+ *    fiq_excp_mask_table[2][2][2][2][2][2][4]
+ *                        |  |  |  |  |  |  +--- Current EL
+ *                        |  |  |  |  |  +------ Non-secure(0)/Secure(1)
+ *                        |  |  |  |  +--------- HCR mask override
+ *                        |  |  |  +------------ SCR exec state control
+ *                        |  |  +--------------- SCR non-secure masking
+ *                        |  +------------------ SCR mask override
+ *                        +--------------------- 32-bit(0)/64-bit(1) EL3
+ *
+ *    The table values are as such:
+ *      0 = Exception is masked depending on PSTATE
+ *      1 = Exception is taken (unmasked) regardless of PSTATE
+ *     -1 = Cannot occur
+ *     -2 = Exception not taken, left pending
+ *
+ * Notes:
+ *    - RW is a don't-care when EL3 is AArch32
+ *    - AW/FW are don't-cares when EL3 is AArch32
+ *    - Exceptions left pending (-2) are informational only and should never
+ *      be seen; callers first check the current EL against the target EL.
+ *
+ *             SCR         HCR
+ *          64  EA         AMO                 From
+ *         BIT IRQ  AW     IMO      Non-secure         Secure
+ *         EL3 FIQ  FW  RW FMO   EL0 EL1 EL2 EL3   EL0 EL1 EL2 EL3
+ */
+static const int8_t fiq_excp_mask_table[2][2][2][2][2][2][4] = {
+    {{{{{/* 0   0   0   0   0 */{ 0,  0,  0, -1 },{ 0, -1, -1,  0 },},
+        {/* 0   0   0   0   1 */{ 1,  1,  0, -1 },{ 0, -1, -1,  0 },},},
+       {{/* 0   0   0   1   0 */{ 0,  0,  0, -1 },{ 0, -1, -1,  0 },},
+        {/* 0   0   0   1   1 */{ 1,  1,  0, -1 },{ 0, -1, -1,  0 },},},},
+      {{{/* 0   0   1   0   0 */{ 0,  0,  0, -1 },{ 0, -1, -1,  0 },},
+        {/* 0   0   1   0   1 */{ 1,  1,  0, -1 },{ 0, -1, -1,  0 },},},
+       {{/* 0   0   1   1   0 */{ 0,  0,  0, -1 },{ 0, -1, -1,  0 },},
+        {/* 0   0   1   1   1 */{ 1,  1,  0, -1 },{ 0, -1, -1,  0 },},},},},
+     {{{{/* 0   1   0   0   0 */{ 1,  1,  1, -1 },{ 0, -1, -1,  0 },},
+        {/* 0   1   0   0   1 */{ 1,  1,  1, -1 },{ 0, -1, -1,  0 },},},
+       {{/* 0   1   0   1   0 */{ 1,  1,  1, -1 },{ 0, -1, -1,  0 },},
+        {/* 0   1   0   1   1 */{ 1,  1,  1, -1 },{ 0, -1, -1,  0 },},},},
+      {{{/* 0   1   1   0   0 */{ 0,  0,  0, -1 },{ 0, -1, -1,  0 },},
+        {/* 0   1   1   0   1 */{ 1,  1,  1, -1 },{ 0, -1, -1,  0 },},},
+       {{/* 0   1   1   1   0 */{ 0,  0,  0, -1 },{ 0, -1, -1,  0 },},
+        {/* 0   1   1   1   1 */{ 1,  1,  1, -1 },{ 0, -1, -1,  0 },},},},},},
+    {{{{{/* 1   0   0   0   0 */{ 0,  0,  0, -1 },{ 0,  0, -1, -2 },},
+        {/* 1   0   0   0   1 */{ 1,  1,  0, -1 },{ 0,  0, -1, -2 },},},
+       {{/* 1   0   0   1   0 */{ 0,  0, -2, -1 },{ 0,  0, -1, -2 },},
+        {/* 1   0   0   1   1 */{ 1,  1,  0, -1 },{ 0,  0, -1, -2 },},},},
+      {{{/* 1   0   1   0   0 */{ 1,  1,  1, -1 },{ 0,  0, -1, -2 },},
+        {/* 1   0   1   0   1 */{ 1,  1,  0, -1 },{ 0,  0, -1, -2 },},},
+       {{/* 1   0   1   1   0 */{ 0,  0, -2, -1 },{ 0,  0, -1, -2 },},
+        {/* 1   0   1   1   1 */{ 1,  1,  0, -1 },{ 0,  0, -1, -2 },},},},},
+     {{{{/* 1   1   0   0   0 */{ 1,  1,  1, -1 },{ 1,  1, -1,  0 },},
+        {/* 1   1   0   0   1 */{ 1,  1,  1, -1 },{ 1,  1, -1,  0 },},},
+       {{/* 1   1   0   1   0 */{ 1,  1,  1, -1 },{ 1,  1, -1,  0 },},
+        {/* 1   1   0   1   1 */{ 1,  1,  1, -1 },{ 1,  1, -1,  0 },},},},
+      {{{/* 1   1   1   0   0 */{ 1,  1,  1, -1 },{ 1,  1, -1,  0 },},
+        {/* 1   1   1   0   1 */{ 1,  1,  1, -1 },{ 1,  1, -1,  0 },},},
+       {{/* 1   1   1   1   0 */{ 1,  1,  1, -1 },{ 1,  1, -1,  0 },},
+        {/* 1   1   1   1   1 */{ 1,  1,  1, -1 },{ 1,  1, -1,  0 },},},},},},
+};
+
+/* Physical IRQ exception mask lookup table
+ *
+ * [ From ARM ARMv7 B1.8.6 Async exception masking (table B1-13) ]
+ * [ From ARM ARMv8 G1.11.3 Async exception masking (table G1-19) ]
+ *
+ * The multi-dimensional table below is used to look up the masking
+ * behavior for the given state conditions.  The table values determine
+ * whether interrupt masking is controlled by the PSTATE.AIF/CPSR.AIF
+ * bits.
+ *
+ *    Dimensions:
+ *    irq_excp_mask_table[2][2][2][2][2][4]
+ *                        |  |  |  |  |  +--- Current EL
+ *                        |  |  |  |  +------ Non-secure(0)/Secure(1)
+ *                        |  |  |  +--------- HCR mask override
+ *                        |  |  +------------ SCR exec state control
+ *                        |  +--------------- SCR mask override
+ *                        +------------------ 32-bit(0)/64-bit(1) EL3
+ *
+ *    The table values are as such:
+ *      0 = Exception is masked depending on PSTATE
+ *      1 = Exception is taken (unmasked) regardless of PSTATE
+ *     -1 = Cannot occur
+ *     -2 = Exception not taken, left pending
+ *
+ * Notes:
+ *    - RW is a don't-care when EL3 is AArch32
+ *    - Exceptions left pending (-2) are informational only and should never
+ *      be seen; callers first check the current EL against the target EL.
+ *
+ *             SCR     HCR
+ *          64  EA     AMO                 From
+ *         BIT IRQ     IMO      Non-secure         Secure
+ *         EL3 FIQ  RW FMO   EL0 EL1 EL2 EL3   EL0 EL1 EL2 EL3
+ */
+static const int8_t irq_excp_mask_table[2][2][2][2][2][4] = {
+     {{{{/* 0   0   0   0 */{ 0,  0,  0, -1 },{ 0, -1, -1,  0 },},
+        {/* 0   0   0   1 */{ 1,  1,  0, -1 },{ 0, -1, -1,  0 },},},
+       {{/* 0   0   1   0 */{ 0,  0,  0, -1 },{ 0, -1, -1,  0 },},
+        {/* 0   0   1   1 */{ 1,  1,  0, -1 },{ 0, -1, -1,  0 },},},},
+      {{{/* 0   1   0   0 */{ 0,  0,  0, -1 },{ 0, -1, -1,  0 },},
+        {/* 0   1   0   1 */{ 1,  1,  1, -1 },{ 0, -1, -1,  0 },},},
+       {{/* 0   1   1   0 */{ 0,  0,  0, -1 },{ 0, -1, -1,  0 },},
+        {/* 0   1   1   1 */{ 1,  1,  1, -1 },{ 0, -1, -1,  0 },},},},},
+     {{{{/* 1   0   0   0 */{ 0,  0,  0, -1 },{ 0,  0, -1, -2 },},
+        {/* 1   0   0   1 */{ 1,  1,  0, -1 },{ 0,  0, -1, -2 },},},
+       {{/* 1   0   1   0 */{ 0,  0, -2, -1 },{ 0,  0, -1, -2 },},
+        {/* 1   0   1   1 */{ 1,  1,  0, -1 },{ 0,  0, -1, -2 },},},},
+      {{{/* 1   1   0   0 */{ 1,  1,  1, -1 },{ 1,  1, -1,  0 },},
+        {/* 1   1   0   1 */{ 1,  1,  1, -1 },{ 1,  1, -1,  0 },},},
+       {{/* 1   1   1   0 */{ 1,  1,  1, -1 },{ 1,  1, -1,  0 },},
+        {/* 1   1   1   1 */{ 1,  1,  1, -1 },{ 1,  1, -1,  0 },},},},},
+};
+
 static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx)
 {
     CPUARMState *env = cs->env_ptr;
     unsigned int cur_el = arm_current_el(env);
-    unsigned int target_el = arm_excp_target_el(cs, excp_idx);
-    /* FIXME: Use actual secure state.  */
-    bool secure = false;
-    /* If in EL1/0, Physical IRQ routing to EL2 only happens from NS state.  */
-    bool irq_can_hyp = !secure && cur_el < 2 && target_el == 2;
-    /* ARMv7-M interrupt return works by loading a magic value
-     * into the PC.  On real hardware the load causes the
-     * return to occur.  The qemu implementation performs the
-     * jump normally, then does the exception return when the
-     * CPU tries to execute code at the magic address.
-     * This will cause the magic PC value to be pushed to
-     * the stack if an interrupt occurred at the wrong time.
-     * We avoid this by disabling interrupts when
-     * pc contains a magic address.
+    bool secure = arm_is_secure(env);
+    uint32_t rw = ((env->cp15.scr_el3 & SCR_RW) == SCR_RW);
+    uint32_t is64 = arm_el_is_aa64(env, 3);
+    uint32_t fw;
+    uint32_t scr;
+    uint32_t hcr;
+    bool pstate_unmasked;
+    int8_t unmasked = 0;
+
+    /* Don't take exceptions if they target a lower EL.
+     * This check should catch any exceptions that would not be taken but left
+     * pending.
      */
-    bool irq_unmasked = !(env->daif & PSTATE_I)
-                        && (!IS_M(env) || env->regs[15] < 0xfffffff0);
-
-    /* Don't take exceptions if they target a lower EL.  */
-    if (cur_el > target_el) {
+    if (cur_el > arm_excp_target_el(cs, excp_idx)) {
         return false;
     }
 
     switch (excp_idx) {
     case EXCP_FIQ:
-        if (irq_can_hyp && (env->cp15.hcr_el2 & HCR_FMO)) {
-            return true;
-        }
-        return !(env->daif & PSTATE_F);
+        scr = ((env->cp15.scr_el3 & SCR_FIQ) == SCR_FIQ);
+        hcr = ((env->cp15.hcr_el2 & HCR_FMO) == HCR_FMO);
+
+        /* The SCR.FW bit only affects masking when the Virtualization
+         * Extensions (EL2) are present.  The mask table assumes they are
+         * present, so when they are not we must treat FW as 1 to
+         * remain neutral.
+         */
+        fw = (!arm_feature(env, ARM_FEATURE_EL2) |
+              ((env->cp15.scr_el3 & SCR_FW) == SCR_FW));
+
+        /* FIQs are unmasked if PSTATE.F is clear.  */
+        pstate_unmasked = !(env->daif & PSTATE_F);
+
+        /* Look up whether the current state lets the exception bypass PSTATE
+         * masking.  A table value of 1 means the exception is taken regardless
+         * of PSTATE; 0 means PSTATE.F decides whether it is masked.
+         */
+        unmasked = fiq_excp_mask_table[is64][scr][fw][rw][hcr][secure][cur_el];
+        break;
+
     case EXCP_IRQ:
-        if (irq_can_hyp && (env->cp15.hcr_el2 & HCR_IMO)) {
-            return true;
-        }
-        return irq_unmasked;
+        scr = ((env->cp15.scr_el3 & SCR_IRQ) == SCR_IRQ);
+        hcr = ((env->cp15.hcr_el2 & HCR_IMO) == HCR_IMO);
+
+        /* ARMv7-M interrupt return works by loading a magic value
+         * into the PC.  On real hardware the load causes the
+         * return to occur.  The qemu implementation performs the
+         * jump normally, then does the exception return when the
+         * CPU tries to execute code at the magic address.
+         * This will cause the magic PC value to be pushed to
+         * the stack if an interrupt occurred at the wrong time.
+         * We avoid this by disabling interrupts when
+         * pc contains a magic address.
+         */
+        pstate_unmasked = !(env->daif & PSTATE_I)
+                          && (!IS_M(env) || env->regs[15] < 0xfffffff0);
+
+        /* Look up whether the current state lets the exception bypass PSTATE
+         * masking.  A table value of 1 means the exception is taken regardless
+         * of PSTATE; 0 means PSTATE.I decides whether it is masked.
+         */
+        unmasked = irq_excp_mask_table[is64][scr][rw][hcr][secure][cur_el];
+        break;
+
     case EXCP_VFIQ:
         if (!secure && !(env->cp15.hcr_el2 & HCR_FMO)) {
             /* VFIQs are only taken when hypervized and non-secure.  */
@@ -1291,10 +1447,16 @@  static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx)
             /* VIRQs are only taken when hypervized and non-secure.  */
             return false;
         }
-        return irq_unmasked;
+        return !(env->daif & PSTATE_I)
+               && (!IS_M(env) || env->regs[15] < 0xfffffff0);
     default:
         g_assert_not_reached();
     }
+
+    /* A negative table value here means something went wrong above.  */
+    assert(unmasked >= 0);
+
+    return unmasked || pstate_unmasked;
 }
 
 static inline CPUARMState *cpu_init(const char *cpu_model)