
[PULL,4/6] accel/tcg: Don't treat invalid TLB entries as needing recheck

Message ID 20180702160546.31969-5-richard.henderson@linaro.org
State New
Series [PULL,1/6] translate-all: fix locking of TBs whose two pages share the same physical page

Commit Message

Richard Henderson July 2, 2018, 4:05 p.m. UTC
From: Peter Maydell <peter.maydell@linaro.org>

In get_page_addr_code(), when we check whether the TLB entry
is marked as TLB_RECHECK, we should not go down that code
path if the TLB entry is not valid at all (i.e. the TLB_INVALID
bit is set).

Tested-by: Laurent Vivier <laurent@vivier.eu>
Reported-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20180629161731.16239-1-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/cputlb.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
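
As a reading aid, and not part of the patch itself, here is a minimal
stand-alone sketch of why the new condition must mask both flags. The
bit positions below are illustrative only, not QEMU's actual
definitions:

    #include <stdbool.h>
    #include <stdint.h>

    #define TLB_RECHECK      (1u << 0)   /* illustrative bit position */
    #define TLB_INVALID_MASK (1u << 1)   /* illustrative bit position */

    /* Old test: follows the recheck path whenever TLB_RECHECK is set,
     * even if the entry is invalid. */
    static bool old_check(uint32_t addr_code)
    {
        return (addr_code & TLB_RECHECK) != 0;
    }

    /* New test: requires TLB_RECHECK set AND TLB_INVALID_MASK clear,
     * so an invalid entry no longer takes the recheck path. */
    static bool new_check(uint32_t addr_code)
    {
        return (addr_code & (TLB_RECHECK | TLB_INVALID_MASK)) == TLB_RECHECK;
    }

    /* For an entry that is both invalid and marked for recheck:
     *   old_check(TLB_RECHECK | TLB_INVALID_MASK) -> true  (wrong)
     *   new_check(TLB_RECHECK | TLB_INVALID_MASK) -> false (right)
     */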

Comments

Peter Maydell July 13, 2018, 11:05 a.m. UTC | #1
On 2 July 2018 at 17:05, Richard Henderson <richard.henderson@linaro.org> wrote:
> From: Peter Maydell <peter.maydell@linaro.org>
>
> In get_page_addr_code(), when we check whether the TLB entry
> is marked as TLB_RECHECK, we should not go down that code
> path if the TLB entry is not valid at all (i.e. the TLB_INVALID
> bit is set).
>
> Tested-by: Laurent Vivier <laurent@vivier.eu>
> Reported-by: Laurent Vivier <laurent@vivier.eu>
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
> Message-Id: <20180629161731.16239-1-peter.maydell@linaro.org>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>  accel/tcg/cputlb.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index 3ae1198c24..cc90a5fe92 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -963,7 +963,8 @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
>          }
>      }
>
> -    if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK)) {
> +    if (unlikely((env->tlb_table[mmu_idx][index].addr_code &
> +                  (TLB_RECHECK | TLB_INVALID_MASK)) == TLB_RECHECK)) {
>          /*
>           * This is a TLB_RECHECK access, where the MMU protection
>           * covers a smaller range than a target page, and we must

Looking again at this code, I think that now that we have the code
to ensure that there's only ever one entry in the TLB/victim TLB for
a given guest address, this change is unnecessary. The sequence

    if (unlikely(!tlb_hit(env->tlb_table[mmu_idx][index].addr_code, addr))) {
        if (!VICTIM_TLB_HIT(addr_read, addr)) {
            tlb_fill(ENV_GET_CPU(env), addr, 0, MMU_INST_FETCH, mmu_idx, 0);
        }
    }

should result in us always either (a) taking a guest exception and
longjumping out of the tlb_fill(), or (b) ending up with the TLB
containing an entry valid for an insn fetch, i.e. addr_code does not
have TLB_INVALID_MASK set. So we could drop the check on TLB_INVALID_MASK
here and instead have:

    assert(tlb_hit(env->tlb_table[mmu_idx][index].addr_code, addr));

(I'm looking at this code and trying to clean up the mishandling of
execution from rom-device-not-in-romd-mode. Patches to follow later...)

thanks
-- PMM
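
Taken together, the simplification Peter sketches above would look
roughly like this inside get_page_addr_code(); this is a sketch of the
proposal as stated, not code that was merged:

    if (unlikely(!tlb_hit(env->tlb_table[mmu_idx][index].addr_code, addr))) {
        if (!VICTIM_TLB_HIT(addr_read, addr)) {
            tlb_fill(ENV_GET_CPU(env), addr, 0, MMU_INST_FETCH, mmu_idx, 0);
        }
    }
    /* tlb_fill() either longjmps out on a guest exception or leaves a
     * valid entry behind, so the hit can simply be asserted... */
    assert(tlb_hit(env->tlb_table[mmu_idx][index].addr_code, addr));

    /* ...and the TLB_INVALID_MASK test is no longer needed here. */
    if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK)) {
        /* small-MMU-protection recheck path */
    }
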
Richard Henderson July 13, 2018, 12:36 p.m. UTC | #2
On 07/13/2018 06:05 AM, Peter Maydell wrote:
>> -    if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK)) {
>> +    if (unlikely((env->tlb_table[mmu_idx][index].addr_code &
>> +                  (TLB_RECHECK | TLB_INVALID_MASK)) == TLB_RECHECK)) {
>>          /*
>>           * This is a TLB_RECHECK access, where the MMU protection
>>           * covers a smaller range than a target page, and we must
> 
> Looking again at this code, I think that now that we have the code
> to ensure that there's only ever one entry in the TLB/victim TLB for
> a given guest address...

Which probably wasn't the case the first time you wrote this, no?
The fix for that single-entry condition went in only a few weeks ago.


> The sequence
> 
>     if (unlikely(!tlb_hit(env->tlb_table[mmu_idx][index].addr_code, addr))) {
>         if (!VICTIM_TLB_HIT(addr_read, addr)) {
>             tlb_fill(ENV_GET_CPU(env), addr, 0, MMU_INST_FETCH, mmu_idx, 0);
>         }
>     }
> 
> should result in us always either (a) taking a guest exception and
> longjumping out of the tlb_fill(), or (b) ending up with the TLB
> containing an entry valid for an insn fetch, i.e. addr_code does not
> have TLB_INVALID_MASK set. So we could drop the check on TLB_INVALID_MASK
> here and instead have:
> 
>     assert(tlb_hit(env->tlb_table[mmu_idx][index].addr_code, addr));

Tuck that assert just after the tlb_fill, if you like.
I think it's unnecessary; we don't do that in any of the
other places we use tlb_fill.


r~
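
Richard's placement would instead tuck the assertion onto the miss
path, immediately after the fill; again a rough sketch of the
suggestion, not merged code:

    if (unlikely(!tlb_hit(env->tlb_table[mmu_idx][index].addr_code, addr))) {
        if (!VICTIM_TLB_HIT(addr_read, addr)) {
            tlb_fill(ENV_GET_CPU(env), addr, 0, MMU_INST_FETCH, mmu_idx, 0);
        }
        /* Only evaluated when the fast path missed and we refilled,
         * either from the victim TLB or via tlb_fill(). */
        assert(tlb_hit(env->tlb_table[mmu_idx][index].addr_code, addr));
    }
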
Peter Maydell July 13, 2018, 12:50 p.m. UTC | #3
On 13 July 2018 at 13:36, Richard Henderson
<richard.henderson@linaro.org> wrote:
> On 07/13/2018 06:05 AM, Peter Maydell wrote:
>>> -    if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK)) {
>>> +    if (unlikely((env->tlb_table[mmu_idx][index].addr_code &
>>> +                  (TLB_RECHECK | TLB_INVALID_MASK)) == TLB_RECHECK)) {
>>>          /*
>>>           * This is a TLB_RECHECK access, where the MMU protection
>>>           * covers a smaller range than a target page, and we must
>>
>> Looking again at this code, I think that now that we have the code
>> to ensure that there's only ever one entry in the TLB/victim TLB for
>> a given guest address...
>
> Which probably wasn't the case the first time you wrote this, no?
> The fix for that single-entry condition went in only a few weeks ago.

Yes, exactly.

OTOH with Laurent's m68k test case I'm finding that the assert
is firing, and I'm not entirely sure why yet...

thanks
-- PMM

Patch

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 3ae1198c24..cc90a5fe92 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -963,7 +963,8 @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
         }
     }
 
-    if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK)) {
+    if (unlikely((env->tlb_table[mmu_idx][index].addr_code &
+                  (TLB_RECHECK | TLB_INVALID_MASK)) == TLB_RECHECK)) {
         /*
          * This is a TLB_RECHECK access, where the MMU protection
          * covers a smaller range than a target page, and we must