
[1/2] target/ppc: Fix slbia TLB invalidation gap

Message ID 20200318044135.851716-1-npiggin@gmail.com
State New
Series [1/2] target/ppc: Fix slbia TLB invalidation gap

Commit Message

Nicholas Piggin March 18, 2020, 4:41 a.m. UTC
slbia must invalidate TLBs even if it does not remove a valid SLB
entry, because slbmte can overwrite valid entries without removing
their TLBs.

As the architecture says, slbia invalidates all lookaside information
unconditionally, not just when it removes valid entries.

It does not seem possible for POWER8 or earlier Linux kernels to hit
this bug, because they never change their kernel SLB translations and
should always have valid entries whenever userspace regions are
accessed. However, other operating systems that modify SLB entry 0
or do fancier things with segments might be affected.

When POWER9 slbia support is added in the next patch, this becomes a
real problem because some new slbia variants don't invalidate all
non-zero entries.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 target/ppc/mmu-hash64.c | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)
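
To make the gap concrete: slbmte can overwrite a valid entry (entry 0, say,
which the slbia loop never touches) without flushing the lookaside
information derived from the old contents. If a subsequent slbia then finds
no valid entries at indices 1 and up, the pre-patch code never sets
TLB_NEED_LOCAL_FLUSH and the stale translations survive. Below is a minimal
standalone sketch of that sequence; slb_entry_t, tlb_is_stale and these
helpers are simplified stand-ins for illustration, not the QEMU data
structures:

#include <assert.h>
#include <stdbool.h>

#define SLB_SIZE 32

/* Hypothetical model: one translation per SLB entry, plus a single flag
 * standing in for all TLB/ERAT state derived from the SLB. */
typedef struct {
    bool valid;                     /* models SLB_ESID_V */
    int vsid;                       /* models the cached translation */
} slb_entry_t;

static slb_entry_t slb[SLB_SIZE];
static bool tlb_is_stale;           /* lookaside state no longer matches SLB */

/* slbmte may overwrite an entry in place; the architecture does not make
 * it invalidate lookaside information derived from the old contents. */
static void slbmte(int n, int vsid)
{
    if (slb[n].valid) {
        tlb_is_stale = true;        /* old translation may still be cached */
    }
    slb[n].valid = true;
    slb[n].vsid = vsid;
}

/* Pre-patch behaviour: the flush is tied to finding valid entries. */
static void slbia_buggy(void)
{
    for (int n = 1; n < SLB_SIZE; n++) {   /* entry 0 is never touched */
        if (slb[n].valid) {
            slb[n].valid = false;
            tlb_is_stale = false;          /* models TLB_NEED_LOCAL_FLUSH */
        }
    }
}

int main(void)
{
    slbmte(0, 42);                  /* kernel segment lives in entry 0 */
    slbmte(0, 99);                  /* overwritten in place: TLB now stale */
    slbia_buggy();                  /* no valid entry at 1..31: no flush */
    assert(!tlb_is_stale);          /* fires, demonstrating the bug */
    return 0;
}

With the patch applied, the flush is hoisted out of the valid-entry check
and performed unconditionally, so this sequence is covered.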

Comments

Cédric Le Goater March 18, 2020, 4:45 p.m. UTC | #1
On 3/18/20 5:41 AM, Nicholas Piggin wrote:
> slbia must invalidate TLBs even if it does not remove a valid SLB
> entry, because slbmte can overwrite valid entries without removing
> their TLBs.
> 
> As the architecture says, slbia invalidates all lookaside information,
> not conditionally based on if it removed valid entries.
> 
> It does not seem possible for POWER8 or earlier Linux kernels to hit
> this bug because it never changes its kernel SLB translations, and it
> should always have valid entries if any accesses are made to usespace
> regions. However other operating systems which may modify SLB entry 0
> or do more fancy things with segments might be affected.

Did you hit the bug on another OS?
 
> When POWER9 slbia support is added in the next patch, this becomes a
> real problem because some new slbia variants don't invalidate all
> non-zero entries.
> 
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>

Looks correct.

Reviewed-by: Cédric Le Goater <clg@kaod.org>

Thanks,

C. 

> ---
>  target/ppc/mmu-hash64.c | 21 +++++++++++++++------
>  1 file changed, 15 insertions(+), 6 deletions(-)
> 
> diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> index 34f6009b1e..373d44de74 100644
> --- a/target/ppc/mmu-hash64.c
> +++ b/target/ppc/mmu-hash64.c
> @@ -100,20 +100,29 @@ void helper_slbia(CPUPPCState *env)
>      PowerPCCPU *cpu = env_archcpu(env);
>      int n;
> 
> +    /*
> +     * slbia must always flush all TLB (which is equivalent to ERAT in ppc
> +     * architecture). Matching on SLB_ESID_V is not good enough, because slbmte
> +     * can overwrite a valid SLB without flushing its lookaside information.
> +     *
> +     * It would be possible to keep the TLB in synch with the SLB by flushing
> +     * when a valid entry is overwritten by slbmte, and therefore slbia would
> +     * not have to flush unless it evicts a valid SLB entry. However it is
> +     * expected that slbmte is more common than slbia, and slbia is usually
> +     * going to evict valid SLB entries, so that tradeoff is unlikely to be a
> +     * good one.
> +     */
> +
>      /* XXX: Warning: slbia never invalidates the first segment */
>      for (n = 1; n < cpu->hash64_opts->slb_size; n++) {
>          ppc_slb_t *slb = &env->slb[n];
> 
>          if (slb->esid & SLB_ESID_V) {
>              slb->esid &= ~SLB_ESID_V;
> -            /*
> -             * XXX: given the fact that segment size is 256 MB or 1TB,
> -             *      and we still don't have a tlb_flush_mask(env, n, mask)
> -             *      in QEMU, we just invalidate all TLBs
> -             */
> -            env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
>          }
>      }
> +
> +    env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
>  }
> 
>  static void __helper_slbie(CPUPPCState *env, target_ulong addr,
>
Greg Kurz March 18, 2020, 4:52 p.m. UTC | #2
On Wed, 18 Mar 2020 14:41:34 +1000
Nicholas Piggin <npiggin@gmail.com> wrote:

> slbia must invalidate TLBs even if it does not remove a valid SLB
> entry, because slbmte can overwrite valid entries without removing
> their TLBs.
> 
> As the architecture says, slbia invalidates all lookaside information,
> not conditionally based on if it removed valid entries.
> 
> It does not seem possible for POWER8 or earlier Linux kernels to hit
> this bug because it never changes its kernel SLB translations, and it
> should always have valid entries if any accesses are made to usespace

s/usespace/userspace

> regions. However other operating systems which may modify SLB entry 0
> or do more fancy things with segments might be affected.
> 
> When POWER9 slbia support is added in the next patch, this becomes a
> real problem because some new slbia variants don't invalidate all
> non-zero entries.
> 
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---

LGTM

Reviewed-by: Greg Kurz <groug@kaod.org>

>  target/ppc/mmu-hash64.c | 21 +++++++++++++++------
>  1 file changed, 15 insertions(+), 6 deletions(-)
> 
> diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> index 34f6009b1e..373d44de74 100644
> --- a/target/ppc/mmu-hash64.c
> +++ b/target/ppc/mmu-hash64.c
> @@ -100,20 +100,29 @@ void helper_slbia(CPUPPCState *env)
>      PowerPCCPU *cpu = env_archcpu(env);
>      int n;
>  
> +    /*
> +     * slbia must always flush all TLB (which is equivalent to ERAT in ppc
> +     * architecture). Matching on SLB_ESID_V is not good enough, because slbmte
> +     * can overwrite a valid SLB without flushing its lookaside information.
> +     *
> +     * It would be possible to keep the TLB in synch with the SLB by flushing
> +     * when a valid entry is overwritten by slbmte, and therefore slbia would
> +     * not have to flush unless it evicts a valid SLB entry. However it is
> +     * expected that slbmte is more common than slbia, and slbia is usually
> +     * going to evict valid SLB entries, so that tradeoff is unlikely to be a
> +     * good one.
> +     */
> +
>      /* XXX: Warning: slbia never invalidates the first segment */
>      for (n = 1; n < cpu->hash64_opts->slb_size; n++) {
>          ppc_slb_t *slb = &env->slb[n];
>  
>          if (slb->esid & SLB_ESID_V) {
>              slb->esid &= ~SLB_ESID_V;
> -            /*
> -             * XXX: given the fact that segment size is 256 MB or 1TB,
> -             *      and we still don't have a tlb_flush_mask(env, n, mask)
> -             *      in QEMU, we just invalidate all TLBs
> -             */
> -            env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
>          }
>      }
> +
> +    env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
>  }
>  
>  static void __helper_slbie(CPUPPCState *env, target_ulong addr,
Nicholas Piggin March 19, 2020, 2:24 a.m. UTC | #3
Cédric Le Goater wrote on March 19, 2020 2:45 am:
> On 3/18/20 5:41 AM, Nicholas Piggin wrote:
>> slbia must invalidate TLBs even if it does not remove a valid SLB
>> entry, because slbmte can overwrite valid entries without removing
>> their TLBs.
>> 
>> As the architecture says, slbia invalidates all lookaside information,
>> not conditionally based on if it removed valid entries.
>> 
>> It does not seem possible for POWER8 or earlier Linux kernels to hit
>> this bug because it never changes its kernel SLB translations, and it
>> should always have valid entries if any accesses are made to usespace
>> regions. However other operating systems which may modify SLB entry 0
>> or do more fancy things with segments might be affected.
> 
> Did you hit the bug on the other OS ? 

No, I hit it when fixing POWER9 hash.

>  
>> When POWER9 slbia support is added in the next patch, this becomes a
>> real problem because some new slbia variants don't invalidate all
>> non-zero entries.
>> 
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> 
> Looks correct.
> 
> Reviewed-by: Cédric Le Goater <clg@kaod.org>

Thanks,
Nick
David Gibson March 19, 2020, 5:19 a.m. UTC | #4
On Wed, Mar 18, 2020 at 02:41:34PM +1000, Nicholas Piggin wrote:
> slbia must invalidate TLBs even if it does not remove a valid SLB
> entry, because slbmte can overwrite valid entries without removing
> their TLBs.
> 
> As the architecture says, slbia invalidates all lookaside information,
> not conditionally based on if it removed valid entries.
> 
> It does not seem possible for POWER8 or earlier Linux kernels to hit
> this bug because it never changes its kernel SLB translations, and it
> should always have valid entries if any accesses are made to usespace
> regions. However other operating systems which may modify SLB entry 0
> or do more fancy things with segments might be affected.
> 
> When POWER9 slbia support is added in the next patch, this becomes a
> real problem because some new slbia variants don't invalidate all
> non-zero entries.
> 
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>

Applied to ppc-for-5.0, thanks.

> ---
>  target/ppc/mmu-hash64.c | 21 +++++++++++++++------
>  1 file changed, 15 insertions(+), 6 deletions(-)
> 
> diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> index 34f6009b1e..373d44de74 100644
> --- a/target/ppc/mmu-hash64.c
> +++ b/target/ppc/mmu-hash64.c
> @@ -100,20 +100,29 @@ void helper_slbia(CPUPPCState *env)
>      PowerPCCPU *cpu = env_archcpu(env);
>      int n;
>  
> +    /*
> +     * slbia must always flush all TLB (which is equivalent to ERAT in ppc
> +     * architecture). Matching on SLB_ESID_V is not good enough, because slbmte
> +     * can overwrite a valid SLB without flushing its lookaside information.
> +     *
> +     * It would be possible to keep the TLB in synch with the SLB by flushing
> +     * when a valid entry is overwritten by slbmte, and therefore slbia would
> +     * not have to flush unless it evicts a valid SLB entry. However it is
> +     * expected that slbmte is more common than slbia, and slbia is usually
> +     * going to evict valid SLB entries, so that tradeoff is unlikely to be a
> +     * good one.
> +     */
> +
>      /* XXX: Warning: slbia never invalidates the first segment */
>      for (n = 1; n < cpu->hash64_opts->slb_size; n++) {
>          ppc_slb_t *slb = &env->slb[n];
>  
>          if (slb->esid & SLB_ESID_V) {
>              slb->esid &= ~SLB_ESID_V;
> -            /*
> -             * XXX: given the fact that segment size is 256 MB or 1TB,
> -             *      and we still don't have a tlb_flush_mask(env, n, mask)
> -             *      in QEMU, we just invalidate all TLBs
> -             */
> -            env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
>          }
>      }
> +
> +    env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
>  }
>  
>  static void __helper_slbie(CPUPPCState *env, target_ulong addr,
David Gibson March 19, 2020, 5:20 a.m. UTC | #5
On Wed, Mar 18, 2020 at 05:52:32PM +0100, Greg Kurz wrote:
> On Wed, 18 Mar 2020 14:41:34 +1000
> Nicholas Piggin <npiggin@gmail.com> wrote:
> 
> > slbia must invalidate TLBs even if it does not remove a valid SLB
> > entry, because slbmte can overwrite valid entries without removing
> > their TLBs.
> > 
> > As the architecture says, slbia invalidates all lookaside information,
> > not conditionally based on if it removed valid entries.
> > 
> > It does not seem possible for POWER8 or earlier Linux kernels to hit
> > this bug because it never changes its kernel SLB translations, and it
> > should always have valid entries if any accesses are made to usespace
> 
> s/usespace/userspace

Corrected in my tree, thanks.

> 
> > regions. However other operating systems which may modify SLB entry 0
> > or do more fancy things with segments might be affected.
> > 
> > When POWER9 slbia support is added in the next patch, this becomes a
> > real problem because some new slbia variants don't invalidate all
> > non-zero entries.
> > 
> > Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> > ---
> 
> LGTM
> 
> Reviewed-by: Greg Kurz <groug@kaod.org>
> 
> >  target/ppc/mmu-hash64.c | 21 +++++++++++++++------
> >  1 file changed, 15 insertions(+), 6 deletions(-)
> > 
> > diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> > index 34f6009b1e..373d44de74 100644
> > --- a/target/ppc/mmu-hash64.c
> > +++ b/target/ppc/mmu-hash64.c
> > @@ -100,20 +100,29 @@ void helper_slbia(CPUPPCState *env)
> >      PowerPCCPU *cpu = env_archcpu(env);
> >      int n;
> >  
> > +    /*
> > +     * slbia must always flush all TLB (which is equivalent to ERAT in ppc
> > +     * architecture). Matching on SLB_ESID_V is not good enough, because slbmte
> > +     * can overwrite a valid SLB without flushing its lookaside information.
> > +     *
> > +     * It would be possible to keep the TLB in synch with the SLB by flushing
> > +     * when a valid entry is overwritten by slbmte, and therefore slbia would
> > +     * not have to flush unless it evicts a valid SLB entry. However it is
> > +     * expected that slbmte is more common than slbia, and slbia is usually
> > +     * going to evict valid SLB entries, so that tradeoff is unlikely to be a
> > +     * good one.
> > +     */
> > +
> >      /* XXX: Warning: slbia never invalidates the first segment */
> >      for (n = 1; n < cpu->hash64_opts->slb_size; n++) {
> >          ppc_slb_t *slb = &env->slb[n];
> >  
> >          if (slb->esid & SLB_ESID_V) {
> >              slb->esid &= ~SLB_ESID_V;
> > -            /*
> > -             * XXX: given the fact that segment size is 256 MB or 1TB,
> > -             *      and we still don't have a tlb_flush_mask(env, n, mask)
> > -             *      in QEMU, we just invalidate all TLBs
> > -             */
> > -            env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
> >          }
> >      }
> > +
> > +    env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
> >  }
> >  
> >  static void __helper_slbie(CPUPPCState *env, target_ulong addr,
>

Patch

diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
index 34f6009b1e..373d44de74 100644
--- a/target/ppc/mmu-hash64.c
+++ b/target/ppc/mmu-hash64.c
@@ -100,20 +100,29 @@ void helper_slbia(CPUPPCState *env)
     PowerPCCPU *cpu = env_archcpu(env);
     int n;
 
+    /*
+     * slbia must always flush all TLBs (the TLB is the equivalent of the
+     * ERAT in the ppc architecture). Matching on SLB_ESID_V is not good
+     * enough, because slbmte can overwrite a valid SLB entry without
+     * flushing its lookaside information.
+     *
+     * It would be possible to keep the TLB in sync with the SLB by flushing
+     * when a valid entry is overwritten by slbmte, in which case slbia would
+     * only have to flush when it evicts a valid SLB entry. However, slbmte
+     * is expected to be more common than slbia, and slbia usually evicts
+     * valid SLB entries, so that tradeoff is unlikely to be a good one.
+     */
+
     /* XXX: Warning: slbia never invalidates the first segment */
     for (n = 1; n < cpu->hash64_opts->slb_size; n++) {
         ppc_slb_t *slb = &env->slb[n];
 
         if (slb->esid & SLB_ESID_V) {
             slb->esid &= ~SLB_ESID_V;
-            /*
-             * XXX: given the fact that segment size is 256 MB or 1TB,
-             *      and we still don't have a tlb_flush_mask(env, n, mask)
-             *      in QEMU, we just invalidate all TLBs
-             */
-            env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
         }
     }
+
+    env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
 }
 
 static void __helper_slbie(CPUPPCState *env, target_ulong addr,
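
The alternative rejected in the comment, keeping the TLB in sync from the
slbmte side instead, would look roughly like this against the same
simplified model as the sketch above (again hypothetical, not a proposed
QEMU change):

#include <stdbool.h>

typedef struct { bool valid; int vsid; } slb_entry_t;
static slb_entry_t slb[32];
static bool tlb_is_stale;

/* Flush whenever slbmte clobbers a valid entry, so the TLB always matches
 * the SLB and slbia's valid-entry check would then be sufficient. */
static void slbmte_synced(int n, int vsid)
{
    if (slb[n].valid) {
        tlb_is_stale = false;       /* flush before the old contents are lost */
    }
    slb[n].valid = true;
    slb[n].vsid = vsid;
}

Since slbmte is expected to run much more often than slbia, and slbia
usually evicts valid entries anyway, paying the flush in slbia, as the
patch does, is the cheaper side of the tradeoff.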