[01/12] powerpc/64s/hash: Fix stab_rr off by one initialization

Message ID 20180914153056.3644-2-npiggin@gmail.com
State Accepted
Commit 09b4438db13fa83b6219aee5993711a2aa2a0c64
Series
  • SLB miss conversion to C, and SLB optimisations

Checks

Context Check Description
snowpatch_ozlabs/checkpatch success Test checkpatch on branch next
snowpatch_ozlabs/apply_patch success next/apply_patch Successfully applied

Commit Message

Nicholas Piggin Sept. 14, 2018, 3:30 p.m.
This causes SLB allocation to start 1 beyond the start of the SLB.
There is no real problem because after it wraps it starts behaving
properly, it's just surprising to see when looking at SLB traces.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/mm/slb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Joel Stanley Sept. 17, 2018, 6:51 a.m. | #1
On Sat, 15 Sep 2018 at 01:03, Nicholas Piggin <npiggin@gmail.com> wrote:
>
> This causes SLB alloation to start 1 beyond the start of the SLB.
> There is no real problem because after it wraps it stats behaving

starts?

> properly, it's just surprisig to see when looking at SLB traces.

surprising

>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>

> ---
>  arch/powerpc/mm/slb.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
> index 9f574e59d178..2f162c6e52d4 100644
> --- a/arch/powerpc/mm/slb.c
> +++ b/arch/powerpc/mm/slb.c
> @@ -355,7 +355,7 @@ void slb_initialize(void)
>  #endif
>         }
>
> -       get_paca()->stab_rr = SLB_NUM_BOLTED;
> +       get_paca()->stab_rr = SLB_NUM_BOLTED - 1;
>
>         lflags = SLB_VSID_KERNEL | linear_llp;
>         vflags = SLB_VSID_KERNEL | vmalloc_llp;
> --
> 2.18.0
>
Nicholas Piggin Sept. 17, 2018, 7:35 a.m. | #2
On Mon, 17 Sep 2018 16:21:51 +0930
Joel Stanley <joel@jms.id.au> wrote:

> On Sat, 15 Sep 2018 at 01:03, Nicholas Piggin <npiggin@gmail.com> wrote:
> >
> > This causes SLB alloation to start 1 beyond the start of the SLB.

allocation

> > There is no real problem because after it wraps it stats behaving  
> 
> starts?
> 
> > properly, it's just surprisig to see when looking at SLB traces.  
> 
> surprising

My keyboard is dying :(
Michael Ellerman Sept. 20, 2018, 4:21 a.m. | #3
On Fri, 2018-09-14 at 15:30:45 UTC, Nicholas Piggin wrote:
> This causes SLB alloation to start 1 beyond the start of the SLB.
> There is no real problem because after it wraps it stats behaving
> properly, it's just surprisig to see when looking at SLB traces.
> 
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>

Series applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/09b4438db13fa83b6219aee5993711

cheers

Patch

diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index 9f574e59d178..2f162c6e52d4 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -355,7 +355,7 @@ void slb_initialize(void)
 #endif
 	}
 
-	get_paca()->stab_rr = SLB_NUM_BOLTED;
+	get_paca()->stab_rr = SLB_NUM_BOLTED - 1;
 
 	lflags = SLB_VSID_KERNEL | linear_llp;
 	vflags = SLB_VSID_KERNEL | vmalloc_llp;