
powerpc/mce: Fix SLB rebolting during MCE recovery path.

Message ID 153449765953.21426.6928471250286444535.stgit@jupiter.in.ibm.com (mailing list archive)
State Changes Requested
Headers show
Series powerpc/mce: Fix SLB rebolting during MCE recovery path.

Checks

Context Check Description
snowpatch_ozlabs/apply_patch success next/apply_patch Successfully applied
snowpatch_ozlabs/checkpatch fail Test checkpatch on branch next
snowpatch_ozlabs/build-ppc64le success Test build-ppc64le on branch next
snowpatch_ozlabs/build-ppc64be success Test build-ppc64be on branch next
snowpatch_ozlabs/build-ppc64e success Test build-ppc64e on branch next
snowpatch_ozlabs/build-ppc32 success Test build-ppc32 on branch next

Commit Message

Mahesh J Salgaonkar Aug. 17, 2018, 9:21 a.m. UTC
From: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>

With the powerpc next commit e7e81847478 ("powerpc/mce: Fix SLB rebolting
during MCE recovery path"), SLB error recovery is broken. That commit
missed a crucial step: OR-ing the index value into RB[52-63], which
selects the SLB entry to be written while rebolting. This patch fixes that.

Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/mm/slb.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
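
For context, a minimal sketch (not part of the patch) of the RB composition
the fix performs. The helper name below is hypothetical; the patch open-codes
the same operation inside the rebolting loop. The saved esid doubleword
already carries the ESID and valid bit in its upper bits, while bits 52-63 of
RB select which SLB entry slbmte writes:

	/*
	 * Hypothetical helper: build the RB operand for slbmte from the
	 * saved esid word and the loop index by clearing bits 52-63 and
	 * OR-ing in the entry index.
	 */
	static inline unsigned long mk_slbmte_rb(unsigned long saved_esid,
						 unsigned long index)
	{
		return (saved_esid & ~0xFFFul) | index;
	}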

Comments

Nicholas Piggin Aug. 21, 2018, 10:27 a.m. UTC | #1
On Fri, 17 Aug 2018 14:51:47 +0530
Mahesh J Salgaonkar <mahesh@linux.vnet.ibm.com> wrote:

> From: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
> 
> With the powerpc next commit e7e81847478 ("powerpc/mce: Fix SLB rebolting
> during MCE recovery path"), SLB error recovery is broken. That commit
> missed a crucial step: OR-ing the index value into RB[52-63], which
> selects the SLB entry to be written while rebolting. This patch fixes that.
> 
> Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
> Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>  arch/powerpc/mm/slb.c |    5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
> index 0b095fa54049..6dd9913425bc 100644
> --- a/arch/powerpc/mm/slb.c
> +++ b/arch/powerpc/mm/slb.c
> @@ -101,9 +101,12 @@ void __slb_restore_bolted_realmode(void)
>  
>  	 /* No isync needed because realmode. */
>  	for (index = 0; index < SLB_NUM_BOLTED; index++) {
> +		unsigned long rb = be64_to_cpu(p->save_area[index].esid);
> +
> +		rb = (rb & ~0xFFFul) | index;
>  		asm volatile("slbmte  %0,%1" :
>  		     : "r" (be64_to_cpu(p->save_area[index].vsid)),
> -		       "r" (be64_to_cpu(p->save_area[index].esid)));
> +		       "r" (rb));
>  	}
>  }
>  
> 

I'm just looking at this again. The bolted save areas do have the
index field set. So for the OS, your patch should be equivalent to
this, right?

 static inline void slb_shadow_clear(enum slb_index index)
 {
-       WRITE_ONCE(get_slb_shadow()->save_area[index].esid, 0);
+       WRITE_ONCE(get_slb_shadow()->save_area[index].esid, index);
 }

Which seems like a better fix.

PAPR says:

  Note: SLB is filled sequentially starting at index 0
  from the shadow buffer ignoring the contents of
  RB field bits 52-63

So that shouldn't be an issue.

Thanks,
Nick
Mahesh J Salgaonkar Aug. 23, 2018, 4:28 a.m. UTC | #2
On 08/21/2018 03:57 PM, Nicholas Piggin wrote:
> On Fri, 17 Aug 2018 14:51:47 +0530
> Mahesh J Salgaonkar <mahesh@linux.vnet.ibm.com> wrote:
> 
>> From: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
>>
>> With the powerpc next commit e7e81847478 ("powerpc/mce: Fix SLB rebolting
>> during MCE recovery path"), SLB error recovery is broken. That commit
>> missed a crucial step: OR-ing the index value into RB[52-63], which
>> selects the SLB entry to be written while rebolting. This patch fixes that.
>>
>> Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
>> Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>>  arch/powerpc/mm/slb.c |    5 ++++-
>>  1 file changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
>> index 0b095fa54049..6dd9913425bc 100644
>> --- a/arch/powerpc/mm/slb.c
>> +++ b/arch/powerpc/mm/slb.c
>> @@ -101,9 +101,12 @@ void __slb_restore_bolted_realmode(void)
>>  
>>  	 /* No isync needed because realmode. */
>>  	for (index = 0; index < SLB_NUM_BOLTED; index++) {
>> +		unsigned long rb = be64_to_cpu(p->save_area[index].esid);
>> +
>> +		rb = (rb & ~0xFFFul) | index;
>>  		asm volatile("slbmte  %0,%1" :
>>  		     : "r" (be64_to_cpu(p->save_area[index].vsid)),
>> -		       "r" (be64_to_cpu(p->save_area[index].esid)));
>> +		       "r" (rb));
>>  	}
>>  }
>>  
>>
> 
> I'm just looking at this again. The bolted save areas do have the
> index field set. So for the OS, your patch should be equivalent to
> this, right?
> 
>  static inline void slb_shadow_clear(enum slb_index index)
>  {
> -       WRITE_ONCE(get_slb_shadow()->save_area[index].esid, 0);
> +       WRITE_ONCE(get_slb_shadow()->save_area[index].esid, index);
>  }
> 
> Which seems like a better fix.

Yeah this also fixes the issue. The only additional change required is
cpu_to_be64(index). As long as we maintain index in bolted save areas
(for valid/invalid entries) we should be ok. Will respin v2 with this
change.

Thanks,
-Mahesh.
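
Based on the discussion above, a hedged sketch of what the respun v2 might
look like. It assumes the valid-entry update path already encodes the index
(as Nick observes) and that slb_shadow_clear() is the spot that writes the
esid word for invalid bolted entries; the cpu_to_be64() conversion is the
addition Mahesh mentions:

	static inline void slb_shadow_clear(enum slb_index index)
	{
		/*
		 * Keep the entry index in the shadow esid word so the
		 * realmode MCE rebolting path writes the correct SLB
		 * slot. Per the PAPR note quoted above, the hypervisor
		 * ignores RB bits 52-63 when it refills the SLB
		 * sequentially, so this is safe there. cpu_to_be64()
		 * because the shadow save area is big-endian.
		 */
		WRITE_ONCE(get_slb_shadow()->save_area[index].esid,
			   cpu_to_be64(index));
	}
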
Nicholas Piggin Aug. 23, 2018, 4:36 a.m. UTC | #3
On Thu, 23 Aug 2018 09:58:31 +0530
Mahesh Jagannath Salgaonkar <mahesh@linux.vnet.ibm.com> wrote:

> On 08/21/2018 03:57 PM, Nicholas Piggin wrote:
> > On Fri, 17 Aug 2018 14:51:47 +0530
> > Mahesh J Salgaonkar <mahesh@linux.vnet.ibm.com> wrote:
> >   
> >> From: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
> >>
> >> With the powerpc next commit e7e81847478 ("powerpc/mce: Fix SLB rebolting
> >> during MCE recovery path"), SLB error recovery is broken. That commit
> >> missed a crucial step: OR-ing the index value into RB[52-63], which
> >> selects the SLB entry to be written while rebolting. This patch fixes that.
> >>
> >> Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
> >> Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
> >> ---
> >>  arch/powerpc/mm/slb.c |    5 ++++-
> >>  1 file changed, 4 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
> >> index 0b095fa54049..6dd9913425bc 100644
> >> --- a/arch/powerpc/mm/slb.c
> >> +++ b/arch/powerpc/mm/slb.c
> >> @@ -101,9 +101,12 @@ void __slb_restore_bolted_realmode(void)
> >>  
> >>  	 /* No isync needed because realmode. */
> >>  	for (index = 0; index < SLB_NUM_BOLTED; index++) {
> >> +		unsigned long rb = be64_to_cpu(p->save_area[index].esid);
> >> +
> >> +		rb = (rb & ~0xFFFul) | index;
> >>  		asm volatile("slbmte  %0,%1" :
> >>  		     : "r" (be64_to_cpu(p->save_area[index].vsid)),
> >> -		       "r" (be64_to_cpu(p->save_area[index].esid)));
> >> +		       "r" (rb));
> >>  	}
> >>  }
> >>  
> >>  
> > 
> > I'm just looking at this again. The bolted save areas do have the
> > index field set. So for the OS, your patch should be equivalent to
> > this, right?
> > 
> >  static inline void slb_shadow_clear(enum slb_index index)
> >  {
> > -       WRITE_ONCE(get_slb_shadow()->save_area[index].esid, 0);
> > +       WRITE_ONCE(get_slb_shadow()->save_area[index].esid, index);
> >  }
> > 
> > Which seems like a better fix.  
> 
> Yeah this also fixes the issue. The only additional change required is
> cpu_to_be64(index).

Ah yep.

> As long as we maintain index in bolted save areas
> (for valid/invalid entries) we should be ok. Will respin v2 with this
> change.

Cool, Reviewed-by: Nicholas Piggin <npiggin@gmail.com> in that case :)

Thanks,
Nick

Patch

diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index 0b095fa54049..6dd9913425bc 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -101,9 +101,12 @@  void __slb_restore_bolted_realmode(void)
 
 	 /* No isync needed because realmode. */
 	for (index = 0; index < SLB_NUM_BOLTED; index++) {
+		unsigned long rb = be64_to_cpu(p->save_area[index].esid);
+
+		rb = (rb & ~0xFFFul) | index;
 		asm volatile("slbmte  %0,%1" :
 		     : "r" (be64_to_cpu(p->save_area[index].vsid)),
-		       "r" (be64_to_cpu(p->save_area[index].esid)));
+		       "r" (rb));
 	}
 }