Patchwork [-V3,09/11] arch/powerpc: Use 50 bits of VSID in slbmte

Submitter: Aneesh Kumar K.V
Date: July 9, 2012, 1:13 p.m.
Message ID: <1341839621-28332-10-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
Permalink: /patch/169833/
State: Changes Requested
Delegated to: Benjamin Herrenschmidt

Comments

Aneesh Kumar K.V - July 9, 2012, 1:13 p.m.
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

Increase the number of valid VSID bits in the slbmte instruction.
We will use the new bits when we increase the number of valid VSID bits.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/mm/slb_low.S |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
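
For reference, the last rldimi operand is the mask-begin bit (IBM numbering, bit 0 = MSB), so the field inserted above the flag bits runs from that bit down to bit 63 - SLB_VSID_SHIFT; moving mask-begin from 16 to 2 widens it from 36 to 50 VSID bits. Below is a minimal C sketch (not kernel code) that emulates rldimi to show the effect, assuming SLB_VSID_SHIFT = 12 for 256M segments; the flag value and the all-ones VSID are made up for illustration.

#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative sketch, not kernel code: emulate the PowerPC
 * "rldimi rA,rS,SH,MB" instruction -- rotate rS left by SH and
 * insert it into rA under a mask covering IBM bits MB..(63-SH).
 */
static uint64_t rldimi(uint64_t ra, uint64_t rs, unsigned sh, unsigned mb)
{
	uint64_t rot  = sh ? (rs << sh) | (rs >> (64 - sh)) : rs;
	unsigned me   = 63 - sh;
	uint64_t mask = (~0ULL >> mb) & (~0ULL << (63 - me));

	return (rot & mask) | (ra & ~mask);
}

int main(void)
{
	const unsigned SLB_VSID_SHIFT = 12;	/* 256M segment shift */
	uint64_t flags = 0xc90;			/* made-up SLB flag bits */
	uint64_t vsid  = (1ULL << 50) - 1;	/* 50-bit all-ones VSID */

	/* mask-begin 16 keeps 36 VSID bits, mask-begin 2 keeps 50 */
	printf("mb=16: %016llx\n",
	       (unsigned long long)rldimi(flags, vsid, SLB_VSID_SHIFT, 16));
	printf("mb=2 : %016llx\n",
	       (unsigned long long)rldimi(flags, vsid, SLB_VSID_SHIFT, 2));
	return 0;
}

With a 50-bit VSID the old mask-begin of 16 silently drops the top 14 VSID bits, which is why the mask has to widen together with the VSID.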
Paul Mackerras - July 23, 2012, 12:06 a.m.
On Mon, Jul 09, 2012 at 06:43:39PM +0530, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> 
> Increase the number of valid VSID bits in the slbmte instruction.
> We will use the new bits when we increase the number of valid VSID bits.
> 
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> ---
>  arch/powerpc/mm/slb_low.S |    4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/mm/slb_low.S b/arch/powerpc/mm/slb_low.S
> index c355af6..c1fc81c 100644
> --- a/arch/powerpc/mm/slb_low.S
> +++ b/arch/powerpc/mm/slb_low.S
> @@ -226,7 +226,7 @@ _GLOBAL(slb_allocate_user)
>   */
>  slb_finish_load:
>  	ASM_VSID_SCRAMBLE(r10,r9,256M)
> -	rldimi	r11,r10,SLB_VSID_SHIFT,16	/* combine VSID and flags */
> +	rldimi	r11,r10,SLB_VSID_SHIFT,2	/* combine VSID and flags */

You can't do that without either changing ASM_VSID_SCRAMBLE or masking
the VSID it generates to 36 bits, since the logic in ASM_VSID_SCRAMBLE
can leave non-zero bits in the high 28 bits of the result.  Similarly
for the 1T case.

Paul.
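
To make this concrete, below is a rough C model (an illustrative sketch, not the kernel macro) of the multiply-and-fold that ASM_VSID_SCRAMBLE performs to reduce the proto-VSID modulo 2^36 - 1 in the 256M case; the constants are the values from the kernel headers of that era. Only the low VSID_BITS bits of the folded result are the answer, so bits above them can be non-zero and have to be masked off, which the old rldimi mask-begin of 16 did implicitly.

#include <stdint.h>
#include <stdio.h>

/*
 * Rough C model of ASM_VSID_SCRAMBLE for 256M segments: compute
 * proto_vsid * MULTIPLIER mod (2^36 - 1) by folding instead of
 * dividing.  Illustration only, not the actual kernel macro.
 */
#define VSID_BITS_256M		36
#define VSID_MULTIPLIER_256M	200730139ULL	/* 28-bit prime */

static uint64_t vsid_scramble_256m(uint64_t proto_vsid)
{
	uint64_t x = proto_vsid * VSID_MULTIPLIER_256M;

	/* fold: low 36 bits of the product plus the bits above them */
	x = (x & ((1ULL << VSID_BITS_256M) - 1)) + (x >> VSID_BITS_256M);

	/*
	 * Handle the wrap past 2^36 - 1.  Only the low 36 bits of the
	 * result are meaningful; bits 36 and up may still be set, so the
	 * caller must mask them off before building the SLB entry.
	 */
	x += (x + 1) >> VSID_BITS_256M;
	return x;
}

int main(void)
{
	uint64_t raw  = vsid_scramble_256m(0x123456789ULL);
	uint64_t vsid = raw & ((1ULL << VSID_BITS_256M) - 1);

	printf("raw %#llx, masked vsid %#llx, bits above 35: %#llx\n",
	       (unsigned long long)raw, (unsigned long long)vsid,
	       (unsigned long long)(raw >> VSID_BITS_256M));
	return 0;
}

So widening the rldimi mask without masking the scrambled value (or changing ASM_VSID_SCRAMBLE itself) can let those stray high bits reach slbmte.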

Patch

diff --git a/arch/powerpc/mm/slb_low.S b/arch/powerpc/mm/slb_low.S
index c355af6..c1fc81c 100644
--- a/arch/powerpc/mm/slb_low.S
+++ b/arch/powerpc/mm/slb_low.S
@@ -226,7 +226,7 @@ _GLOBAL(slb_allocate_user)
  */
 slb_finish_load:
 	ASM_VSID_SCRAMBLE(r10,r9,256M)
-	rldimi	r11,r10,SLB_VSID_SHIFT,16	/* combine VSID and flags */
+	rldimi	r11,r10,SLB_VSID_SHIFT,2	/* combine VSID and flags */
 
 	/* r3 = EA, r11 = VSID data */
 	/*
@@ -290,7 +290,7 @@ _GLOBAL(slb_compare_rr_to_size)
 slb_finish_load_1T:
 	srdi	r10,r10,40-28		/* get 1T ESID */
 	ASM_VSID_SCRAMBLE(r10,r9,1T)
-	rldimi	r11,r10,SLB_VSID_SHIFT_1T,16	/* combine VSID and flags */
+	rldimi	r11,r10,SLB_VSID_SHIFT_1T,2	/* combine VSID and flags */
 	li	r10,MMU_SEGSIZE_1T
 	rldimi	r11,r10,SLB_VSID_SSIZE_SHIFT,0	/* insert segment size */
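
As a quick field-width check on both hunks (an illustrative calculation, assuming SLB_VSID_SHIFT = 12 and SLB_VSID_SHIFT_1T = 24 as in the kernel headers): rldimi inserts IBM bits MB through 63 - SH, so the inserted width is (63 - SH) - MB + 1. Moving MB from 16 to 2 takes the 256M entry from 36 to 50 VSID bits and the 1T entry from 24 to 38 bits, which is only safe if the scrambled VSID is first masked to that many bits, per the comment above.

#include <stdio.h>

/*
 * Width of the field that "rldimi rA,rS,SH,MB" inserts: IBM bits
 * MB..(63-SH).  Shift values assumed from the kernel headers
 * (SLB_VSID_SHIFT = 12, SLB_VSID_SHIFT_1T = 24).
 */
static int rldimi_width(int sh, int mb)
{
	return (63 - sh) - mb + 1;
}

int main(void)
{
	printf("256M: mb=16 -> %d bits, mb=2 -> %d bits\n",
	       rldimi_width(12, 16), rldimi_width(12, 2));
	printf("1T  : mb=16 -> %d bits, mb=2 -> %d bits\n",
	       rldimi_width(24, 16), rldimi_width(24, 2));
	return 0;
}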