[1/5] powerpc/mm: Add MMU features for TLB reservation & Paired MAS registers

Message ID 1250658513-13009-1-git-send-email-galak@kernel.crashing.org (mailing list archive)
State Superseded
Delegated to: Benjamin Herrenschmidt

Commit Message

Kumar Gala Aug. 19, 2009, 5:08 a.m. UTC
Support for TLB reservation (or TLB Write Conditional) and paired MAS
registers is optional for a processor implementation, so we handle
them via MMU feature sections.

We currently only use paired MAS registers to access the full RPN + perm
bits that are kept in MAS7||MAS3.  We assume that if an implementation has
a hardware page table at this time, it also implements TLB reservations.
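
A core that implements these would then advertise them in the
mmu_features mask of its cputable entry, along the lines of (illustrative
only, the actual wiring up of CPU entries is not part of this patch):

	.mmu_features	= /* existing MMU features, plus: */
			  MMU_FTR_USE_TLBRSRV | MMU_FTR_USE_PAIRED_MAS,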

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
---
 arch/powerpc/include/asm/mmu.h |    9 +++++++++
 arch/powerpc/mm/tlb_low_64e.S  |   36 +++++++++++++++++++++++++++++++++++-
 2 files changed, 44 insertions(+), 1 deletions(-)

Comments

Benjamin Herrenschmidt Aug. 19, 2009, 7:25 a.m. UTC | #1
On Wed, 2009-08-19 at 00:08 -0500, Kumar Gala wrote:
> Support for TLB reservation (or TLB Write Conditional) and paired MAS
> registers is optional for a processor implementation, so we handle
> them via MMU feature sections.
> 
> We currently only use paired MAS registers to access the full RPN + perm
> bits that are kept in MAS7||MAS3.  We assume that if an implementation has
> a hardware page table at this time, it also implements TLB reservations.

You also need to be careful with this code:

virt_page_table_tlb_miss_done:

	/* We have overridden MAS2:EPN but currently our primary TLB miss
	 * handler will always restore it so that should not be an issue,
	 * if we ever optimize the primary handler to not write MAS2 in
	 * some cases, we'll have to restore MAS2:EPN here based on the
	 * original fault's DEAR. If we do that we have to modify the
	 * ITLB miss handler to also store SRR0 in the exception frame
	 * as DEAR.
	 *
	 * However, one nasty thing we did is we cleared the reservation
	 * (well, potentially we did). We do a trick here: if we
	 * are not a level 0 exception (we interrupted the TLB miss) we
	 * offset the return address by -4 in order to replay the tlbsrx
	 * instruction there
	 */
	subf	r10,r13,r12
	cmpldi	cr0,r10,PACA_EXTLB+EX_TLB_SIZE
	bne-	1f
	ld	r11,PACA_EXTLB+EX_TLB_SIZE+EX_TLB_SRR0(r13)
	addi	r10,r11,-4
	std	r10,PACA_EXTLB+EX_TLB_SIZE+EX_TLB_SRR0(r13)

You may want to make the last 3 lines conditional on having tlbsrx.
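
Something like this maybe (untested, using the same feature macros as
elsewhere in your patch):

	subf	r10,r13,r12
	cmpldi	cr0,r10,PACA_EXTLB+EX_TLB_SIZE
	bne-	1f
BEGIN_MMU_FTR_SECTION
	ld	r11,PACA_EXTLB+EX_TLB_SIZE+EX_TLB_SRR0(r13)
	addi	r10,r11,-4
	std	r10,PACA_EXTLB+EX_TLB_SIZE+EX_TLB_SRR0(r13)
END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_TLBRSRV)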

Right now, in the no-tlbsrx. case, what happens is that it will go back
to the previous instruction, an or, which hopefully should be harmless
-but- this code is nasty enough that you really don't want to take that
sort of chance.

Feel free to add a fat comment next to the ld in the tlbsrx case itself
explaining why those two instructions must be kept together, and why any
change here must be reflected in the second level handler.
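
i.e. in the first level handler, something along these lines:

	/* The ld below must stay immediately after the tlbsrx.: on a
	 * nested TLB miss the second level handler rewinds SRR0 by 4
	 * so that the tlbsrx. is replayed and the reservation
	 * re-established before the ld is retried. Any change here
	 * must be reflected in virt_page_table_tlb_miss_done.
	 */
	PPC_TLBSRX_DOT(0,r16)
	ld	r14,0(r10)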

Cheers,
Ben.
Kumar Gala Aug. 19, 2009, 9:37 p.m. UTC | #2
On Aug 19, 2009, at 2:25 AM, Benjamin Herrenschmidt wrote:

> On Wed, 2009-08-19 at 00:08 -0500, Kumar Gala wrote:
>> Support for TLB reservation (or TLB Write Conditional) and paired MAS
>> registers is optional for a processor implementation, so we handle
>> them via MMU feature sections.
>>
>> We currently only use paired MAS registers to access the full RPN +
>> perm bits that are kept in MAS7||MAS3.  We assume that if an
>> implementation has a hardware page table at this time, it also
>> implements TLB reservations.
>
> You also need to be careful with this code:
>
> virt_page_table_tlb_miss_done:
>
> 	/* We have overridden MAS2:EPN but currently our primary TLB miss
> 	 * handler will always restore it so that should not be an issue,
> 	 * if we ever optimize the primary handler to not write MAS2 in
> 	 * some cases, we'll have to restore MAS2:EPN here based on the
> 	 * original fault's DEAR. If we do that we have to modify the
> 	 * ITLB miss handler to also store SRR0 in the exception frame
> 	 * as DEAR.
> 	 *
> 	 * However, one nasty thing we did is we cleared the reservation
> 	 * (well, potentially we did). We do a trick here: if we
> 	 * are not a level 0 exception (we interrupted the TLB miss) we
> 	 * offset the return address by -4 in order to replay the tlbsrx
> 	 * instruction there
> 	 */
> 	subf	r10,r13,r12
> 	cmpldi	cr0,r10,PACA_EXTLB+EX_TLB_SIZE
> 	bne-	1f
> 	ld	r11,PACA_EXTLB+EX_TLB_SIZE+EX_TLB_SRR0(r13)
> 	addi	r10,r11,-4
> 	std	r10,PACA_EXTLB+EX_TLB_SIZE+EX_TLB_SRR0(r13)
>
> You may want to make the last 3 lines conditional on having tlbsrx.

The whole thing only ever gets called if we had tlbsrx., so is there
any utility in making part of it conditional on tlbsrx?

> Right now, in the no-tlbsrx. case, what happens is that it will go back
> to the previous instruction, an or, which hopefully should be harmless
> -but- this code is nasty enough that you really don't want to take that
> sort of chance.
>
> Feel free to add a fat comment next to the ld in the tlbsrx case itself
> explaining why those two instructions must be kept together, and why any
> change here must be reflected in the second level handler.
>
> Cheers,
> Ben.
>
Benjamin Herrenschmidt Aug. 20, 2009, 12:43 a.m. UTC | #3
On Wed, 2009-08-19 at 16:37 -0500, Kumar Gala wrote:
> On Aug 19, 2009, at 2:25 AM, Benjamin Herrenschmidt wrote:

> The whole thing only ever gets called if we had tlbsrx., so is there
> any utility in making part of it conditional on tlbsrx?

I don't think so ... this is the second level TLB miss handler, taken
when the first level faults on the virtually linear page tables; it
has nothing to do with tlbsrx... however, it does offset the return
address back into the first level handler by -4 to account for
replaying the tlbsrx instruction, which you probably don't want to do.

Ben.
Kumar Gala Aug. 24, 2009, 4:12 p.m. UTC | #4
On Aug 19, 2009, at 7:43 PM, Benjamin Herrenschmidt wrote:

> On Wed, 2009-08-19 at 16:37 -0500, Kumar Gala wrote:
>> On Aug 19, 2009, at 2:25 AM, Benjamin Herrenschmidt wrote:
>
>> The whole thing only ever gets called if we had tlbsrx., so is there
>> any utility in making part of it conditional on tlbsrx?
>
> I don't think so ... this is the second level TLB miss handler, taken
> when the first level faults on the virtually linear page tables; it
> has nothing to do with tlbsrx... however, it does offset the return
> address back into the first level handler by -4 to account for
> replaying the tlbsrx instruction, which you probably don't want to do.

Duh.  Wasn't looking at the fall through.

But is there any reason to even have any of the 6 instructions in the  
'virt_page_table_tlb_miss_done' path if we don't have TLBSRX?

- k
Benjamin Herrenschmidt Aug. 25, 2009, 1:08 a.m. UTC | #5
On Mon, 2009-08-24 at 11:12 -0500, Kumar Gala wrote:
> Duh.  Wasn't looking at the fall through.
> 
> But is there any reason to even have any of the 6 instructions in the
> 'virt_page_table_tlb_miss_done' path if we don't have TLBSRX?
> 
No, that's what I said in my initial email :-) You can probably
"alternate out" that whole thing.

Cheers,
Ben.

Patch

diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
index 2fcfefc..7ffbb65 100644
--- a/arch/powerpc/include/asm/mmu.h
+++ b/arch/powerpc/include/asm/mmu.h
@@ -58,6 +58,15 @@ 
  */
 #define MMU_FTR_TLBIE_206		ASM_CONST(0x00400000)
 
+/* Enable use of TLB reservation.  Processor should support tlbsrx.
+ * instruction and MAS0[WQ].
+ */
+#define MMU_FTR_USE_TLBRSRV		ASM_CONST(0x00800000)
+
+/* Use paired MAS registers (MAS7||MAS3, etc.)
+ */
+#define MMU_FTR_USE_PAIRED_MAS		ASM_CONST(0x01000000)
+
 #ifndef __ASSEMBLY__
 #include <asm/cputable.h>
 
diff --git a/arch/powerpc/mm/tlb_low_64e.S b/arch/powerpc/mm/tlb_low_64e.S
index 10d524d..5b8e274 100644
--- a/arch/powerpc/mm/tlb_low_64e.S
+++ b/arch/powerpc/mm/tlb_low_64e.S
@@ -189,12 +189,16 @@  normal_tlb_miss:
 	clrrdi	r14,r14,3
 	or	r10,r15,r14
 
+BEGIN_MMU_FTR_SECTION
 	/* Set the TLB reservation and search for existing entry. Then load
 	 * the entry.
 	 */
 	PPC_TLBSRX_DOT(0,r16)
 	ld	r14,0(r10)
 	beq	normal_tlb_miss_done
+MMU_FTR_SECTION_ELSE
+	ld	r14,0(r10)
+ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_USE_TLBRSRV)
 
 finish_normal_tlb_miss:
 	/* Check if required permissions are met */
@@ -241,7 +245,14 @@  finish_normal_tlb_miss:
 	bne	1f
 	li	r11,MAS3_SW|MAS3_UW
 	andc	r15,r15,r11
-1:	mtspr	SPRN_MAS7_MAS3,r15
+1:
+BEGIN_MMU_FTR_SECTION
+	srdi	r16,r15,32
+	mtspr	SPRN_MAS3,r15
+	mtspr	SPRN_MAS7,r16
+MMU_FTR_SECTION_ELSE
+	mtspr	SPRN_MAS7_MAS3,r15
+ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_USE_PAIRED_MAS)
 
 	tlbwe
 
@@ -311,11 +322,13 @@  virt_page_table_tlb_miss:
 	rlwinm	r10,r10,0,16,1			/* Clear TID */
 	mtspr	SPRN_MAS1,r10
 1:
+BEGIN_MMU_FTR_SECTION
 	/* Search if we already have a TLB entry for that virtual address, and
 	 * if we do, bail out.
 	 */
 	PPC_TLBSRX_DOT(0,r16)
 	beq	virt_page_table_tlb_miss_done
+END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_TLBRSRV)
 
 	/* Now, we need to walk the page tables. First check if we are in
 	 * range.
@@ -367,7 +380,14 @@  virt_page_table_tlb_miss:
 	 */
 	clrldi	r11,r15,4		/* remove region ID from RPN */
 	ori	r10,r11,1		/* Or-in SR */
+
+BEGIN_MMU_FTR_SECTION
+	srdi	r16,r10,32
+	mtspr	SPRN_MAS3,r10
+	mtspr	SPRN_MAS7,r16
+MMU_FTR_SECTION_ELSE
 	mtspr	SPRN_MAS7_MAS3,r10
+ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_USE_PAIRED_MAS)
 
 	tlbwe
 
@@ -618,7 +638,14 @@  htw_tlb_miss:
 #else
 	ori	r10,r15,(BOOK3E_PAGESZ_4K << MAS3_SPSIZE_SHIFT)
 #endif
+
+BEGIN_MMU_FTR_SECTION
+	srdi	r16,r10,32
+	mtspr	SPRN_MAS3,r10
+	mtspr	SPRN_MAS7,r16
+MMU_FTR_SECTION_ELSE
 	mtspr	SPRN_MAS7_MAS3,r10
+ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_USE_PAIRED_MAS)
 
 	tlbwe
 
@@ -700,7 +727,14 @@  tlb_load_linear:
 	clrrdi	r10,r16,30		/* 1G page index */
 	clrldi	r10,r10,4		/* clear region bits */
 	ori	r10,r10,MAS3_SR|MAS3_SW|MAS3_SX
+
+BEGIN_MMU_FTR_SECTION
+	srdi	r16,r10,32
+	mtspr	SPRN_MAS3,r10
+	mtspr	SPRN_MAS7,r16
+MMU_FTR_SECTION_ELSE
 	mtspr	SPRN_MAS7_MAS3,r10
+ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_USE_PAIRED_MAS)
 
 	tlbwe