
[v2] powerpc/booke64: wrap tlb lock and search in htw miss with FTR_SMT

Message ID 1401489946-12935-1-git-send-email-scottwood@freescale.com (mailing list archive)
State Accepted
Delegated to: Scott Wood

Commit Message

Scott Wood May 30, 2014, 10:45 p.m. UTC
From: Laurentiu Tudor <Laurentiu.Tudor@freescale.com>

Virtualized environments may expose an e6500 dual-threaded core
as two single-threaded e6500 cores. Take advantage of this
and get rid of the tlb lock and the trap-causing tlbsx in
the htw miss handler by guarding with CPU_FTR_SMT, as is
already done in the bolted tlb1 miss handler.
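
For reference, this is the feature-section pattern being relied on (a
minimal illustrative sketch, not code from this patch; the instructions
and registers below are placeholders): anything between
BEGIN_FTR_SECTION and END_FTR_SECTION_IFSET(CPU_FTR_SMT) is kept when
the boot CPU reports SMT and is overwritten with nops by the boot-time
feature fixups otherwise, so single-threaded cores pay no runtime check.

    BEGIN_FTR_SECTION
    	li	r15,1		/* placeholder instruction, guarded by the feature bit */
    	stb	r15,0(r11)	/* placeholder store; nopped out on non-SMT cores */
    END_FTR_SECTION_IFSET(CPU_FTR_SMT)

The patch applies exactly this guard around the existing lock/tlbsx
sequence and around the unlock macro, matching what the bolted tlb1
miss handler already does.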

As seen in the results below, taken with the lmbench random memory
access latency test running under Freescale's Embedded Hypervisor,
there is a ~34% improvement.

Memory latencies in nanoseconds - smaller is better
    (WARNING - may not be correct, check graphs)
----------------------------------------------------
Host       Mhz   L1 $   L2 $    Main mem    Rand mem
---------  ---   ----   ----    --------    --------
smt       1665 1.8020   13.2    83.0         1149.7
nosmt     1665 1.8020   13.2    83.0          758.1
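
(The ~34% figure corresponds to the random-access row above:
(1149.7 - 758.1) / 1149.7 ~= 0.34.)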

Signed-off-by: Laurentiu Tudor <Laurentiu.Tudor@freescale.com>
Cc: Scott Wood <scottwood@freescale.com>
[scottwood@freescale.com: commit message tweak]
Signed-off-by: Scott Wood <scottwood@freescale.com>
---
v2:
 - s/expose/may expose/ in commit message
 - rebased onto my patch queue to resolve conflict
 - resent since the original didn't make it to the list archives
   or patchwork.

 arch/powerpc/mm/tlb_low_64e.S | 4 ++++
 1 file changed, 4 insertions(+)

Comments

Tudor Laurentiu June 2, 2014, 12:48 p.m. UTC | #1
On 05/31/2014 01:45 AM, Scott Wood wrote:
> From: Laurentiu Tudor <Laurentiu.Tudor@freescale.com>
>
> Virtualized environments may expose an e6500 dual-threaded core
> as two single-threaded e6500 cores. Take advantage of this
> and get rid of the tlb lock and the trap-causing tlbsx in
> the htw miss handler by guarding with CPU_FTR_SMT, as is
> already done in the bolted tlb1 miss handler.
>
> As seen in the results below, taken with the lmbench random memory
> access latency test running under Freescale's Embedded Hypervisor,
> there is a ~34% improvement.
>
> Memory latencies in nanoseconds - smaller is better
>      (WARNING - may not be correct, check graphs)
> ----------------------------------------------------
> Host       Mhz   L1 $   L2 $    Main mem    Rand mem
> ---------  ---   ----   ----    --------    --------
> smt       1665 1.8020   13.2    83.0         1149.7
> nosmt     1665 1.8020   13.2    83.0          758.1
>
> Signed-off-by: Laurentiu Tudor <Laurentiu.Tudor@freescale.com>
> Cc: Scott Wood <scottwood@freescale.com>
> [scottwood@freescale.com: commit message tweak]
> Signed-off-by: Scott Wood <scottwood@freescale.com>
> ---
> v2:
>   - s/expose/may expose/ in commit message
>   - rebased onto my patch queue to resolve conflict

Thanks!

>   - resent since the original didn't make it to the list archives
>     or patchwork.

The only thing I can think of is that maybe I've misspelled the mailing 
list address ...

---
Best Regards, Laurentiu
Scott Wood June 2, 2014, 4:45 p.m. UTC | #2
On Mon, 2014-06-02 at 15:48 +0300, Tudor Laurentiu wrote:
> On 05/31/2014 01:45 AM, Scott Wood wrote:
> > From: Laurentiu Tudor <Laurentiu.Tudor@freescale.com>
> >   - resent since the original didn't make it to the list archives
> >     or patchwork.
> 
> The only thing I can think of is that maybe I've misspelled the mailing 
> list address ...

It looks right to me.  Did you get a bounce?

-Scott
Tudor Laurentiu June 3, 2014, 2:49 p.m. UTC | #3
On 06/02/2014 07:45 PM, Scott Wood wrote:
> On Mon, 2014-06-02 at 15:48 +0300, Tudor Laurentiu wrote:
>> On 05/31/2014 01:45 AM, Scott Wood wrote:
>>> From: Laurentiu Tudor <Laurentiu.Tudor@freescale.com>
>>>    - resent since the original didn't make it to the list archives
>>>      or patchwork.
>>
>> The only thing I can think of is that maybe I've misspelled the mailing
>> list address ...
>
> It looks right to me.  Did you get a bounce?
>

Strangely, no. I'm out of ideas.

---
Best Regards, Laurentiu

Patch

diff --git a/arch/powerpc/mm/tlb_low_64e.S b/arch/powerpc/mm/tlb_low_64e.S
index 131f1f4..57c4d66 100644
--- a/arch/powerpc/mm/tlb_low_64e.S
+++ b/arch/powerpc/mm/tlb_low_64e.S
@@ -299,6 +299,7 @@  itlb_miss_fault_bolted:
  * r10 = crap (free to use)
  */
 tlb_miss_common_e6500:
+BEGIN_FTR_SECTION
 	/*
 	 * Search if we already have an indirect entry for that virtual
 	 * address, and if we do, bail out.
@@ -333,6 +334,7 @@  tlb_miss_common_e6500:
 
 	andis.	r10,r10,MAS1_VALID@h
 	bne	tlb_miss_done_e6500
+END_FTR_SECTION_IFSET(CPU_FTR_SMT)
 
 	/* Now, we need to walk the page tables. First check if we are in
 	 * range.
@@ -393,11 +395,13 @@  tlb_miss_common_e6500:
 
 tlb_miss_done_e6500:
 	.macro	tlb_unlock_e6500
+BEGIN_FTR_SECTION
 	beq	cr1,1f		/* no unlock if lock was recursively grabbed */
 	li	r15,0
 	isync
 	stb	r15,0(r11)
 1:
+END_FTR_SECTION_IFSET(CPU_FTR_SMT)
 	.endm
 
 	tlb_unlock_e6500