[05/13] KVM: PPC: Ultravisor: Flush partition TLB cache only if SMF is not enabled

Message ID 1548172784-27414-6-git-send-email-linuxram@us.ibm.com
State New
Series
  • KVM: PPC: Paravirtualize KVM to support Ultravisor

Commit Message

Ram Pai Jan. 22, 2019, 3:59 p.m.
The Ultravisor is responsible for flushing the TLB cache, since it manages
the PATE entries. Hence skip the TLB flush if an Ultravisor is enabled
on the system.

Signed-off-by: Ram Pai <linuxram@us.ibm.com>
---
 arch/powerpc/mm/pgtable-book3s64.c | 36 +++++++++++++++++++++---------------
 1 file changed, 21 insertions(+), 15 deletions(-)

Comments

Paul Mackerras Jan. 22, 2019, 11:58 p.m. | #1
On Tue, Jan 22, 2019 at 07:59:36AM -0800, Ram Pai wrote:
> The Ultravisor is responsible for flushing the TLB cache, since it manages
> the PATE entries. Hence skip the TLB flush if an Ultravisor is enabled
> on the system.

Do you know that the nest MMU doesn't snoop those tlbies and act on
them?  If it does act on them then we need to keep them.

Paul.

Patch

diff --git a/arch/powerpc/mm/pgtable-book3s64.c b/arch/powerpc/mm/pgtable-book3s64.c
index ba6b34d..1d79b06 100644
--- a/arch/powerpc/mm/pgtable-book3s64.c
+++ b/arch/powerpc/mm/pgtable-book3s64.c
@@ -223,21 +223,9 @@  void __init mmu_partition_table_init(void)
 	powernv_set_nmmu_ptcr(ptcr);
 }
 
-static void __mmu_partition_table_set_entry(unsigned int lpid, unsigned long dw0,
-				   unsigned long dw1)
+static void flush_partition(unsigned int lpid, unsigned long dw0)
 {
-	unsigned long old = be64_to_cpu(partition_tb[lpid].patb0);
-
-	partition_tb[lpid].patb0 = cpu_to_be64(dw0);
-	partition_tb[lpid].patb1 = cpu_to_be64(dw1);
-
-	/*
-	 * Global flush of TLBs and partition table caches for this lpid.
-	 * The type of flush (hash or radix) depends on what the previous
-	 * use of this partition ID was, not the new use.
-	 */
-	asm volatile("ptesync" : : : "memory");
-	if (old & PATB_HR) {
+	if (dw0 & PATB_HR) {
 		asm volatile(PPC_TLBIE_5(%0,%1,2,0,1) : :
 			     "r" (TLBIEL_INVAL_SET_LPID), "r" (lpid));
 		asm volatile(PPC_TLBIE_5(%0,%1,2,1,1) : :
@@ -252,10 +240,28 @@  static void __mmu_partition_table_set_entry(unsigned int lpid, unsigned long dw0
 	asm volatile("eieio; tlbsync; ptesync" : : : "memory");
 }
 
+static void __mmu_partition_table_set_entry(unsigned int lpid, unsigned long dw0,
+				unsigned long dw1)
+{
+	unsigned long old = be64_to_cpu(partition_tb[lpid].patb0);
+
+	partition_tb[lpid].patb0 = cpu_to_be64(dw0);
+	partition_tb[lpid].patb1 = cpu_to_be64(dw1);
+
+	/*
+	 * Global flush of TLBs and partition table caches for this lpid.
+	 * The type of flush (hash or radix) depends on what the previous
+	 * use of this partition ID was, not the new use.
+	 */
+	if (!smf_enabled())
+		flush_partition(lpid, old);
+}
+
 void mmu_partition_table_set_entry(unsigned int lpid, unsigned long dw0,
 				  unsigned long dw1)
 {
-	pr_info("%s: SMF Regitered PATE for Hypervisor dw0 = 0x%lx dw1 = 0x%lx ", __func__, dw0, dw1);
+	pr_info("%s: SMF Registered PATE dw0 = 0x%lx dw1 = 0x%lx lpid = 0x%x",
+			__func__, dw0, dw1, lpid);
 	if (smf_enabled())
 		uv_register_pate(lpid, dw0, dw1);