
powerpc/mm/hash: hard disable irq in the SLB insert path

Message ID 20180601082402.17192-1-aneesh.kumar@linux.ibm.com (mailing list archive)
State Accepted
Commit a5db5060e0b2e27605df272224bfd470f644d8a5
Series powerpc/mm/hash: hard disable irq in the SLB insert path

Commit Message

Aneesh Kumar K.V June 1, 2018, 8:24 a.m. UTC
When inserting SLB entries for an EA above 512TB, we need to hard disable
interrupts. This makes sure we don't take a PMU interrupt that could touch
a user space address via a stack dump.

Also add a comment explaining why we don't need a context synchronizing
isync with the slbmte.

Fixes: f384796c4 ("powerpc/mm: Add support for handling > 512TB address in SLB miss")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/powerpc/mm/slb.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)
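
For background, a minimal sketch of the soft- vs hard-disable distinction the
fix relies on. The helper below is illustrative only (it does not exist in
slb.c); the calls it makes are the same ones the patch uses:

#include <linux/irqflags.h>	/* irqs_disabled() */
#include <linux/mmdebug.h>	/* VM_WARN_ON() */
#include <asm/hw_irq.h>		/* hard_irq_disable() on 64-bit powerpc */

/* Illustrative helper, not part of the patch or of slb.c. */
static void slb_insert_prologue_sketch(void)
{
	/* Callers reach the SLB insert with interrupts soft-disabled. */
	VM_WARN_ON(!irqs_disabled());

	/*
	 * Soft-disable leaves MSR[EE] set so that perf can profile
	 * irq-disabled regions, which means a PMU exception can still
	 * arrive here.  hard_irq_disable() clears MSR[EE], so no PMU
	 * interrupt can walk a user stack while the SLB entry for the
	 * new user EA is still missing.
	 */
	hard_irq_disable();
}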

Comments

Michael Ellerman June 4, 2018, 2:11 p.m. UTC | #1
On Fri, 2018-06-01 at 08:24:02 UTC, "Aneesh Kumar K.V" wrote:
> When inserting SLB entries for an EA above 512TB, we need to hard disable
> interrupts. This makes sure we don't take a PMU interrupt that could touch
> a user space address via a stack dump.
> 
> Also add a comment explaining why we don't need a context synchronizing
> isync with the slbmte.
> 
> Fixes: f384796c4 ("powerpc/mm: Add support for handling > 512TB address in SLB miss")
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>

Applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/a5db5060e0b2e27605df272224bfd470f644d8a5

cheers

Patch

diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index 66577cc66dc9..27f5b81c372a 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -352,6 +352,14 @@ static void insert_slb_entry(unsigned long vsid, unsigned long ea,
 	/*
 	 * We are irq disabled, hence should be safe to access PACA.
 	 */
+	VM_WARN_ON(!irqs_disabled());
+
+	/*
+	 * We can't take a PMU exception in the following code, so hard
+	 * disable interrupts.
+	 */
+	hard_irq_disable();
+
 	index = get_paca()->stab_rr;
 
 	/*
@@ -369,6 +377,11 @@ static void insert_slb_entry(unsigned long vsid, unsigned long ea,
 		    ((unsigned long) ssize << SLB_VSID_SSIZE_SHIFT);
 	esid_data = mk_esid_data(ea, ssize, index);
 
+	/*
+	 * No need for an isync before or after this slbmte. The exception
+	 * we enter with and the rfid we exit with are context synchronizing.
+	 * Also we only handle user segments here.
+	 */
 	asm volatile("slbmte %0, %1" : : "r" (vsid_data), "r" (esid_data)
 		     : "memory");