From patchwork Fri Aug 10 06:42:48 2018
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 956006
From: Nicholas Piggin <npiggin@gmail.com>
To: linuxppc-dev@lists.ozlabs.org
Cc: Nicholas Piggin, kvm-ppc@vger.kernel.org, "Gautham R. Shenoy",
    Mahesh Jagannath Salgaonkar, "Aneesh Kumar K.V", Akshay Adiga
Subject: [PATCH v2 1/2] powerpc/64s: move machine check SLB flushing to mm/slb.c
Date: Fri, 10 Aug 2018 16:42:48 +1000
Message-Id: <20180810064249.13724-1-npiggin@gmail.com>
X-Mailer: git-send-email 2.17.0
X-Mailing-List: kvm-ppc@vger.kernel.org

The machine check code that flushes and restores bolted segments in
real mode belongs in mm/slb.c. This will also be used by pseries
machine check and idle code in future changes.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>

Since v1:
- Restore the test for slb_shadow (mpe)
---
 arch/powerpc/include/asm/book3s/64/mmu-hash.h |  3 ++
 arch/powerpc/kernel/mce_power.c               | 26 +++++--------
 arch/powerpc/mm/slb.c                         | 39 +++++++++++++++++++
 3 files changed, 51 insertions(+), 17 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
index 2f74bdc805e0..d4e398185b3a 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
@@ -497,6 +497,9 @@ extern void hpte_init_native(void);
 
 extern void slb_initialize(void);
 extern void slb_flush_and_rebolt(void);
+extern void slb_flush_all_realmode(void);
+extern void __slb_restore_bolted_realmode(void);
+extern void slb_restore_bolted_realmode(void);
 extern void slb_vmalloc_update(void);
 extern void slb_set_size(u16 size);
 
diff --git a/arch/powerpc/kernel/mce_power.c b/arch/powerpc/kernel/mce_power.c
index d6756af6ec78..3497c8329c1d 100644
--- a/arch/powerpc/kernel/mce_power.c
+++ b/arch/powerpc/kernel/mce_power.c
@@ -62,11 +62,8 @@ static unsigned long addr_to_pfn(struct pt_regs *regs, unsigned long addr)
 #ifdef CONFIG_PPC_BOOK3S_64
 static void flush_and_reload_slb(void)
 {
-	struct slb_shadow *slb;
-	unsigned long i, n;
-
 	/* Invalidate all SLBs */
-	asm volatile("slbmte %0,%0; slbia" : : "r" (0));
+	slb_flush_all_realmode();
 
 #ifdef CONFIG_KVM_BOOK3S_HANDLER
 	/*
@@ -76,22 +73,17 @@ static void flush_and_reload_slb(void)
 	if (get_paca()->kvm_hstate.in_guest)
 		return;
 #endif
-
-	/* For host kernel, reload the SLBs from shadow SLB buffer. */
-	slb = get_slb_shadow();
-	if (!slb)
+	if (early_radix_enabled())
 		return;
 
-	n = min_t(u32, be32_to_cpu(slb->persistent), SLB_MIN_SIZE);
-
-	/* Load up the SLB entries from shadow SLB */
-	for (i = 0; i < n; i++) {
-		unsigned long rb = be64_to_cpu(slb->save_area[i].esid);
-		unsigned long rs = be64_to_cpu(slb->save_area[i].vsid);
+	/*
+	 * This probably shouldn't happen, but it may be possible it's
+	 * called in early boot before SLB shadows are allocated.
+	 */
+	if (!get_slb_shadow())
+		return;
 
-		rb = (rb & ~0xFFFul) | i;
-		asm volatile("slbmte %0,%1" : : "r" (rs), "r" (rb));
-	}
+	slb_restore_bolted_realmode();
 }
 #endif
 
diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index cb796724a6fc..0b095fa54049 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -90,6 +90,45 @@ static inline void create_shadowed_slbe(unsigned long ea, int ssize,
 		     : "memory" );
 }
 
+/*
+ * Insert bolted entries into SLB (which may not be empty, so don't clear
+ * slb_cache_ptr).
+ */
+void __slb_restore_bolted_realmode(void)
+{
+	struct slb_shadow *p = get_slb_shadow();
+	enum slb_index index;
+
+	 /* No isync needed because realmode. */
+	for (index = 0; index < SLB_NUM_BOLTED; index++) {
+		asm volatile("slbmte %0,%1" :
+		     : "r" (be64_to_cpu(p->save_area[index].vsid)),
+		       "r" (be64_to_cpu(p->save_area[index].esid)));
+	}
+}
+
+/*
+ * Insert the bolted entries into an empty SLB.
+ * This is not the same as rebolt because the bolted segments are not
+ * changed, just loaded from the shadow area.
+ */
+void slb_restore_bolted_realmode(void)
+{
+	__slb_restore_bolted_realmode();
+	get_paca()->slb_cache_ptr = 0;
+}
+
+/*
+ * This flushes all SLB entries including 0, so it must be realmode.
+ */
+void slb_flush_all_realmode(void)
+{
+	/*
+	 * This flushes all SLB entries including 0, so it must be realmode.
+	 */
+	asm volatile("slbmte %0,%0; slbia" : : "r" (0));
+}
+
 static void __slb_flush_and_rebolt(void)
 {
 	/* If you change this make sure you change SLB_NUM_BOLTED