From patchwork Tue Sep 12 09:45:39 2017
X-Patchwork-Submitter: Stefan Bader
X-Patchwork-Id: 812743
From: Stefan Bader
To: kernel-team@lists.ubuntu.com
Subject: [Xenial PATCH 2/3] UBUNTU: SAUCE: s390/mm: fix local TLB flushing vs.
 detach of an mm address space
Date: Tue, 12 Sep 2017 11:45:39 +0200
Message-Id: <1505209542-17445-3-git-send-email-stefan.bader@canonical.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1505209542-17445-1-git-send-email-stefan.bader@canonical.com>
References: <1505209542-17445-1-git-send-email-stefan.bader@canonical.com>

From: Martin Schwidefsky

BugLink: http://bugs.launchpad.net/bugs/1708399

The local TLB flushing code keeps an additional mask in the mm.context,
the cpu_attach_mask. At the time a global flush of an address space is
done the cpu_attach_mask is copied to the mm_cpumask in order to avoid
future global flushes in case the mm is used by a single CPU only after
the flush.

Trouble is that the reset of the mm_cpumask is racy against the detach
of an mm address space by switch_mm. The current order is first the
global TLB flush and then the copy of the cpu_attach_mask to the
mm_cpumask. The order needs to be the other way around.

Cc:
Reviewed-by: Heiko Carstens
Signed-off-by: Martin Schwidefsky
(backported from b3e5dc45fd1ec2aa1de6b80008f9295eb17e0659 linux-next)
[merged with "s390/mm,kvm: flush gmap address space with IDTE"]
Signed-off-by: Stefan Bader
Acked-by: Colin Ian King
---
 arch/s390/include/asm/tlbflush.h | 56 ++++++++++++----------------------------
 1 file changed, 16 insertions(+), 40 deletions(-)

diff --git a/arch/s390/include/asm/tlbflush.h b/arch/s390/include/asm/tlbflush.h
index 80868c84..d54cc83 100644
--- a/arch/s390/include/asm/tlbflush.h
+++ b/arch/s390/include/asm/tlbflush.h
@@ -47,47 +47,31 @@ static inline void __tlb_flush_global(void)
 }
 
 /*
- * Flush TLB entries for a specific mm on all CPUs (in case gmap is used
- * this implicates multiple ASCEs!).
+ * Flush TLB entries for a specific ASCE on all CPUs.
  */
-static inline void __tlb_flush_full(struct mm_struct *mm)
+static inline void __tlb_flush_mm(struct mm_struct * mm)
 {
+	/*
+	 * If the machine has IDTE we prefer to do a per mm flush
+	 * on all cpus instead of doing a local flush if the mm
+	 * only ran on the local cpu.
+	 */
 	preempt_disable();
 	atomic_add(0x10000, &mm->context.attach_count);
-	if (cpumask_equal(mm_cpumask(mm), cpumask_of(smp_processor_id()))) {
-		/* Local TLB flush */
-		__tlb_flush_local();
+	/* Reset TLB flush mask */
+	if (MACHINE_HAS_TLB_LC)
+		cpumask_copy(mm_cpumask(mm), &mm->context.cpu_attach_mask);
+	barrier();
+	if (MACHINE_HAS_IDTE && list_empty(&mm->context.gmap_list)) {
+		__tlb_flush_idte(mm->context.asce);
 	} else {
 		/* Global TLB flush */
 		__tlb_flush_global();
-		/* Reset TLB flush mask */
-		if (MACHINE_HAS_TLB_LC)
-			cpumask_copy(mm_cpumask(mm),
-				     &mm->context.cpu_attach_mask);
 	}
 	atomic_sub(0x10000, &mm->context.attach_count);
 	preempt_enable();
 }
 
-/*
- * Flush TLB entries for a specific ASCE on all CPUs. Should never be used
- * when more than one asce (e.g. gmap) ran on this mm.
- */
-static inline void __tlb_flush_asce(struct mm_struct *mm, unsigned long asce)
-{
-	preempt_disable();
-	atomic_add(0x10000, &mm->context.attach_count);
-	if (MACHINE_HAS_IDTE)
-		__tlb_flush_idte(asce);
-	else
-		__tlb_flush_global();
-	/* Reset TLB flush mask */
-	if (MACHINE_HAS_TLB_LC)
-		cpumask_copy(mm_cpumask(mm), &mm->context.cpu_attach_mask);
-	atomic_sub(0x10000, &mm->context.attach_count);
-	preempt_enable();
-}
-
 static inline void __tlb_flush_kernel(void)
 {
 	if (MACHINE_HAS_IDTE)
@@ -97,7 +81,6 @@ static inline void __tlb_flush_kernel(void)
 }
 #else
 #define __tlb_flush_global()	__tlb_flush_local()
-#define __tlb_flush_full(mm)	__tlb_flush_local()
 
 /*
  * Flush TLB entries for a specific ASCE on all CPUs.
@@ -111,21 +94,14 @@ static inline void __tlb_flush_kernel(void)
 {
 	__tlb_flush_local();
 }
-#endif
 
 static inline void __tlb_flush_mm(struct mm_struct * mm)
 {
-	/*
-	 * If the machine has IDTE we prefer to do a per mm flush
-	 * on all cpus instead of doing a local flush if the mm
-	 * only ran on the local cpu.
-	 */
-	if (MACHINE_HAS_IDTE && list_empty(&mm->context.gmap_list))
-		__tlb_flush_asce(mm, mm->context.asce);
-	else
-		__tlb_flush_full(mm);
+	__tlb_flush_local();
 }
 
+#endif
+
 static inline void __tlb_flush_mm_lazy(struct mm_struct * mm)
 {
 	if (mm->context.flush_mm) {