From patchwork Thu Jun 27 20:00:54 2013
X-Patchwork-Submitter: "Srivatsa S. Bhat"
X-Patchwork-Id: 255151
X-Patchwork-Delegate: davem@davemloft.net
From: "Srivatsa S. Bhat"
Subject: [PATCH v3 45/45] tile: Use get/put_online_cpus_atomic() to prevent CPU offline
To: tglx@linutronix.de, peterz@infradead.org, tj@kernel.org, oleg@redhat.com,
 paulmck@linux.vnet.ibm.com, rusty@rustcorp.com.au, mingo@kernel.org,
 akpm@linux-foundation.org, namhyung@kernel.org, walken@google.com,
 vincent.guittot@linaro.org, laijs@cn.fujitsu.com, David.Laight@aculab.com
Cc: rostedt@goodmis.org, wangyun@linux.vnet.ibm.com,
 xiaoguangrong@linux.vnet.ibm.com, sbw@mit.edu, fweisbec@gmail.com,
 zhong@linux.vnet.ibm.com, nikunj@linux.vnet.ibm.com,
 srivatsa.bhat@linux.vnet.ibm.com, linux-pm@vger.kernel.org,
 linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Chris Metcalf,
 "Srivatsa S. Bhat"
Date: Fri, 28 Jun 2013 01:30:54 +0530
Message-ID: <20130627200054.29830.68222.stgit@srivatsabhat.in.ibm.com>
In-Reply-To: <20130627195136.29830.10445.stgit@srivatsabhat.in.ibm.com>
References: <20130627195136.29830.10445.stgit@srivatsabhat.in.ibm.com>
User-Agent: StGIT/0.14.3
X-Mailing-List: netdev@vger.kernel.org

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while we reference them from atomic context.

Cc: Chris Metcalf
Signed-off-by: Srivatsa S. Bhat
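For reference, the pattern applied throughout this patch is sketched below.
This is only an illustration, not part of the patch: get/put_online_cpus_atomic()
are the APIs introduced earlier in this series, and flush_one_cpu() is a
hypothetical stand-in for the real remote operation (flush_remote() on tile):

	/*
	 * Sketch only: any atomic-context user of cpu_online_mask (or a
	 * per-mm cpumask) wraps the mask read and the remote operation in
	 * get/put_online_cpus_atomic(), so that a CPU cannot go offline
	 * in between once stop_machine() is removed from the offline path.
	 */
	static void flush_all_online_cpus(void)
	{
		int cpu;

		get_online_cpus_atomic();	/* block CPU offline (atomic-safe) */
		for_each_online_cpu(cpu)
			flush_one_cpu(cpu);	/* hypothetical per-cpu flush */
		put_online_cpus_atomic();	/* allow CPU offline again */
	}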
---

 arch/tile/kernel/module.c |    3 +++
 arch/tile/kernel/tlb.c    |   15 +++++++++++++++
 arch/tile/mm/homecache.c  |    3 +++
 3 files changed, 21 insertions(+)

diff --git a/arch/tile/kernel/module.c b/arch/tile/kernel/module.c
index 4918d91..db7d858 100644
--- a/arch/tile/kernel/module.c
+++ b/arch/tile/kernel/module.c
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -79,8 +80,10 @@ void module_free(struct module *mod, void *module_region)
 	vfree(module_region);

 	/* Globally flush the L1 icache. */
+	get_online_cpus_atomic();
 	flush_remote(0, HV_FLUSH_EVICT_L1I, cpu_online_mask, 0, 0, 0,
 		     NULL, NULL, 0);
+	put_online_cpus_atomic();

 	/*
 	 * FIXME: If module_region == mod->module_init, trim exception
diff --git a/arch/tile/kernel/tlb.c b/arch/tile/kernel/tlb.c
index 3fd54d5..a32b9dd 100644
--- a/arch/tile/kernel/tlb.c
+++ b/arch/tile/kernel/tlb.c
@@ -14,6 +14,7 @@
  */

 #include
+#include
 #include
 #include
 #include
@@ -35,6 +36,8 @@ void flush_tlb_mm(struct mm_struct *mm)
 {
 	HV_Remote_ASID asids[NR_CPUS];
 	int i = 0, cpu;
+
+	get_online_cpus_atomic();
 	for_each_cpu(cpu, mm_cpumask(mm)) {
 		HV_Remote_ASID *asid = &asids[i++];
 		asid->y = cpu / smp_topology.width;
@@ -43,6 +46,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 	}
 	flush_remote(0, HV_FLUSH_EVICT_L1I, mm_cpumask(mm), 0, 0, 0,
 		     NULL, asids, i);
+	put_online_cpus_atomic();
 }

 void flush_tlb_current_task(void)
@@ -55,8 +59,11 @@ void flush_tlb_page_mm(struct vm_area_struct *vma, struct mm_struct *mm,
 {
 	unsigned long size = vma_kernel_pagesize(vma);
 	int cache = (vma->vm_flags & VM_EXEC) ? HV_FLUSH_EVICT_L1I : 0;
+
+	get_online_cpus_atomic();
 	flush_remote(0, cache, mm_cpumask(mm), va, size, size,
 		     mm_cpumask(mm), NULL, 0);
+	put_online_cpus_atomic();
 }

 void flush_tlb_page(struct vm_area_struct *vma, unsigned long va)
@@ -71,13 +78,18 @@ void flush_tlb_range(struct vm_area_struct *vma,
 	unsigned long size = vma_kernel_pagesize(vma);
 	struct mm_struct *mm = vma->vm_mm;
 	int cache = (vma->vm_flags & VM_EXEC) ? HV_FLUSH_EVICT_L1I : 0;
+
+	get_online_cpus_atomic();
 	flush_remote(0, cache, mm_cpumask(mm), start, end - start, size,
 		     mm_cpumask(mm), NULL, 0);
+	put_online_cpus_atomic();
 }

 void flush_tlb_all(void)
 {
 	int i;
+
+	get_online_cpus_atomic();
 	for (i = 0; ; ++i) {
 		HV_VirtAddrRange r = hv_inquire_virtual(i);
 		if (r.size == 0)
@@ -89,10 +101,13 @@ void flush_tlb_all(void)
 			     r.start, r.size, HPAGE_SIZE, cpu_online_mask,
 			     NULL, 0);
 	}
+	put_online_cpus_atomic();
 }

 void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 {
+	get_online_cpus_atomic();
 	flush_remote(0, HV_FLUSH_EVICT_L1I, cpu_online_mask,
 		     start, end - start, PAGE_SIZE, cpu_online_mask, NULL, 0);
+	put_online_cpus_atomic();
 }
diff --git a/arch/tile/mm/homecache.c b/arch/tile/mm/homecache.c
index 1ae9119..7ff5bf0 100644
--- a/arch/tile/mm/homecache.c
+++ b/arch/tile/mm/homecache.c
@@ -397,9 +397,12 @@ void homecache_change_page_home(struct page *page, int order, int home)
 	BUG_ON(page_count(page) > 1);
 	BUG_ON(page_mapcount(page) != 0);
 	kva = (unsigned long) page_address(page);
+
+	get_online_cpus_atomic();
 	flush_remote(0, HV_FLUSH_EVICT_L2, &cpu_cacheable_map,
 		     kva, pages * PAGE_SIZE, PAGE_SIZE, cpu_online_mask,
 		     NULL, 0);
+	put_online_cpus_atomic();

 	for (i = 0; i < pages; ++i, kva += PAGE_SIZE) {
 		pte_t *ptep = virt_to_pte(NULL, kva);