From patchwork Thu Jun 27 20:00:38 2013
X-Patchwork-Submitter: "Srivatsa S. Bhat"
X-Patchwork-Id: 255154
X-Patchwork-Delegate: davem@davemloft.net
From: "Srivatsa S. Bhat"
Subject: [PATCH v3 43/45] sh: Use get/put_online_cpus_atomic() to prevent CPU offline
To: tglx@linutronix.de, peterz@infradead.org, tj@kernel.org, oleg@redhat.com,
    paulmck@linux.vnet.ibm.com, rusty@rustcorp.com.au, mingo@kernel.org,
    akpm@linux-foundation.org, namhyung@kernel.org, walken@google.com,
    vincent.guittot@linaro.org, laijs@cn.fujitsu.com, David.Laight@aculab.com
Cc: rostedt@goodmis.org, wangyun@linux.vnet.ibm.com,
    xiaoguangrong@linux.vnet.ibm.com, sbw@mit.edu, fweisbec@gmail.com,
    zhong@linux.vnet.ibm.com, nikunj@linux.vnet.ibm.com,
    srivatsa.bhat@linux.vnet.ibm.com, linux-pm@vger.kernel.org,
    linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    Paul Mundt, Thomas Gleixner, linux-sh@vger.kernel.org, "Srivatsa S. Bhat"
Date: Fri, 28 Jun 2013 01:30:38 +0530
Message-ID: <20130627200038.29830.83910.stgit@srivatsabhat.in.ibm.com>
In-Reply-To: <20130627195136.29830.10445.stgit@srivatsabhat.in.ibm.com>
References: <20130627195136.29830.10445.stgit@srivatsabhat.in.ibm.com>
User-Agent: StGIT/0.14.3

Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on disabling preemption to prevent CPUs from going offline from under
us. Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline when invoking them from atomic context. (A brief sketch of this
calling pattern is included after the diff below.)
Cc: Paul Mundt
Cc: Thomas Gleixner
Cc: linux-sh@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat
---

 arch/sh/kernel/smp.c |   12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/sh/kernel/smp.c b/arch/sh/kernel/smp.c
index 4569645..42ec182 100644
--- a/arch/sh/kernel/smp.c
+++ b/arch/sh/kernel/smp.c
@@ -357,7 +357,7 @@ static void flush_tlb_mm_ipi(void *mm)
  */
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	preempt_disable();
+	get_online_cpus_atomic();
 
 	if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
 		smp_call_function(flush_tlb_mm_ipi, (void *)mm, 1);
@@ -369,7 +369,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 	}
 	local_flush_tlb_mm(mm);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 struct flush_tlb_data {
@@ -390,7 +390,7 @@ void flush_tlb_range(struct vm_area_struct *vma,
 {
 	struct mm_struct *mm = vma->vm_mm;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
 		struct flush_tlb_data fd;
 
@@ -405,7 +405,7 @@ void flush_tlb_range(struct vm_area_struct *vma,
 				cpu_context(i, mm) = 0;
 	}
 	local_flush_tlb_range(vma, start, end);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 static void flush_tlb_kernel_range_ipi(void *info)
@@ -433,7 +433,7 @@ static void flush_tlb_page_ipi(void *info)
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 {
-	preempt_disable();
+	get_online_cpus_atomic();
 	if ((atomic_read(&vma->vm_mm->mm_users) != 1) ||
 	    (current->mm != vma->vm_mm)) {
 		struct flush_tlb_data fd;
@@ -448,7 +448,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 				cpu_context(i, vma->vm_mm) = 0;
 	}
 	local_flush_tlb_page(vma, page);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 static void flush_tlb_one_ipi(void *info)
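
For reference, the conversion pattern applied above can be summarized by the
following minimal sketch. This is not part of the patch: it assumes the
get/put_online_cpus_atomic() API introduced earlier in this series (assumed
here to be available via <linux/cpu.h>), and flush_something() /
flush_something_ipi() are hypothetical placeholder names.

#include <linux/smp.h>	/* smp_call_function() */
#include <linux/cpu.h>	/* get/put_online_cpus_atomic() -- added by this series */

/* Hypothetical IPI handler; runs on each of the other online CPUs. */
static void flush_something_ipi(void *info)
{
	/* per-CPU flush work would go here */
}

/* Hypothetical caller showing the preempt_disable() -> get_online_cpus_atomic() conversion. */
void flush_something(void)
{
	/*
	 * Previously, preempt_disable() was sufficient: the stop_machine()
	 * based CPU offline path could not run while we were non-preemptible.
	 * With stop_machine() removed, take the new read-side synchronization
	 * instead; it is safe to call from atomic context.
	 */
	get_online_cpus_atomic();

	/* The set of online CPUs cannot change until we drop this. */
	smp_call_function(flush_something_ipi, NULL, 1);

	put_online_cpus_atomic();
}

The hunks above apply exactly this substitution to the sh TLB-flush paths,
where the smp_call_function() IPIs must not race with a CPU going offline.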