From patchwork Mon Feb 18 12:39:37 2013
X-Patchwork-Submitter: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
X-Patchwork-Id: 221277
X-Patchwork-Delegate: davem@davemloft.net
From: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
Subject: [PATCH v6 10/46] smp, cpu hotplug: Fix smp_call_function_*() to prevent CPU offline properly
To: tglx@linutronix.de, peterz@infradead.org, tj@kernel.org, oleg@redhat.com,
 paulmck@linux.vnet.ibm.com, rusty@rustcorp.com.au, mingo@kernel.org,
 akpm@linux-foundation.org, namhyung@kernel.org
Cc: rostedt@goodmis.org, wangyun@linux.vnet.ibm.com,
 xiaoguangrong@linux.vnet.ibm.com, rjw@sisk.pl, sbw@mit.edu,
 fweisbec@gmail.com, linux@arm.linux.org.uk, nikunj@linux.vnet.ibm.com,
 srivatsa.bhat@linux.vnet.ibm.com, linux-pm@vger.kernel.org,
 linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linuxppc-dev@lists.ozlabs.org, netdev@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 walken@google.com, vincent.guittot@linaro.org
Date: Mon, 18 Feb 2013 18:09:37 +0530
Message-ID: <20130218123937.26245.44200.stgit@srivatsabhat.in.ibm.com>
In-Reply-To: <20130218123714.26245.61816.stgit@srivatsabhat.in.ibm.com>
References: <20130218123714.26245.61816.stgit@srivatsabhat.in.ibm.com>
User-Agent: StGIT/0.14.3
X-Mailing-List: netdev@vger.kernel.org

Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() to prevent CPUs from going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline
while these functions are invoked from atomic context.

Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---
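
[Note for reference, not part of the patch: the sketch below illustrates the
conversion pattern applied throughout kernel/smp.c in this patch. The function
names are hypothetical, and it is assumed (as implied by replacing get_cpu()
with get_online_cpus_atomic() + smp_processor_id() below) that
get_online_cpus_atomic() also disables preemption, so the raw
smp_processor_id() stays safe.]

/* Before: relies only on disabled preemption (and hence on stop_machine()) */
static void my_func_old(void)
{
        int cpu = get_cpu();            /* disables preemption */

        /* ... use 'cpu', send cross-CPU function-call IPIs, etc. ... */

        put_cpu();
}

/* After: explicitly synchronizes against CPU offline, still atomic */
static void my_func_new(void)
{
        int cpu;

        get_online_cpus_atomic();       /* blocks CPU offline; disables preemption */
        cpu = smp_processor_id();

        /* ... use 'cpu', send cross-CPU function-call IPIs, etc. ... */

        put_online_cpus_atomic();
}
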

 kernel/smp.c |   40 ++++++++++++++++++++++++++--------------
 1 file changed, 26 insertions(+), 14 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index 69f38bd..0f40d36 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -315,7 +315,8 @@ int smp_call_function_single(int cpu, smp_call_func_t func, void *info,
          * prevent preemption and reschedule on another processor,
          * as well as CPU removal
          */
-        this_cpu = get_cpu();
+        get_online_cpus_atomic();
+        this_cpu = smp_processor_id();
 
         /*
          * Can deadlock when called with interrupts disabled.
@@ -347,7 +348,7 @@ int smp_call_function_single(int cpu, smp_call_func_t func, void *info,
                 }
         }
 
-        put_cpu();
+        put_online_cpus_atomic();
 
         return err;
 }
@@ -376,8 +377,10 @@ int smp_call_function_any(const struct cpumask *mask,
         const struct cpumask *nodemask;
         int ret;
 
+        get_online_cpus_atomic();
         /* Try for same CPU (cheapest) */
-        cpu = get_cpu();
+        cpu = smp_processor_id();
+
         if (cpumask_test_cpu(cpu, mask))
                 goto call;
 
@@ -393,7 +396,7 @@ int smp_call_function_any(const struct cpumask *mask,
         cpu = cpumask_any_and(mask, cpu_online_mask);
 call:
         ret = smp_call_function_single(cpu, func, info, wait);
-        put_cpu();
+        put_online_cpus_atomic();
         return ret;
 }
 EXPORT_SYMBOL_GPL(smp_call_function_any);
@@ -414,25 +417,28 @@ void __smp_call_function_single(int cpu, struct call_single_data *data,
         unsigned int this_cpu;
         unsigned long flags;
 
-        this_cpu = get_cpu();
+        get_online_cpus_atomic();
+
+        this_cpu = smp_processor_id();
+
         /*
          * Can deadlock when called with interrupts disabled.
          * We allow cpu's that are not yet online though, as no one else can
          * send smp call function interrupt to this cpu and as such deadlocks
          * can't happen.
          */
-        WARN_ON_ONCE(cpu_online(smp_processor_id()) && wait && irqs_disabled()
+        WARN_ON_ONCE(cpu_online(this_cpu) && wait && irqs_disabled()
                      && !oops_in_progress);
 
         if (cpu == this_cpu) {
                 local_irq_save(flags);
                 data->func(data->info);
                 local_irq_restore(flags);
-        } else {
+        } else if ((unsigned)cpu < nr_cpu_ids && cpu_online(cpu)) {
                 csd_lock(data);
                 generic_exec_single(cpu, data, wait);
         }
-        put_cpu();
+        put_online_cpus_atomic();
 }
 
 /**
@@ -456,6 +462,8 @@ void smp_call_function_many(const struct cpumask *mask,
         unsigned long flags;
         int refs, cpu, next_cpu, this_cpu = smp_processor_id();
 
+        get_online_cpus_atomic();
+
         /*
          * Can deadlock when called with interrupts disabled.
          * We allow cpu's that are not yet online though, as no one else can
@@ -472,17 +480,18 @@ void smp_call_function_many(const struct cpumask *mask,
         /* No online cpus?  We're done. */
         if (cpu >= nr_cpu_ids)
-                return;
+                goto out_unlock;
 
         /* Do we have another CPU which isn't us? */
         next_cpu = cpumask_next_and(cpu, mask, cpu_online_mask);
         if (next_cpu == this_cpu)
-                next_cpu = cpumask_next_and(next_cpu, mask, cpu_online_mask);
+                next_cpu = cpumask_next_and(next_cpu, mask,
+                                            cpu_online_mask);
 
         /* Fastpath: do that cpu by itself. */
         if (next_cpu >= nr_cpu_ids) {
                 smp_call_function_single(cpu, func, info, wait);
-                return;
+                goto out_unlock;
         }
 
         data = &__get_cpu_var(cfd_data);
 
@@ -528,7 +537,7 @@ void smp_call_function_many(const struct cpumask *mask,
         /* Some callers race with other cpus changing the passed mask */
         if (unlikely(!refs)) {
                 csd_unlock(&data->csd);
-                return;
+                goto out_unlock;
         }
 
         /*
@@ -565,6 +574,9 @@ void smp_call_function_many(const struct cpumask *mask,
         /* Optionally wait for the CPUs to complete */
         if (wait)
                 csd_lock_wait(&data->csd);
+
+out_unlock:
+        put_online_cpus_atomic();
 }
 EXPORT_SYMBOL(smp_call_function_many);
 
@@ -585,9 +597,9 @@ EXPORT_SYMBOL(smp_call_function_many);
  */
 int smp_call_function(smp_call_func_t func, void *info, int wait)
 {
-        preempt_disable();
+        get_online_cpus_atomic();
         smp_call_function_many(cpu_online_mask, func, info, wait);
-        preempt_enable();
+        put_online_cpus_atomic();
 
         return 0;
 }
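
[For context, a hypothetical caller is sketched below; it is not part of this
patch and the function names are made up. With this change,
smp_call_function_single() pins the target CPU online internally via
get/put_online_cpus_atomic(), so such callers keep working even after
stop_machine() is removed from the CPU offline path.]

#include <linux/smp.h>
#include <linux/printk.h>

/* Runs on the target CPU, in IPI (interrupt) context */
static void report_cpu(void *info)
{
        pr_info("smp_call_function_single ran on CPU %d\n", smp_processor_id());
}

/* Ask 'target_cpu' to run report_cpu() and wait for it to finish */
static int poke_cpu(int target_cpu)
{
        /* Returns 0 on success, or -ENXIO if the target CPU is not online */
        return smp_call_function_single(target_cpu, report_cpu, NULL, 1);
}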