Patchwork [3.5.y.z,extended,stable] Patch "smp: Fix SMP function call empty cpu mask race" has been added to staging queue

Submitter Herton Ronaldo Krzesinski
Date Jan. 31, 2013, 10:11 p.m.
Message ID <>
Permalink /patch/217443/
State New


Herton Ronaldo Krzesinski - Jan. 31, 2013, 10:11 p.m.
This is a note to let you know that I have just added a patch titled

    smp: Fix SMP function call empty cpu mask race

to the linux-3.5.y-queue branch of the 3.5.y.z extended stable tree 
which can be found at:;a=shortlog;h=refs/heads/linux-3.5.y-queue

If you, or anyone else, feel it should not be added to this tree, please
reply to this email.

For more information about the 3.5.y.z tree, see



From 101bb7130aea8e476d86d164fb49f655d21656c5 Mon Sep 17 00:00:00 2001
From: Wang YanQing <>
Date: Sat, 26 Jan 2013 15:53:57 +0800
Subject: [PATCH] smp: Fix SMP function call empty cpu mask race

commit f44310b98ddb7f0d06550d73ed67df5865e3eda5 upstream.

I get the following warning every day with v3.7, once or
twice a day:

  [ 2235.186027] WARNING: at /mnt/sda7/kernel/linux/arch/x86/kernel/apic/ipi.c:109 default_send_IPI_mask_logical+0x2f/0xb8()

As explained by Linus as well:

 | Once we've done the "list_add_rcu()" to add it to the
 | queue, we can have (another) IPI to the target CPU that can
 | now see it and clear the mask.
 | So by the time we get to actually send the IPI, the mask might
 | have been cleared by another IPI.

This patch also fixes a system hang problem, if the data->cpumask
gets cleared after passing this point:

        if (WARN_ONCE(!mask, "empty IPI mask"))

then the problem in commit 83d349f35e1a ("x86: don't send an IPI to
the empty set of CPU's") will happen again.
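
Put differently, the destination mask has to be captured before the entry
becomes visible on the call_function queue. A minimal sketch of the ordering
the patch below establishes (names taken from the patch and kernel/smp.c;
locking, reference counting and the wait path are omitted, so this is not
standalone code):

	/* Snapshot the mask while no other CPU can see this entry yet. */
	cpumask_copy(data->cpumask_ipi, data->cpumask);

	/*
	 * Publish the entry: from here on, a target CPU handling an IPI
	 * sent for some *other* call can find it and clear bits in
	 * data->cpumask, possibly emptying it.
	 */
	list_add_rcu(&data->csd.list, &call_function.queue);

	/*
	 * Send the IPI from the private snapshot, which no remote CPU
	 * can modify, so it can never be empty at this point.
	 */
	arch_send_call_function_ipi_mask(data->cpumask_ipi);

Since cfd_data is per-CPU and cpumask_ipi is only ever written by the sending
CPU, the snapshot needs no extra locking; the cost is one extra cpumask per
CPU and one copy per call.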

Signed-off-by: Wang YanQing <>
Acked-by: Linus Torvalds <>
Acked-by: Jan Beulich <>
Cc: Paul E. McKenney <>
Cc: Andrew Morton <>
[ Tidied up the changelog and the comment in the code. ]
Signed-off-by: Ingo Molnar <>
Signed-off-by: Herton Ronaldo Krzesinski <>
---
 kernel/smp.c |   13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)



diff --git a/kernel/smp.c b/kernel/smp.c
index d0ae5b2..f0bfdcd 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -33,6 +33,7 @@ struct call_function_data {
 	struct call_single_data	csd;
 	atomic_t		refs;
 	cpumask_var_t		cpumask;
+	cpumask_var_t		cpumask_ipi;
 };
 
 static DEFINE_PER_CPU_SHARED_ALIGNED(struct call_function_data, cfd_data);
@@ -56,6 +57,9 @@ hotplug_cfd(struct notifier_block *nfb, unsigned long action, void *hcpu)
 		if (!zalloc_cpumask_var_node(&cfd->cpumask, GFP_KERNEL,
 				cpu_to_node(cpu)))
 			return notifier_from_errno(-ENOMEM);
+		if (!zalloc_cpumask_var_node(&cfd->cpumask_ipi, GFP_KERNEL,
+				cpu_to_node(cpu)))
+			return notifier_from_errno(-ENOMEM);
 		break;
 
 #ifdef CONFIG_HOTPLUG_CPU
@@ -65,6 +69,7 @@ hotplug_cfd(struct notifier_block *nfb, unsigned long action, void *hcpu)
 	case CPU_DEAD:
 	case CPU_DEAD_FROZEN:
 		free_cpumask_var(cfd->cpumask);
+		free_cpumask_var(cfd->cpumask_ipi);
 		break;
 #endif
 	};
@@ -526,6 +531,12 @@ void smp_call_function_many(const struct cpumask *mask,
 		return;
 	}
 
+	/*
+	 * After we put an entry into the list, data->cpumask
+	 * may be cleared again when another CPU sends another IPI for
+	 * a SMP function call, so data->cpumask will be zero.
+	 */
+	cpumask_copy(data->cpumask_ipi, data->cpumask);
 	raw_spin_lock_irqsave(&call_function.lock, flags);
 	/*
 	 * Place entry at the _HEAD_ of the list, so that any cpu still
@@ -549,7 +560,7 @@ void smp_call_function_many(const struct cpumask *mask,
 	smp_mb();
 
 	/* Send a message to all CPUs in the map */
-	arch_send_call_function_ipi_mask(data->cpumask);
+	arch_send_call_function_ipi_mask(data->cpumask_ipi);

 	/* Optionally wait for the CPUs to complete */
 	if (wait)