Patchwork [3.5.y.z,extended,stable] Patch "x86, fpu: Avoid FPU lazy restore after suspend" has been added to staging queue

Submitter: Herton Ronaldo Krzesinski
Date: Dec. 10, 2012, 5:01 p.m.
Permalink: /patch/204985/
State: New


This is a note to let you know that I have just added a patch titled

    x86, fpu: Avoid FPU lazy restore after suspend

to the linux-3.5.y-queue branch of the 3.5.y.z extended stable tree 
which can be found at:;a=shortlog;h=refs/heads/linux-3.5.y-queue

If you, or anyone else, feels it should not be added to this tree, please 
reply to this email.

For more information about the 3.5.y.z tree, see



From 124606b1fa23adc5a479f2ca02265af8f969820a Mon Sep 17 00:00:00 2001
From: Vincent Palatin <>
Date: Fri, 30 Nov 2012 12:15:32 -0800
Subject: [PATCH] x86, fpu: Avoid FPU lazy restore after suspend

commit 644c154186386bb1fa6446bc5e037b9ed098db46 upstream.

When a cpu enters S3 state, the FPU state is lost.
After resuming from S3, if we try to lazy-restore the FPU for a process
running on the same CPU, this will result in a corrupted FPU context.

Ensure that "fpu_owner_task" is properly invalidated when (re-)initializing
a CPU, so nobody will try to lazy-restore a state which doesn't exist in
the hardware.
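
For reference, the lazy-restore check that this invalidation defeats looks
like this (a condensed sketch of the 3.5 code in
arch/x86/include/asm/fpu-internal.h; see the full diff below):

    /*
     * A task may skip the FPU restore on context switch only if it is
     * both the last FPU owner on this CPU and last used the FPU here.
     * Setting fpu_owner_task to NULL makes the first test fail, which
     * forces a full restore from memory.
     */
    static inline int fpu_lazy_restore(struct task_struct *new, unsigned int cpu)
    {
            return new == this_cpu_read_stable(fpu_owner_task) &&
                    cpu == new->thread.fpu.last_cpu;
    }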

Tested with a 64-bit kernel on a 4-core Ivybridge CPU with eagerfpu=off,
by doing thousands of suspend/resume cycles while 4 processes performing
FPU operations were running. Without the patch, a process is killed by a
SIGFPE after a few hundred cycles.
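
For illustration only, a minimal userspace sketch of the kind of FPU
stress load described above (the names and the convergence check are
made up here; suspend/resume cycles would be driven externally, e.g.
with rtcwake):

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* x = sqrt(4 * x) converges to exactly 4.0 and then stays there, so
     * any later deviation indicates a corrupted FPU context (with the
     * bug, a worker may also die with SIGFPE outright). */
    static void fpu_worker(int id)
    {
            double x = 1.0;
            unsigned long iter = 0;

            for (;;) {
                    x = sqrt(x * 4.0);
                    if (++iter % 1000000 == 0 && fabs(x - 4.0) > 1e-9) {
                            fprintf(stderr, "worker %d: bad FPU state: %.17g\n",
                                    id, x);
                            exit(1);
                    }
            }
    }

    int main(void)
    {
            /* 4 FPU-bound processes, matching the test description. */
            for (int i = 0; i < 4; i++)
                    if (fork() == 0)
                            fpu_worker(i);
            for (int i = 0; i < 4; i++)
                    wait(NULL);
            return 0;
    }

Build with e.g. "gcc -o fpustress fpustress.c -lm" (the file name is
arbitrary) and leave it running across suspend/resume cycles.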

Cc: Duncan Laurie <>
Cc: Olof Johansson <>
Signed-off-by: Vincent Palatin <>
Signed-off-by: H. Peter Anvin <>
Signed-off-by: Herton Ronaldo Krzesinski <>
---
 arch/x86/include/asm/fpu-internal.h |   15 +++++++++------
 arch/x86/kernel/smpboot.c           |    5 +++++
 2 files changed, 14 insertions(+), 6 deletions(-)



diff --git a/arch/x86/include/asm/fpu-internal.h b/arch/x86/include/asm/fpu-internal.h
index 75f4c6d..04cb0f8 100644
--- a/arch/x86/include/asm/fpu-internal.h
+++ b/arch/x86/include/asm/fpu-internal.h
@@ -334,14 +334,17 @@ static inline void __thread_fpu_begin(struct task_struct *tsk)
 typedef struct { int preload; } fpu_switch_t;
 
 /*
- * FIXME! We could do a totally lazy restore, but we need to
- * add a per-cpu "this was the task that last touched the FPU
- * on this CPU" variable, and the task needs to have a "I last
- * touched the FPU on this CPU" and check them.
+ * Must be run with preemption disabled: this clears the fpu_owner_task,
+ * on this CPU.
  *
- * We don't do that yet, so "fpu_lazy_restore()" always returns
- * false, but some day..
+ * This will disable any lazy FPU state restore of the current FPU state,
+ * but if the current thread owns the FPU, it will still be saved by.
  */
+static inline void __cpu_disable_lazy_restore(unsigned int cpu)
+{
+	per_cpu(fpu_owner_task, cpu) = NULL;
+}
+
 static inline int fpu_lazy_restore(struct task_struct *new, unsigned int cpu)
 {
 	return new == this_cpu_read_stable(fpu_owner_task) &&
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 7bd8a08..6977453 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -66,6 +66,8 @@ 
 #include <asm/mwait.h>
 #include <asm/apic.h>
 #include <asm/io_apic.h>
+#include <asm/i387.h>
+#include <asm/fpu-internal.h>
 #include <asm/setup.h>
 #include <asm/uv/uv.h>
 #include <linux/mc146818rtc.h>
@@ -826,6 +828,9 @@ int __cpuinit native_cpu_up(unsigned int cpu, struct task_struct *tidle)
 
 	per_cpu(cpu_state, cpu) = CPU_UP_PREPARE;
 
+	/* the FPU context is blank, nobody can own it */
+	__cpu_disable_lazy_restore(cpu);
+
 	err = do_boot_cpu(apicid, cpu, tidle);
 	if (err) {
 		pr_debug("do_boot_cpu failed %d\n", err);