Patchwork [071/150] tracing: Fix free of probe entry by calling call_rcu_sched()

Submitter Luis Henriques
Date March 26, 2013, 3:19 p.m.
Message ID <>
Permalink /patch/231329/
State New


Luis Henriques - March 26, 2013, 3:19 p.m.

-stable review patch.  If anyone has any objections, please let me know.


From: "Steven Rostedt (Red Hat)" <>

commit 740466bc89ad8bd5afcc8de220f715f62b21e365 upstream.

Because function tracing is very invasive, and can even trace
calls to rcu_read_lock(), RCU access in function tracing is done
with preempt_disable_notrace(). This requires a synchronize_sched()
for updates and not a synchronize_rcu().

Function probes (traceon, traceoff, etc.) must be freed only after a
synchronize_sched() grace period has elapsed from the point their entry
is removed from the hash, but call_rcu() is used instead. Fix this by
using call_rcu_sched().

Also fix the usage to use hlist_del_rcu() instead of hlist_del().

Cc: Paul McKenney <>
Signed-off-by: Steven Rostedt <>
Signed-off-by: Luis Henriques <>
---
 kernel/trace/ftrace.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)


diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index e5a77ba..1b6ec54 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -3002,8 +3002,8 @@  __unregister_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
-			hlist_del(&entry->node);
-			call_rcu(&entry->rcu, ftrace_free_entry_rcu);
+			hlist_del_rcu(&entry->node);
+			call_rcu_sched(&entry->rcu, ftrace_free_entry_rcu);
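For readers unfamiliar with the distinction the commit message draws, below is
a minimal, illustrative sketch of the pattern involved. The type and function
names (my_entry, my_reader, my_free_rcu, my_remove) are hypothetical and not
part of the patch; only the kernel primitives shown (preempt_disable_notrace(),
hlist_del_rcu(), call_rcu_sched()) come from the commit. Because ftrace
callbacks cannot take rcu_read_lock() (the lock itself could be traced),
readers rely on preempt_disable_notrace(); the write side must therefore wait
for a sched-RCU grace period, which call_rcu() does not guarantee but
call_rcu_sched() does.

#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/rculist.h>
#include <linux/preempt.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Hypothetical stand-in for struct ftrace_func_probe. */
struct my_entry {
	struct hlist_node	node;
	struct rcu_head		rcu;
};

/*
 * Reader side: function-trace callbacks protect the hash walk with the
 * notrace preempt primitives instead of rcu_read_lock(), so the grace
 * period that covers them is the sched flavour.
 */
static void my_reader(struct my_entry *entry)
{
	preempt_disable_notrace();
	/* ... dereference entry fields here ... */
	preempt_enable_notrace();
}

/* RCU callback: runs only after every such reader has finished. */
static void my_free_rcu(struct rcu_head *rcu)
{
	struct my_entry *entry = container_of(rcu, struct my_entry, rcu);

	kfree(entry);
}

/*
 * Updater side: unlink with hlist_del_rcu() so concurrent readers still
 * see a consistent list, then defer the free with call_rcu_sched().
 * Plain call_rcu() only waits for rcu_read_lock() sections and could let
 * the entry be freed while a preempt-disabled reader is still using it.
 */
static void my_remove(struct my_entry *entry)
{
	hlist_del_rcu(&entry->node);
	call_rcu_sched(&entry->rcu, my_free_rcu);
}

Note that in much later kernels the RCU flavours were consolidated so that
call_rcu() also waits for preempt-disabled readers; in the 3.x series this
backport targets, the distinction matters.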