Message ID | 157918586979.29301.15267608912757298568.stgit@devnote2
---|---
State | RFC
Series | tracing: kprobes: Introduce async unregistration
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 52b05ab9c323..a2c755e79be7 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -674,8 +674,6 @@ static void force_unoptimize_kprobe(struct optimized_kprobe *op)
 	lockdep_assert_cpus_held();
 	arch_unoptimize_kprobe(op);
 	op->kp.flags &= ~KPROBE_FLAG_OPTIMIZED;
-	if (kprobe_disabled(&op->kp))
-		arch_disarm_kprobe(&op->kp);
 }
 
 /* Unoptimize a kprobe if p is optimized */
Remove the redundant arch_disarm_kprobe() call from
force_unoptimize_kprobe(). This arch_disarm_kprobe() is invoked when the
kprobe is optimized but disabled, which means the kprobe (optprobe) is in
the unused (unoptimizing) state. In that case, unoptimize_kprobe() puts it
on the freeing_list and the kprobe optimizer automatically disarms it, so
this arch_disarm_kprobe() call is redundant.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
 kernel/kprobes.c | 2 --
 1 file changed, 2 deletions(-)