From patchwork Thu Mar 19 20:21:46 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kamal Mostafa
X-Patchwork-Id: 452259
From: Kamal Mostafa
To: Peter Zijlstra
Subject: [3.13.y-ckt stable] Patch "perf: Tighten (and fix) the grouping condition" has been added to staging queue
Date: Thu, 19 Mar 2015 13:21:46 -0700
Message-Id: <1426796506-19241-1-git-send-email-kamal@canonical.com>
X-Mailer: git-send-email 1.9.1
X-Extended-Stable: 3.13
Cc: Jiri Olsa, Kamal Mostafa, Arnaldo Carvalho de Melo,
 kernel-team@lists.ubuntu.com, Linus Torvalds, Ingo Molnar
X-BeenThere: kernel-team@lists.ubuntu.com
X-Mailman-Version: 2.1.14
Precedence: list
List-Id: Kernel team discussions
Errors-To: kernel-team-bounces@lists.ubuntu.com
Sender: kernel-team-bounces@lists.ubuntu.com

This is a note to let you know that I have just added a patch titled

    perf: Tighten (and fix) the grouping condition

to the linux-3.13.y-queue branch of the 3.13.y-ckt extended stable tree
which can be found at:

 http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-3.13.y-queue

This patch is scheduled to be released in version 3.13.11-ckt17.

If you, or anyone else, feels it should not be added to this tree, please
reply to this email.

For more information about the 3.13.y-ckt tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable

Thanks.
-Kamal

------

From 69a57ffdc49e4febc0a25b2206099253a965efda Mon Sep 17 00:00:00 2001
From: Peter Zijlstra
Date: Fri, 23 Jan 2015 11:19:48 +0100
Subject: perf: Tighten (and fix) the grouping condition

commit c3c87e770458aa004bd7ed3f29945ff436fd6511 upstream.

The fix from 9fc81d87420d ("perf: Fix events installation during
moving group") was incomplete in that it failed to recognise that
creating a group with events for different CPUs is semantically
broken -- they cannot be co-scheduled.

Furthermore, it leads to real breakage where, when we create an event
for CPU Y and then migrate it to form a group on CPU X, the code gets
confused where the counter is programmed -- triggered in practice
as well by me via the perf fuzzer.

Fix this by tightening the rules for creating groups. Only allow
grouping of counters that can be co-scheduled in the same context.
This means for the same task and/or the same cpu.
Fixes: 9fc81d87420d ("perf: Fix events installation during moving group")
Signed-off-by: Peter Zijlstra (Intel)
Cc: Arnaldo Carvalho de Melo
Cc: Jiri Olsa
Cc: Linus Torvalds
Link: http://lkml.kernel.org/r/20150123125834.090683288@infradead.org
Signed-off-by: Ingo Molnar
Signed-off-by: Kamal Mostafa
---
 include/linux/perf_event.h |  6 ------
 kernel/events/core.c       | 15 +++++++++++++--
 2 files changed, 13 insertions(+), 8 deletions(-)

-- 
1.9.1

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 2e069d1..01249d9 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -439,11 +439,6 @@ struct perf_event {
 #endif /* CONFIG_PERF_EVENTS */
 };
 
-enum perf_event_context_type {
-	task_context,
-	cpu_context,
-};
-
 /**
  * struct perf_event_context - event context structure
  *
@@ -451,7 +446,6 @@ enum perf_event_context_type {
  */
 struct perf_event_context {
 	struct pmu			*pmu;
-	enum perf_event_context_type	type;
 	/*
 	 * Protect the states of the events in the list,
 	 * nr_active, and the list:
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 5f06486..68105f2 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6499,7 +6499,6 @@ skip_type:
 	__perf_event_init_context(&cpuctx->ctx);
 	lockdep_set_class(&cpuctx->ctx.mutex, &cpuctx_mutex);
 	lockdep_set_class(&cpuctx->ctx.lock, &cpuctx_lock);
-	cpuctx->ctx.type = cpu_context;
 	cpuctx->ctx.pmu = pmu;
 
 	__perf_cpu_hrtimer_init(cpuctx, cpu);
@@ -7132,7 +7131,19 @@ SYSCALL_DEFINE5(perf_event_open,
 		 * task or CPU context:
 		 */
 		if (move_group) {
-			if (group_leader->ctx->type != ctx->type)
+			/*
+			 * Make sure we're both on the same task, or both
+			 * per-cpu events.
+			 */
+			if (group_leader->ctx->task != ctx->task)
+				goto err_context;
+
+			/*
+			 * Make sure we're both events for the same CPU;
+			 * grouping events for different CPUs is broken; since
+			 * you can never concurrently schedule them anyhow.
+			 */
+			if (group_leader->cpu != event->cpu)
 				goto err_context;
 		} else {
 			if (group_leader->ctx != ctx)
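
As an illustration of the rule the last hunk enforces, here is a minimal
userspace sketch (not part of the patch; the wrapper name and the choice of
counters are assumptions for the example). It opens a pure-software group
leader pinned to CPU 0 and then tries to attach a hardware event for CPU 1 to
that group. Attaching a hardware event to a pure-software group takes the
move_group path, and with the tightened check the second perf_event_open() is
expected to fail, since events for different CPUs can never be co-scheduled.
Running it typically needs root or a permissive perf_event_paranoid setting.

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

/* There is no glibc wrapper for perf_event_open(); use the raw syscall. */
static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
			    int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr;
	long leader, sibling;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);

	/* Pure-software group leader: per-cpu event (pid = -1) on CPU 0. */
	attr.type = PERF_TYPE_SOFTWARE;
	attr.config = PERF_COUNT_SW_CPU_CLOCK;
	leader = perf_event_open(&attr, -1, 0, -1, 0);
	if (leader < 0) {
		perror("leader");
		return 1;
	}

	/*
	 * Hardware sibling for a *different* CPU, grouped with the CPU 0
	 * leader.  This triggers move_group; with the new
	 * group_leader->cpu != event->cpu check the kernel should reject
	 * the group instead of programming a confused counter.
	 */
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	sibling = perf_event_open(&attr, -1, 1, (int)leader, 0);
	if (sibling < 0)
		perror("cross-CPU sibling (expected to fail)");

	return 0;
}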