From patchwork Thu Mar 11 19:07:34 2021
X-Patchwork-Submitter: Tim Gardner
X-Patchwork-Id: 1451470
From: Tim Gardner <tim.gardner@canonical.com>
To: kernel-team@lists.ubuntu.com
Subject: [PATCH] perf/core: Fix race in the perf_mmap_close() function
Date: Thu, 11 Mar 2021 12:07:34 -0700
Message-Id: <20210311190734.16430-2-tim.gardner@canonical.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210311190734.16430-1-tim.gardner@canonical.com>
References: <20210311190734.16430-1-tim.gardner@canonical.com>
List-Id: Kernel team discussions
From: Jiri Olsa

CVE-2020-14351

There's a possible race in perf_mmap_close() when checking the ring
buffer's mmap_count refcount value. The problem is that the mmap_count
check is not atomic, because we call atomic_dec() and atomic_read()
separately:

 perf_mmap_close:
 ...
  atomic_dec(&rb->mmap_count);
  ...
  if (atomic_read(&rb->mmap_count))
     goto out_put;

  <ring buffer detach>
  free_uid

 out_put:
  ring_buffer_put(rb); /* could be last */

The race can happen when two (or more) events share the same ring
buffer: both go through atomic_dec() and both then see 0 as the
refcount value in the later atomic_read(). Both will go on to execute
code that is meant to run just once.

The code that detaches the ring buffer is probably fine to execute more
than once, but the problem is the call to free_uid(), which later
manifests as related crashes and refcount warnings, like:

  refcount_t: addition on 0; use-after-free.
  ...
  RIP: 0010:refcount_warn_saturate+0x6d/0xf0
  ...
  Call Trace:
    prepare_creds+0x190/0x1e0
    copy_creds+0x35/0x172
    copy_process+0x471/0x1a80
    _do_fork+0x83/0x3a0
    __do_sys_wait4+0x83/0x90
    __do_sys_clone+0x85/0xa0
    do_syscall_64+0x5b/0x1e0
    entry_SYSCALL_64_after_hwframe+0x44/0xa9

Use an atomic decrement-and-test instead of the separate calls.

Tested-by: Michael Petlan
Signed-off-by: Jiri Olsa
Signed-off-by: Ingo Molnar
Acked-by: Peter Zijlstra
Acked-by: Namhyung Kim
Acked-by: Wade Mealing
Fixes: 9bb5d40cd93c ("perf: Fix mmap() accounting hole")
Link: https://lore.kernel.org/r/20200916115311.GE2301783@krava
(cherry picked from commit f91072ed1b7283b13ca57fcfbece5a3b92726143)
Signed-off-by: Tim Gardner
Acked-by: Stefan Bader
---
 kernel/events/core.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 6211d716c928..ba7a9879a22e 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5663,11 +5663,11 @@ static void perf_pmu_output_stop(struct perf_event *event);
 static void perf_mmap_close(struct vm_area_struct *vma)
 {
 	struct perf_event *event = vma->vm_file->private_data;
-
 	struct perf_buffer *rb = ring_buffer_get(event);
 	struct user_struct *mmap_user = rb->mmap_user;
 	int mmap_locked = rb->mmap_locked;
 	unsigned long size = perf_data_size(rb);
+	bool detach_rest = false;
 
 	if (event->pmu->event_unmapped)
 		event->pmu->event_unmapped(event, vma->vm_mm);
@@ -5698,7 +5698,8 @@ static void perf_mmap_close(struct vm_area_struct *vma)
 		mutex_unlock(&event->mmap_mutex);
 	}
 
-	atomic_dec(&rb->mmap_count);
+	if (atomic_dec_and_test(&rb->mmap_count))
+		detach_rest = true;
 
 	if (!atomic_dec_and_mutex_lock(&event->mmap_count, &event->mmap_mutex))
 		goto out_put;
@@ -5707,7 +5708,7 @@ static void perf_mmap_close(struct vm_area_struct *vma)
 	mutex_unlock(&event->mmap_mutex);
 
 	/* If there's still other mmap()s of this buffer, we're done. */
-	if (atomic_read(&rb->mmap_count))
+	if (!detach_rest)
 		goto out_put;
 
 	/*
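
As an aside for readers following the race described above: the same
lost-update window is easy to reproduce in userspace with C11 atomics.
The sketch below is illustrative only and is not part of the patch; all
names in it (mmap_count, drop_ref_racy, drop_ref_fixed, run) are
invented for the example. The racy variant mirrors the old
atomic_dec() + atomic_read() pair; the fixed variant mirrors
atomic_dec_and_test(), where the single read-modify-write returns the
pre-decrement value, so exactly one thread can observe the transition
to zero.

/* Illustrative only -- not part of the patch. Build: cc -pthread race.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int mmap_count;    /* stands in for rb->mmap_count        */
static atomic_int last_ref_runs; /* counts runs of the "last ref" path  */

/* Racy pattern: decrement, then re-read. Both threads can see 0. */
static void *drop_ref_racy(void *arg)
{
	(void)arg;
	atomic_fetch_sub(&mmap_count, 1);
	if (atomic_load(&mmap_count) == 0)
		atomic_fetch_add(&last_ref_runs, 1); /* may run twice */
	return NULL;
}

/* Fixed pattern: one atomic op, like atomic_dec_and_test(). */
static void *drop_ref_fixed(void *arg)
{
	(void)arg;
	if (atomic_fetch_sub(&mmap_count, 1) == 1)
		atomic_fetch_add(&last_ref_runs, 1); /* runs exactly once */
	return NULL;
}

static void run(void *(*fn)(void *), const char *name)
{
	pthread_t a, b;

	atomic_store(&mmap_count, 2); /* two mmap()s share the buffer */
	atomic_store(&last_ref_runs, 0);
	pthread_create(&a, NULL, fn, NULL);
	pthread_create(&b, NULL, fn, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("%s: last-ref path ran %d time(s)\n",
	       name, atomic_load(&last_ref_runs));
}

int main(void)
{
	run(drop_ref_racy,  "racy dec+read"); /* can print 2 on a bad schedule */
	run(drop_ref_fixed, "dec_and_test");  /* always prints 1 */
	return 0;
}

On an unlucky schedule the racy variant reports the last-reference path
running twice; that double execution is the userspace analogue of the
duplicated free_uid() call in the unpatched kernel.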