From patchwork Tue May 15 14:36:51 2018
X-Patchwork-Submitter: Manoj Iyer
X-Patchwork-Id: 913682
From: Manoj Iyer
To: kernel-team@lists.ubuntu.com
Subject: [PATCH] init: fix false positives in W+X checking
Date: Tue, 15 May 2018 09:36:51 -0500
Message-Id: <20180515143651.2657-2-manoj.iyer@canonical.com>
X-Mailer: git-send-email 2.17.0
In-Reply-To: <20180515143651.2657-1-manoj.iyer@canonical.com>
References: <20180515143651.2657-1-manoj.iyer@canonical.com>
List-Id: Kernel team discussions

From: Jeffrey Hugo

load_module() creates W+X mappings via __vmalloc_node_range() (from
layout_and_allocate()->move_module()->module_alloc()) by using
PAGE_KERNEL_EXEC. These mappings are later cleaned up via
"call_rcu_sched(&freeinit->rcu, do_free_init)" from do_init_module().

This is a problem because call_rcu_sched() queues work, which can be run
after debug_checkwx() is run, resulting in a race condition. If hit, the
race produces a nasty splat about insecure W+X mappings and a poor user
experience, since these are not the mappings debug_checkwx() is intended
to catch.

This issue is observed on multiple arm64 platforms, and has been
artificially triggered on an x86 platform.

Address the race by flushing the queued work before running the
arch-defined mark_rodata_ro(), which then calls debug_checkwx().
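To make the ordering concrete, here is a small userspace C sketch of the
race and of what the barrier buys us. It is an analogy only, not kernel
code: deferred_cleanup(), flush_deferred(), and the two counters are
made-up stand-ins for the do_free_init() callback queued with
call_rcu_sched(), for rcu_barrier_sched(), and for the page mappings that
debug_checkwx() scans.

/*
 * Userspace sketch of the race this patch fixes. Build: cc -pthread
 * sketch.c. All names here are illustrative, not kernel APIs.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int pending;     /* cleanups queued but not yet run        */
static int wx_mappings; /* mappings still writable and executable */

/* Stand-in for the queued do_free_init() work. */
static void *deferred_cleanup(void *unused)
{
	(void)unused;
	pthread_mutex_lock(&lock);
	wx_mappings--; /* tear down the module's W+X init mapping */
	pending--;
	pthread_cond_broadcast(&cond);
	pthread_mutex_unlock(&lock);
	return NULL;
}

/* Stand-in for rcu_barrier_sched(): wait for queued cleanups to run. */
static void flush_deferred(void)
{
	pthread_mutex_lock(&lock);
	while (pending > 0)
		pthread_cond_wait(&cond, &lock);
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	pthread_t t;

	/* "load_module()": create a W+X mapping and queue its cleanup. */
	wx_mappings = 1;
	pending = 1;
	pthread_create(&t, NULL, deferred_cleanup, NULL);

	/*
	 * Without this barrier the check below can run before the
	 * cleanup does and report a false positive -- the race the
	 * patch closes by calling rcu_barrier_sched() first.
	 */
	flush_deferred();

	/* "mark_rodata_ro() -> debug_checkwx()" */
	pthread_mutex_lock(&lock);
	if (wx_mappings)
		printf("splat: insecure W+X mapping found\n");
	else
		printf("no W+X mappings\n");
	pthread_mutex_unlock(&lock);

	pthread_join(t, NULL);
	return 0;
}

Comment out the flush_deferred() call and the checker intermittently
reports the splat, mirroring the intermittent failures seen on arm64.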
Link: http://lkml.kernel.org/r/1525103946-29526-1-git-send-email-jhugo@codeaurora.org
Fixes: e1a58320a38d ("x86/mm: Warn on W^X mappings")
BugLink: https://launchpad.net/bugs/1769696
Signed-off-by: Jeffrey Hugo
Reported-by: Timur Tabi
Reported-by: Jan Glauber
Acked-by: Kees Cook
Acked-by: Ingo Molnar
Acked-by: Will Deacon
Acked-by: Laura Abbott
Cc: Mark Rutland
Cc: Ard Biesheuvel
Cc: Catalin Marinas
Cc: Stephen Smalley
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
(cherry picked from commit ae646f0b9ca135b87bc73ff606ef996c3029780a)
Signed-off-by: Manoj Iyer
Acked-by: Kleber Sacilotto de Souza
---
 init/main.c     | 7 +++++++
 kernel/module.c | 5 +++++
 2 files changed, 12 insertions(+)

diff --git a/init/main.c b/init/main.c
index b8b121c17ff1..44f88af9b191 100644
--- a/init/main.c
+++ b/init/main.c
@@ -980,6 +980,13 @@ __setup("rodata=", set_debug_rodata);
 static void mark_readonly(void)
 {
 	if (rodata_enabled) {
+		/*
+		 * load_module() results in W+X mappings, which are cleaned up
+		 * with call_rcu_sched(). Let's make sure that queued work is
+		 * flushed so that we don't hit false positives looking for
+		 * insecure pages which are W+X.
+		 */
+		rcu_barrier_sched();
 		mark_rodata_ro();
 		rodata_test();
 	} else
diff --git a/kernel/module.c b/kernel/module.c
index 2612f760df84..0da7f3468350 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -3517,6 +3517,11 @@ static noinline int do_init_module(struct module *mod)
 	 * walking this with preempt disabled. In all the failure paths, we
 	 * call synchronize_sched(), but we don't want to slow down the success
 	 * path, so use actual RCU here.
+	 * Note that module_alloc() on most architectures creates W+X page
+	 * mappings which won't be cleaned up until do_free_init() runs. Any
+	 * code such as mark_rodata_ro() which depends on those mappings to
+	 * be cleaned up needs to sync with the queued work - ie
+	 * rcu_barrier_sched()
 	 */
 	call_rcu_sched(&freeinit->rcu, do_free_init);
 	mutex_unlock(&module_mutex);
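A note on the choice of primitive: synchronize_sched(), which the failure
paths already use, only waits for a grace period to elapse; callbacks
queued with call_rcu_sched() may still be pending afterwards.
rcu_barrier_sched() does not return until every previously queued
callback has actually been invoked, which is the guarantee
mark_readonly() needs here. The minimal module below illustrates the
distinction; demo_obj and demo_free are hypothetical names, and the
sketch assumes a kernel of this patch's era (4.x), where the
call_rcu_sched()/rcu_barrier_sched() APIs still exist.

#include <linux/module.h>
#include <linux/slab.h>
#include <linux/rcupdate.h>

struct demo_obj {
	struct rcu_head rcu;
	int payload;	/* placeholder data */
};

/* Deferred free, analogous to do_free_init(). */
static void demo_free(struct rcu_head *head)
{
	kfree(container_of(head, struct demo_obj, rcu));
}

static int __init demo_init(void)
{
	struct demo_obj *obj = kmalloc(sizeof(*obj), GFP_KERNEL);

	if (!obj)
		return -ENOMEM;

	call_rcu_sched(&obj->rcu, demo_free);	/* queue deferred work */

	/*
	 * synchronize_sched() here would only guarantee that a grace
	 * period has elapsed; demo_free() could still be pending.
	 * rcu_barrier_sched() waits until every callback queued with
	 * call_rcu_sched() has been invoked.
	 */
	rcu_barrier_sched();	/* demo_free() has now run */

	return 0;
}

static void __exit demo_exit(void)
{
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");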