From patchwork Tue Aug 20 14:42:27 2019
X-Patchwork-Submitter: Kleber Sacilotto de Souza
X-Patchwork-Id: 1150203
From: Kleber Sacilotto de Souza
To: kernel-team@lists.ubuntu.com
Subject: [SRU][Disco][PATCH 1/1] x86/kprobes: Set instruction page as executable
Date: Tue, 20 Aug 2019 16:42:27 +0200
Message-Id: <20190820144227.25380-2-kleber.souza@canonical.com>
In-Reply-To: <20190820144227.25380-1-kleber.souza@canonical.com>
References: <20190820144227.25380-1-kleber.souza@canonical.com>
List-Id: Kernel team discussions
Sender: "kernel-team" <kernel-team-bounces@lists.ubuntu.com>

From: Nadav Amit

BugLink: https://bugs.launchpad.net/bugs/1840750

Set the page as executable after allocation. This patch is a preparatory
patch for a following patch that makes module-allocated pages
non-executable.
While at it, do some small cleanup of what appears to be unnecessary
masking.

Signed-off-by: Nadav Amit
Signed-off-by: Rick Edgecombe
Signed-off-by: Peter Zijlstra (Intel)
Cc: Andy Lutomirski
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: H. Peter Anvin
Cc: Linus Torvalds
Cc: Rik van Riel
Cc: Thomas Gleixner
Link: https://lkml.kernel.org/r/20190426001143.4983-11-namit@vmware.com
Signed-off-by: Ingo Molnar
(cherry picked from commit 7298e24f904224fa79eb8fd7e0fbd78950ccf2db)
Signed-off-by: Kleber Sacilotto de Souza
Acked-by: Colin Ian King
Acked-by: Stefan Bader
---
 arch/x86/kernel/kprobes/core.c | 24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index f4b954ff5b89..3bc4cc70f1e5 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -431,8 +431,20 @@ void *alloc_insn_page(void)
 	void *page;
 
 	page = module_alloc(PAGE_SIZE);
-	if (page)
-		set_memory_ro((unsigned long)page & PAGE_MASK, 1);
+	if (!page)
+		return NULL;
+
+	/*
+	 * First make the page read-only, and only then make it executable to
+	 * prevent it from being W+X in between.
+	 */
+	set_memory_ro((unsigned long)page, 1);
+
+	/*
+	 * TODO: Once additional kernel code protection mechanisms are set, ensure
+	 * that the page was not maliciously altered and it is still zeroed.
+	 */
+	set_memory_x((unsigned long)page, 1);
 
 	return page;
 }
@@ -440,8 +452,12 @@ void *alloc_insn_page(void)
 /* Recover page to RW mode before releasing it */
 void free_insn_page(void *page)
 {
-	set_memory_nx((unsigned long)page & PAGE_MASK, 1);
-	set_memory_rw((unsigned long)page & PAGE_MASK, 1);
+	/*
+	 * First make the page non-executable, and only then make it writable to
+	 * prevent it from being W+X in between.
+	 */
+	set_memory_nx((unsigned long)page, 1);
+	set_memory_rw((unsigned long)page, 1);
 	module_memfree(page);
 }