From patchwork Wed Nov 22 13:07:31 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kleber Sacilotto de Souza
X-Patchwork-Id: 840409
From: Kleber Sacilotto de Souza
To: kernel-team@lists.ubuntu.com
Subject: [Trusty SRU][PATCH 1/1] mm: Tighten x86 /dev/mem with zeroing reads
Date: Wed, 22 Nov 2017 14:07:31 +0100
Message-Id: <20171122130731.18779-2-kleber.souza@canonical.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20171122130731.18779-1-kleber.souza@canonical.com>
References: <20171122130731.18779-1-kleber.souza@canonical.com>
List-Id: Kernel team discussions
Sender: "kernel-team" <kernel-team-bounces@lists.ubuntu.com>

From: Kees Cook
commit a4866aa812518ed1a37d8ea0c881dc946409de94 upstream.

Under CONFIG_STRICT_DEVMEM, reading System RAM through /dev/mem is
disallowed. However, on x86, the first 1MB was always allowed for BIOS
and similar things, regardless of it actually being System RAM. It was
possible for heap to end up getting allocated in low 1MB RAM, and then
read by things like x86info or dd, which would trip hardened usercopy:

usercopy: kernel memory exposure attempt detected from ffff880000090000 (dma-kmalloc-256) (4096 bytes)

This changes the x86 exception for the low 1MB by reading back zeros
for System RAM areas instead of blindly allowing them. More work is
needed to extend this to mmap, but currently mmap doesn't go through
usercopy, so hardened usercopy won't Oops the kernel.

Reported-by: Tommi Rantala
Tested-by: Tommi Rantala
Signed-off-by: Kees Cook
[bwh: Backported to 3.16: adjust context]
Signed-off-by: Ben Hutchings
CVE-2017-7889
(cherry picked from commit 3cbd86d25eeb61e57cb3367fe302c271b0c70fb2 linux-stable)
Signed-off-by: Kleber Sacilotto de Souza
Acked-by: Stefan Bader
Acked-by: Colin Ian King
---
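Illustration only, not part of the patch: a minimal userspace sketch of
the user-visible change. After this patch, reading a low-1MB page that
the e820 map marks as System RAM comes back zero-filled instead of
exposing heap contents, while non-RAM pages (BIOS/ACPI data) still
return their real contents. The 0x90000 offset matches the usercopy
report in the commit message; whether that page is System RAM on a
given box is machine-dependent, so treat this as a sketch, not a test.

  /* Read one page from the low 1MB via /dev/mem (run as root). */
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
  	unsigned char buf[4096];
  	off_t off = 0x90000;	/* inside the low 1MB; machine-dependent */
  	ssize_t n;
  	int fd, i, nonzero = 0;

  	fd = open("/dev/mem", O_RDONLY);
  	if (fd < 0) {
  		perror("open /dev/mem");
  		return 1;
  	}
  	n = pread(fd, buf, sizeof(buf), off);
  	if (n != (ssize_t)sizeof(buf)) {
  		perror("pread");
  		close(fd);
  		return 1;
  	}
  	for (i = 0; i < (int)sizeof(buf); i++)
  		nonzero |= buf[i];
  	printf("page at 0x%lx reads %s\n", (long)off,
  	       nonzero ? "non-zero data" : "as all zeros");
  	close(fd);
  	return 0;
  }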
 arch/x86/mm/init.c | 41 +++++++++++++++++++--------
 drivers/char/mem.c | 82 ++++++++++++++++++++++++++++++++++--------------------
 2 files changed, 82 insertions(+), 41 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index f97130618113..89c43a1ce82b 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -573,21 +573,40 @@ void __init init_mem_mapping(void)
  * devmem_is_allowed() checks to see if /dev/mem access to a certain address
  * is valid. The argument is a physical page number.
  *
- *
- * On x86, access has to be given to the first megabyte of ram because that area
- * contains bios code and data regions used by X and dosemu and similar apps.
- * Access has to be given to non-kernel-ram areas as well, these contain the PCI
- * mmio resources as well as potential bios/acpi data regions.
+ * On x86, access has to be given to the first megabyte of RAM because that
+ * area traditionally contains BIOS code and data regions used by X, dosemu,
+ * and similar apps. Since they map the entire memory range, the whole range
+ * must be allowed (for mapping), but any areas that would otherwise be
+ * disallowed are flagged as being "zero filled" instead of rejected.
+ * Access has to be given to non-kernel-ram areas as well, these contain the
+ * PCI mmio resources as well as potential bios/acpi data regions.
  */
 int devmem_is_allowed(unsigned long pagenr)
 {
-	if (pagenr < 256)
-		return 1;
-	if (iomem_is_exclusive(pagenr << PAGE_SHIFT))
+	if (page_is_ram(pagenr)) {
+		/*
+		 * For disallowed memory regions in the low 1MB range,
+		 * request that the page be shown as all zeros.
+		 */
+		if (pagenr < 256)
+			return 2;
+
+		return 0;
+	}
+
+	/*
+	 * This must follow RAM test, since System RAM is considered a
+	 * restricted resource under CONFIG_STRICT_IOMEM.
+	 */
+	if (iomem_is_exclusive(pagenr << PAGE_SHIFT)) {
+		/* Low 1MB bypasses iomem restrictions. */
+		if (pagenr < 256)
+			return 1;
+
 		return 0;
-	if (!page_is_ram(pagenr))
-		return 1;
-	return 0;
+	}
+
+	return 1;
 }
 
 void free_init_pages(char *what, unsigned long begin, unsigned long end)
diff --git a/drivers/char/mem.c b/drivers/char/mem.c
index 917403fe10da..5c2b7c575c9d 100644
--- a/drivers/char/mem.c
+++ b/drivers/char/mem.c
@@ -59,6 +59,10 @@ static inline int valid_mmap_phys_addr_range(unsigned long pfn, size_t size)
 #endif
 
 #ifdef CONFIG_STRICT_DEVMEM
+static inline int page_is_allowed(unsigned long pfn)
+{
+	return devmem_is_allowed(pfn);
+}
 static inline int range_is_allowed(unsigned long pfn, unsigned long size)
 {
 	u64 from = ((u64)pfn) << PAGE_SHIFT;
@@ -78,6 +82,10 @@ static inline int range_is_allowed(unsigned long pfn, unsigned long size)
 	return 1;
 }
 #else
+static inline int page_is_allowed(unsigned long pfn)
+{
+	return 1;
+}
 static inline int range_is_allowed(unsigned long pfn, unsigned long size)
 {
 	return 1;
@@ -122,23 +130,31 @@ static ssize_t read_mem(struct file *file, char __user *buf,
 	while (count > 0) {
 		unsigned long remaining;
+		int allowed;
 
 		sz = size_inside_page(p, count);
 
-		if (!range_is_allowed(p >> PAGE_SHIFT, count))
+		allowed = page_is_allowed(p >> PAGE_SHIFT);
+		if (!allowed)
 			return -EPERM;
+		if (allowed == 2) {
+			/* Show zeros for restricted memory. */
+			remaining = clear_user(buf, sz);
+		} else {
+			/*
+			 * On ia64 if a page has been mapped somewhere as
+			 * uncached, then it must also be accessed uncached
+			 * by the kernel or data corruption may occur.
+			 */
+			ptr = xlate_dev_mem_ptr(p);
+			if (!ptr)
+				return -EFAULT;
 
-		/*
-		 * On ia64 if a page has been mapped somewhere as uncached, then
-		 * it must also be accessed uncached by the kernel or data
-		 * corruption may occur.
-		 */
-		ptr = xlate_dev_mem_ptr(p);
-		if (!ptr)
-			return -EFAULT;
+			remaining = copy_to_user(buf, ptr, sz);
+
+			unxlate_dev_mem_ptr(p, ptr);
+		}
 
-		remaining = copy_to_user(buf, ptr, sz);
-		unxlate_dev_mem_ptr(p, ptr);
 		if (remaining)
 			return -EFAULT;
 
@@ -181,30 +197,36 @@ static ssize_t write_mem(struct file *file, const char __user *buf,
 #endif
 
 	while (count > 0) {
+		int allowed;
+
 		sz = size_inside_page(p, count);
 
-		if (!range_is_allowed(p >> PAGE_SHIFT, sz))
+		allowed = page_is_allowed(p >> PAGE_SHIFT);
+		if (!allowed)
 			return -EPERM;
 
-		/*
-		 * On ia64 if a page has been mapped somewhere as uncached, then
-		 * it must also be accessed uncached by the kernel or data
-		 * corruption may occur.
-		 */
-		ptr = xlate_dev_mem_ptr(p);
-		if (!ptr) {
-			if (written)
-				break;
-			return -EFAULT;
-		}
+		/* Skip actual writing when a page is marked as restricted. */
+		if (allowed == 1) {
+			/*
+			 * On ia64 if a page has been mapped somewhere as
+			 * uncached, then it must also be accessed uncached
+			 * by the kernel or data corruption may occur.
+			 */
+			ptr = xlate_dev_mem_ptr(p);
+			if (!ptr) {
+				if (written)
+					break;
+				return -EFAULT;
+			}
 
-		copied = copy_from_user(ptr, buf, sz);
-		unxlate_dev_mem_ptr(p, ptr);
-		if (copied) {
-			written += sz - copied;
-			if (written)
-				break;
-			return -EFAULT;
+			copied = copy_from_user(ptr, buf, sz);
+			unxlate_dev_mem_ptr(p, ptr);
+			if (copied) {
+				written += sz - copied;
+				if (written)
+					break;
+				return -EFAULT;
+			}
 		}
 
 		buf += sz;
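
For reviewers, the backport hinges on the tri-state contract that
devmem_is_allowed() now returns: 0 rejects the access, 1 passes it
through, and 2 tells read_mem() to zero-fill the user buffer (writes
to such pages are silently dropped, since write_mem() only acts when
the value is 1). A standalone sketch of that dispatch, where
page_is_ram_stub() is a hypothetical stand-in for the kernel's
page_is_ram() and memset() stands in for clear_user()/copy_to_user():

  #include <stdio.h>
  #include <string.h>

  /* Hypothetical stand-in for page_is_ram(): pretend pfns 0x90-0x9f
   * (around the dma-kmalloc page in the report) are System RAM. */
  static int page_is_ram_stub(unsigned long pagenr)
  {
  	return pagenr >= 0x90 && pagenr < 0xa0;
  }

  /* Mirrors the new devmem_is_allowed(): 0 = reject, 1 = allow,
   * 2 = zero-fill reads (low-1MB System RAM only). */
  static int devmem_policy(unsigned long pagenr)
  {
  	if (page_is_ram_stub(pagenr))
  		return pagenr < 256 ? 2 : 0;
  	return 1;	/* non-RAM: BIOS/ACPI/PCI mmio regions */
  }

  /* Models the read_mem() dispatch on the policy value. */
  static int model_read(unsigned long pagenr, unsigned char *dst, size_t sz)
  {
  	int allowed = devmem_policy(pagenr);

  	if (!allowed)
  		return -1;		/* read_mem() returns -EPERM */
  	if (allowed == 2)
  		memset(dst, 0, sz);	/* clear_user() in the kernel */
  	else
  		memset(dst, 0xa5, sz);	/* stands in for copy_to_user() */
  	return 0;
  }

  int main(void)
  {
  	unsigned char buf[8];

  	model_read(0x90, buf, sizeof(buf));	/* low-1MB RAM: zeros */
  	printf("pfn 0x90: first byte %#x\n", buf[0]);
  	model_read(0x05, buf, sizeof(buf));	/* low-1MB non-RAM: real data */
  	printf("pfn 0x05: first byte %#x\n", buf[0]);
  	return 0;
  }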