{"id":816497,"url":"http://patchwork.ozlabs.org/api/patches/816497/?format=json","web_url":"http://patchwork.ozlabs.org/project/netdev/patch/1505940337-79069-3-git-send-email-keescook@chromium.org/","project":{"id":7,"url":"http://patchwork.ozlabs.org/api/projects/7/?format=json","name":"Linux network development","link_name":"netdev","list_id":"netdev.vger.kernel.org","list_email":"netdev@vger.kernel.org","web_url":null,"scm_url":null,"webscm_url":null,"list_archive_url":"","list_archive_url_format":"","commit_url_format":""},"msgid":"<1505940337-79069-3-git-send-email-keescook@chromium.org>","list_archive_url":null,"date":"2017-09-20T20:45:08","name":"[v3,02/31] usercopy: Enforce slab cache usercopy region boundaries","commit_ref":null,"pull_url":null,"state":"not-applicable","archived":true,"hash":"8f0a27e6a2f3895b1a202763b27e2747844bb71e","submitter":{"id":10641,"url":"http://patchwork.ozlabs.org/api/people/10641/?format=json","name":"Kees Cook","email":"keescook@chromium.org"},"delegate":{"id":34,"url":"http://patchwork.ozlabs.org/api/users/34/?format=json","username":"davem","first_name":"David","last_name":"Miller","email":"davem@davemloft.net"},"mbox":"http://patchwork.ozlabs.org/project/netdev/patch/1505940337-79069-3-git-send-email-keescook@chromium.org/mbox/","series":[{"id":4231,"url":"http://patchwork.ozlabs.org/api/series/4231/?format=json","web_url":"http://patchwork.ozlabs.org/project/netdev/list/?series=4231","date":"2017-09-20T20:45:22","name":"Hardened usercopy whitelisting","version":3,"mbox":"http://patchwork.ozlabs.org/series/4231/mbox/"}],"comments":"http://patchwork.ozlabs.org/api/patches/816497/comments/","check":"pending","checks":"http://patchwork.ozlabs.org/api/patches/816497/checks/","tags":{},"related":[],"headers":{"Return-Path":"<netdev-owner@vger.kernel.org>","X-Original-To":"patchwork-incoming@ozlabs.org","Delivered-To":"patchwork-incoming@ozlabs.org","Authentication-Results":["ozlabs.org;\n\tspf=none (mailfrom) 
smtp.mailfrom=vger.kernel.org\n\t(client-ip=209.132.180.67; helo=vger.kernel.org;\n\tenvelope-from=netdev-owner@vger.kernel.org;\n\treceiver=<UNKNOWN>)","ozlabs.org; dkim=pass (1024-bit key;\n\tunprotected) header.d=chromium.org header.i=@chromium.org\n\theader.b=\"GonCaJgY\"; dkim-atps=neutral"],"Received":["from vger.kernel.org (vger.kernel.org [209.132.180.67])\n\tby ozlabs.org (Postfix) with ESMTP id 3xyBkC1JJHz9s83\n\tfor <patchwork-incoming@ozlabs.org>;\n\tThu, 21 Sep 2017 06:51:46 +1000 (AEST)","(majordomo@vger.kernel.org) by vger.kernel.org via listexpand\n\tid S1751870AbdITUv2 (ORCPT <rfc822;patchwork-incoming@ozlabs.org>);\n\tWed, 20 Sep 2017 16:51:28 -0400","from mail-pf0-f173.google.com ([209.85.192.173]:45381 \"EHLO\n\tmail-pf0-f173.google.com\" rhost-flags-OK-OK-OK-OK) by vger.kernel.org\n\twith ESMTP id S1751804AbdITUqB (ORCPT\n\t<rfc822;netdev@vger.kernel.org>); Wed, 20 Sep 2017 16:46:01 -0400","by mail-pf0-f173.google.com with SMTP id z84so2116673pfi.2\n\tfor <netdev@vger.kernel.org>; Wed, 20 Sep 2017 13:46:01 -0700 (PDT)","from www.outflux.net\n\t(173-164-112-133-Oregon.hfc.comcastbusiness.net. 
[173.164.112.133])\n\tby smtp.gmail.com with ESMTPSA id\n\tp77sm9841509pfa.92.2017.09.20.13.45.53\n\t(version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);\n\tWed, 20 Sep 2017 13:45:54 -0700 (PDT)"],"DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=chromium.org; s=google;\n\th=from:to:cc:subject:date:message-id:in-reply-to:references;\n\tbh=06/UKgH88HtDSL3N3wOyzrmsupGpyFbyWGxiyU/bLMQ=;\n\tb=GonCaJgYOv5X4N8JwHc3SAHw5vNPjq9BMc73m+3JcusDd9D+EKjFYBuNE8kctNr+53\n\tbxDi1jagAnOU0zne4i8GWQVfAzLIcZ07rJ2G7BHYGnSEW7h+QAytMsXuqLgHYW5vmpSO\n\tmGU9duzmZgV6ag5AXCOPuv6rDeKBzkv0AH2ww=","X-Google-DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=1e100.net; s=20161025;\n\th=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to\n\t:references;\n\tbh=06/UKgH88HtDSL3N3wOyzrmsupGpyFbyWGxiyU/bLMQ=;\n\tb=hVflr1mF1sucYoCuq/xPP1iGBwuW0ZPoNcaWK1Kq4FVAw54xd8cetKyyjMgWnERlMX\n\t+K0BYkJrst6mZnbZU/MLAxRHTr6s8S4V8unA2RN5XrvGYCdZmGKYpi33H+d3x9Hbl2Ab\n\taBOcR9CwVZwnII00g9l7wTwkuLb+gPUFkkHS11I0YMXUSQmKAc7dXwuRdUsP/FvuQ+Bd\n\twPr9IUuiGk1oM1BmexYjSxBbW6b/6E1hNSE5LOSAMK07qU1EVmbxLZ/Jf4Eipp1MfWOk\n\twbJTL3Z9If1/h/e9OdKqgWU3J6/rp+0SqMbdNs28GIU1+fVCY99XwVNx2G6I34N+3Mhp\n\tRWsg==","X-Gm-Message-State":"AHPjjUhzArsokvOFUKffBTeZd2WSCe5fdP5gUBYh3DFtVj77HAzt8+9p\n\tankhMJGCJMbWRK3TR3BoQThAww==","X-Google-Smtp-Source":"AOwi7QC0I949laQ2Cy5BwqZCazQDWaMqClbP9hpbbm/tO/xKGHMUoQWl09Fh5qPPHZc6a69b7Q9eXA==","X-Received":"by 10.84.232.135 with SMTP id i7mr3358615plk.104.1505940360846; \n\tWed, 20 Sep 2017 13:46:00 -0700 (PDT)","From":"Kees Cook <keescook@chromium.org>","To":"linux-kernel@vger.kernel.org","Cc":"Kees Cook <keescook@chromium.org>, David Windsor <dave@nullcore.net>,\n\tChristoph Lameter <cl@linux.com>, Pekka Enberg <penberg@kernel.org>,\n\tDavid Rientjes <rientjes@google.com>,\n\tJoonsoo Kim <iamjoonsoo.kim@lge.com>,\n\tAndrew Morton <akpm@linux-foundation.org>,\n\tLaura Abbott <labbott@redhat.com>, Ingo Molnar <mingo@kernel.org>,\n\tMark Rutland 
<mark.rutland@arm.com>, linux-mm@kvack.org,\n\tlinux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,\n\tnetdev@vger.kernel.org, kernel-hardening@lists.openwall.com","Subject":"[PATCH v3 02/31] usercopy: Enforce slab cache usercopy region\n\tboundaries","Date":"Wed, 20 Sep 2017 13:45:08 -0700","Message-Id":"<1505940337-79069-3-git-send-email-keescook@chromium.org>","X-Mailer":"git-send-email 2.7.4","In-Reply-To":"<1505940337-79069-1-git-send-email-keescook@chromium.org>","References":"<1505940337-79069-1-git-send-email-keescook@chromium.org>","Sender":"netdev-owner@vger.kernel.org","Precedence":"bulk","List-ID":"<netdev.vger.kernel.org>","X-Mailing-List":"netdev@vger.kernel.org"},"content":"From: David Windsor <dave@nullcore.net>\n\nThis patch adds the enforcement component of usercopy cache whitelisting,\nand is modified from Brad Spengler/PaX Team's PAX_USERCOPY whitelisting\ncode in the last public patch of grsecurity/PaX based on my understanding\nof the code. Changes or omissions from the original code are mine and\ndon't reflect the original grsecurity/PaX code.\n\nThe SLAB and SLUB allocators are modified to deny all copy operations\nin which the kernel heap memory being modified falls outside of the cache's\ndefined usercopy region.\n\nSigned-off-by: David Windsor <dave@nullcore.net>\n[kees: adjust commit log and comments]\nCc: Christoph Lameter <cl@linux.com>\nCc: Pekka Enberg <penberg@kernel.org>\nCc: David Rientjes <rientjes@google.com>\nCc: Joonsoo Kim <iamjoonsoo.kim@lge.com>\nCc: Andrew Morton <akpm@linux-foundation.org>\nCc: Laura Abbott <labbott@redhat.com>\nCc: Ingo Molnar <mingo@kernel.org>\nCc: Mark Rutland <mark.rutland@arm.com>\nCc: linux-mm@kvack.org\nCc: linux-xfs@vger.kernel.org\nSigned-off-by: Kees Cook <keescook@chromium.org>\n---\n mm/slab.c     | 16 +++++++++++-----\n mm/slub.c     | 18 +++++++++++-------\n mm/usercopy.c | 12 ++++++++++++\n 3 files changed, 34 insertions(+), 12 deletions(-)","diff":"diff --git a/mm/slab.c 
b/mm/slab.c\nindex 87b6e5e0cdaf..df268999cf02 100644\n--- a/mm/slab.c\n+++ b/mm/slab.c\n@@ -4408,7 +4408,9 @@ module_init(slab_proc_init);\n \n #ifdef CONFIG_HARDENED_USERCOPY\n /*\n- * Rejects objects that are incorrectly sized.\n+ * Rejects incorrectly sized objects and objects that are to be copied\n+ * to/from userspace but do not fall entirely within the containing slab\n+ * cache's usercopy region.\n  *\n  * Returns NULL if check passes, otherwise const char * to name of cache\n  * to indicate an error.\n@@ -4428,11 +4430,15 @@ const char *__check_heap_object(const void *ptr, unsigned long n,\n \t/* Find offset within object. */\n \toffset = ptr - index_to_obj(cachep, page, objnr) - obj_offset(cachep);\n \n-\t/* Allow address range falling entirely within object size. */\n-\tif (offset <= cachep->object_size && n <= cachep->object_size - offset)\n-\t\treturn NULL;\n+\t/* Make sure object falls entirely within cache's usercopy region. */\n+\tif (offset < cachep->useroffset)\n+\t\treturn cachep->name;\n+\tif (offset - cachep->useroffset > cachep->usersize)\n+\t\treturn cachep->name;\n+\tif (n > cachep->useroffset - offset + cachep->usersize)\n+\t\treturn cachep->name;\n \n-\treturn cachep->name;\n+\treturn NULL;\n }\n #endif /* CONFIG_HARDENED_USERCOPY */\n \ndiff --git a/mm/slub.c b/mm/slub.c\nindex fae637726c44..bbf73024be3a 100644\n--- a/mm/slub.c\n+++ b/mm/slub.c\n@@ -3833,7 +3833,9 @@ EXPORT_SYMBOL(__kmalloc_node);\n \n #ifdef CONFIG_HARDENED_USERCOPY\n /*\n- * Rejects objects that are incorrectly sized.\n+ * Rejects incorrectly sized objects and objects that are to be copied\n+ * to/from userspace but do not fall entirely within the containing slab\n+ * cache's usercopy region.\n  *\n  * Returns NULL if check passes, otherwise const char * to name of cache\n  * to indicate an error.\n@@ -3843,11 +3845,9 @@ const char *__check_heap_object(const void *ptr, unsigned long n,\n {\n \tstruct kmem_cache *s;\n \tunsigned long offset;\n-\tsize_t object_size;\n \n 
\t/* Find object and usable object size. */\n \ts = page->slab_cache;\n-\tobject_size = slab_ksize(s);\n \n \t/* Reject impossible pointers. */\n \tif (ptr < page_address(page))\n@@ -3863,11 +3863,15 @@ const char *__check_heap_object(const void *ptr, unsigned long n,\n \t\toffset -= s->red_left_pad;\n \t}\n \n-\t/* Allow address range falling entirely within object size. */\n-\tif (offset <= object_size && n <= object_size - offset)\n-\t\treturn NULL;\n+\t/* Make sure object falls entirely within cache's usercopy region. */\n+\tif (offset < s->useroffset)\n+\t\treturn s->name;\n+\tif (offset - s->useroffset > s->usersize)\n+\t\treturn s->name;\n+\tif (n > s->useroffset - offset + s->usersize)\n+\t\treturn s->name;\n \n-\treturn s->name;\n+\treturn NULL;\n }\n #endif /* CONFIG_HARDENED_USERCOPY */\n \ndiff --git a/mm/usercopy.c b/mm/usercopy.c\nindex a9852b24715d..cbffde670c49 100644\n--- a/mm/usercopy.c\n+++ b/mm/usercopy.c\n@@ -58,6 +58,18 @@ static noinline int check_stack_object(const void *obj, unsigned long len)\n \treturn GOOD_STACK;\n }\n \n+/*\n+ * If this function is reached, then CONFIG_HARDENED_USERCOPY has found an\n+ * unexpected state during a copy_from_user() or copy_to_user() call.\n+ * There are several checks being performed on the buffer by the\n+ * __check_object_size() function. Normal stack buffer usage should never\n+ * trip the checks, and kernel text addressing will always trip the check.\n+ * For cache objects, it is checking that only the whitelisted range of\n+ * bytes for a given cache is being accessed (via the cache's usersize and\n+ * useroffset fields). To adjust a cache whitelist, use the usercopy-aware\n+ * kmem_cache_create_usercopy() function to create the cache (and\n+ * carefully audit the whitelist range).\n+ */\n static void report_usercopy(const void *ptr, unsigned long len,\n \t\t\t    bool to_user, const char *type)\n {\n","prefixes":["v3","02/31"]}
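The heart of the diff above is the three-condition whitelist check repeated in both `mm/slab.c` and `mm/slub.c`: a copy of `n` bytes at `offset` within an object is allowed only if it falls entirely inside the cache's `[useroffset, useroffset + usersize)` region. Below is a minimal userspace sketch of that logic, lifted out of the kernel context; the `struct cache` type, `check_region()` name, and `demo` instance are hypothetical stand-ins for `struct kmem_cache` and `__check_heap_object()`, not kernel API.

```c
#include <stddef.h>

/* Hypothetical stand-in for the relevant fields of struct kmem_cache. */
struct cache {
	const char *name;   /* cache name, returned on a rejected copy */
	size_t useroffset;  /* start of the usercopy region within an object */
	size_t usersize;    /* length of the usercopy region */
};

/*
 * Mirrors the patch's check: returns NULL if [offset, offset + n) lies
 * entirely within the cache's usercopy region, otherwise the cache name
 * to indicate an error (same convention as __check_heap_object()).
 */
static const char *check_region(const struct cache *c,
				size_t offset, size_t n)
{
	/* Copy starts before the whitelisted region. */
	if (offset < c->useroffset)
		return c->name;
	/* Copy starts past the end of the whitelisted region. */
	if (offset - c->useroffset > c->usersize)
		return c->name;
	/* Copy length overruns the remaining whitelisted bytes;
	 * equivalent to n > usersize - (offset - useroffset). */
	if (n > c->useroffset - offset + c->usersize)
		return c->name;

	return NULL;
}

/* Example cache whitelisting bytes [8, 24) of each object. */
static const struct cache demo = { "demo-cache", 8, 16 };
```

With `demo`, a copy of 16 bytes at offset 8 passes (it spans the region exactly), while a copy at offset 4, or a 16-byte copy at offset 16, is rejected. Note the third condition relies on the same unsigned wraparound arithmetic as the kernel code: since the first two checks guarantee `offset >= useroffset`, `useroffset - offset + usersize` evaluates to the bytes remaining in the region.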