{"id":816499,"url":"http://patchwork.ozlabs.org/api/patches/816499/?format=json","web_url":"http://patchwork.ozlabs.org/project/netdev/patch/1505940337-79069-2-git-send-email-keescook@chromium.org/","project":{"id":7,"url":"http://patchwork.ozlabs.org/api/projects/7/?format=json","name":"Linux network development","link_name":"netdev","list_id":"netdev.vger.kernel.org","list_email":"netdev@vger.kernel.org","web_url":null,"scm_url":null,"webscm_url":null,"list_archive_url":"","list_archive_url_format":"","commit_url_format":""},"msgid":"<1505940337-79069-2-git-send-email-keescook@chromium.org>","list_archive_url":null,"date":"2017-09-20T20:45:07","name":"[v3,01/31] usercopy: Prepare for usercopy whitelisting","commit_ref":null,"pull_url":null,"state":"not-applicable","archived":true,"hash":"d01d5b059c4aaec3d6be3da2ec3049ac44efeb8a","submitter":{"id":10641,"url":"http://patchwork.ozlabs.org/api/people/10641/?format=json","name":"Kees Cook","email":"keescook@chromium.org"},"delegate":{"id":34,"url":"http://patchwork.ozlabs.org/api/users/34/?format=json","username":"davem","first_name":"David","last_name":"Miller","email":"davem@davemloft.net"},"mbox":"http://patchwork.ozlabs.org/project/netdev/patch/1505940337-79069-2-git-send-email-keescook@chromium.org/mbox/","series":[{"id":4231,"url":"http://patchwork.ozlabs.org/api/series/4231/?format=json","web_url":"http://patchwork.ozlabs.org/project/netdev/list/?series=4231","date":"2017-09-20T20:45:22","name":"Hardened usercopy whitelisting","version":3,"mbox":"http://patchwork.ozlabs.org/series/4231/mbox/"}],"comments":"http://patchwork.ozlabs.org/api/patches/816499/comments/","check":"pending","checks":"http://patchwork.ozlabs.org/api/patches/816499/checks/","tags":{},"related":[],"headers":{"Return-Path":"<netdev-owner@vger.kernel.org>","X-Original-To":"patchwork-incoming@ozlabs.org","Delivered-To":"patchwork-incoming@ozlabs.org","Authentication-Results":["ozlabs.org;\n\tspf=none (mailfrom) 
smtp.mailfrom=vger.kernel.org\n\t(client-ip=209.132.180.67; helo=vger.kernel.org;\n\tenvelope-from=netdev-owner@vger.kernel.org;\n\treceiver=<UNKNOWN>)","ozlabs.org; dkim=pass (1024-bit key;\n\tunprotected) header.d=chromium.org header.i=@chromium.org\n\theader.b=\"eC2CuSgK\"; dkim-atps=neutral"],"Received":["from vger.kernel.org (vger.kernel.org [209.132.180.67])\n\tby ozlabs.org (Postfix) with ESMTP id 3xyBkj4Lxcz9s83\n\tfor <patchwork-incoming@ozlabs.org>;\n\tThu, 21 Sep 2017 06:52:13 +1000 (AEST)","(majordomo@vger.kernel.org) by vger.kernel.org via listexpand\n\tid S1751462AbdITUwA (ORCPT <rfc822;patchwork-incoming@ozlabs.org>);\n\tWed, 20 Sep 2017 16:52:00 -0400","from mail-pg0-f51.google.com ([74.125.83.51]:49732 \"EHLO\n\tmail-pg0-f51.google.com\" rhost-flags-OK-OK-OK-OK) by vger.kernel.org\n\twith ESMTP id S1751758AbdITUqA (ORCPT\n\t<rfc822;netdev@vger.kernel.org>); Wed, 20 Sep 2017 16:46:00 -0400","by mail-pg0-f51.google.com with SMTP id m30so2333170pgn.6\n\tfor <netdev@vger.kernel.org>; Wed, 20 Sep 2017 13:46:00 -0700 (PDT)","from www.outflux.net\n\t(173-164-112-133-Oregon.hfc.comcastbusiness.net. 
[173.164.112.133])\n\tby smtp.gmail.com with ESMTPSA id\n\ts27sm10347503pgo.59.2017.09.20.13.45.53\n\t(version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);\n\tWed, 20 Sep 2017 13:45:54 -0700 (PDT)"],"DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=chromium.org; s=google;\n\th=from:to:cc:subject:date:message-id:in-reply-to:references;\n\tbh=rVOIVjH905jykgte+wozdsu33pspW6N2DPcU8sibmrU=;\n\tb=eC2CuSgKOSOoekqraHMPdf1SVD8J653oEL8+Xbku/nPUaWvs12ViZy/I0CZvJLXH7J\n\tbpPWgRE8d53kN82EER0H41O2AtuTsDMAUSB7hXguVEOILIgMeLDykIDd3OQg4mkNP/c+\n\tQtOqxkayvM/wSKUILVLfg70qTSsEY97NYA6T8=","X-Google-DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=1e100.net; s=20161025;\n\th=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to\n\t:references;\n\tbh=rVOIVjH905jykgte+wozdsu33pspW6N2DPcU8sibmrU=;\n\tb=t7+Fnh60smbwq3jtcYDmtKC0970DJw4xnRhIkzujsggbgC/0tUAPWVM7zbnEBGPE04\n\tgfqZ4i5ENqCr62R+gpw4xFjQZnlHxXMqMgqHBwhtTu5OynFpa7XJeLIIHfpoJ8xanqGO\n\t7QxzjOI13r7lVaFZvKCJD8bG70KCkEa7D6o1yH92l3WShpuxK65FjkZHT2Mry5dMzrko\n\t0uia0ZzGSdZvbxDnhhdnstsaZ3ER5FOfoVVFipAb4gaKwQfKq2PZ/4qndDG0xLEFKFk4\n\tBUXR3FmTGsYTq1Ol3MGedaTWld3TJcLX5L5igxB2M4K/53gNMPmAH8cJc8RwbSmKZGA9\n\tMzrQ==","X-Gm-Message-State":"AHPjjUj9M1FFi1N5UBe7YL/tg5j5SoS9ScoO2rBkOePnwX7gpLmLD9iu\n\tHaTTWpPIn1CAzKbzOG9OYYuZrg==","X-Google-Smtp-Source":"AOwi7QAHR0fuYSsbMq6GOLq5kgY0t4v6Kcjo/aIrV6w6OWHqPa5IT6OXJkZEl8lIl5n51y/wgnbUZQ==","X-Received":"by 10.84.245.2 with SMTP id i2mr2406198pll.377.1505940359916;\n\tWed, 20 Sep 2017 13:45:59 -0700 (PDT)","From":"Kees Cook <keescook@chromium.org>","To":"linux-kernel@vger.kernel.org","Cc":"Kees Cook <keescook@chromium.org>, David Windsor <dave@nullcore.net>,\n\tChristoph Lameter <cl@linux.com>, Pekka Enberg <penberg@kernel.org>,\n\tDavid Rientjes <rientjes@google.com>,\n\tJoonsoo Kim <iamjoonsoo.kim@lge.com>,\n\tAndrew Morton <akpm@linux-foundation.org>, linux-mm@kvack.org,\n\tlinux-xfs@vger.kernel.org, 
linux-fsdevel@vger.kernel.org,\n\tnetdev@vger.kernel.org, kernel-hardening@lists.openwall.com","Subject":"[PATCH v3 01/31] usercopy: Prepare for usercopy whitelisting","Date":"Wed, 20 Sep 2017 13:45:07 -0700","Message-Id":"<1505940337-79069-2-git-send-email-keescook@chromium.org>","X-Mailer":"git-send-email 2.7.4","In-Reply-To":"<1505940337-79069-1-git-send-email-keescook@chromium.org>","References":"<1505940337-79069-1-git-send-email-keescook@chromium.org>","Sender":"netdev-owner@vger.kernel.org","Precedence":"bulk","List-ID":"<netdev.vger.kernel.org>","X-Mailing-List":"netdev@vger.kernel.org"},"content":"From: David Windsor <dave@nullcore.net>\n\nThis patch prepares the slab allocator to handle caches having annotations\n(useroffset and usersize) defining usercopy regions.\n\nThis patch is modified from Brad Spengler/PaX Team's PAX_USERCOPY\nwhitelisting code in the last public patch of grsecurity/PaX based on\nmy understanding of the code. Changes or omissions from the original\ncode are mine and don't reflect the original grsecurity/PaX code.\n\nCurrently, hardened usercopy performs dynamic bounds checking on slab\ncache objects. This is good, but still leaves a lot of kernel memory\navailable to be copied to/from userspace in the face of bugs. To further\nrestrict what memory is available for copying, this creates a way to\nwhitelist specific areas of a given slab cache object for copying to/from\nuserspace, allowing much finer granularity of access control. Slab caches\nthat are never exposed to userspace can declare no whitelist for their\nobjects, thereby keeping them unavailable to userspace via dynamic copy\noperations. (Note, an implicit form of whitelisting is the use of constant\nsizes in usercopy operations and get_user()/put_user(); these bypass\nhardened usercopy checks since these sizes cannot change at runtime.)\n\nTo support this whitelist annotation, usercopy region offset and size\nmembers are added to struct kmem_cache. 
The slab allocator receives a\nnew function, kmem_cache_create_usercopy(), that creates a new cache\nwith a usercopy region defined, suitable for declaring spans of fields\nwithin the objects that get copied to/from userspace.\n\nIn this patch, the default kmem_cache_create() marks the entire allocation\nas whitelisted, leaving it semantically unchanged. Once all fine-grained\nwhitelists have been added (in subsequent patches), this will be changed\nto a usersize of 0, making caches created with kmem_cache_create() not\ncopyable to/from userspace.\n\nAfter the entire usercopy whitelist series is applied, less than 15%\nof the slab cache memory remains exposed to potential usercopy bugs\nafter a fresh boot:\n\nTotal Slab Memory:           48074720\nUsercopyable Memory:          6367532  13.2%\n         task_struct                    0.2%         4480/1630720\n         RAW                            0.3%            300/96000\n         RAWv6                          2.1%           1408/64768\n         ext4_inode_cache               3.0%       269760/8740224\n         dentry                        11.1%       585984/5273856\n         mm_struct                     29.1%         54912/188448\n         kmalloc-8                    100.0%          24576/24576\n         kmalloc-16                   100.0%          28672/28672\n         kmalloc-32                   100.0%          81920/81920\n         kmalloc-192                  100.0%          96768/96768\n         kmalloc-128                  100.0%        143360/143360\n         names_cache                  100.0%        163840/163840\n         kmalloc-64                   100.0%        167936/167936\n         kmalloc-256                  100.0%        339968/339968\n         kmalloc-512                  100.0%        350720/350720\n         kmalloc-96                   100.0%        455616/455616\n         kmalloc-8192                 100.0%        655360/655360\n         kmalloc-1024                 100.0%        
812032/812032\n         kmalloc-4096                 100.0%        819200/819200\n         kmalloc-2048                 100.0%      1310720/1310720\n\nAfter some kernel build workloads, the percentage (mainly driven by\ndentry and inode caches expanding) drops under 10%:\n\nTotal Slab Memory:           95516184\nUsercopyable Memory:          8497452   8.8%\n         task_struct                    0.2%         4000/1456000\n         RAW                            0.3%            300/96000\n         RAWv6                          2.1%           1408/64768\n         ext4_inode_cache               3.0%     1217280/39439872\n         dentry                        11.1%     1623200/14608800\n         mm_struct                     29.1%         73216/251264\n         kmalloc-8                    100.0%          24576/24576\n         kmalloc-16                   100.0%          28672/28672\n         kmalloc-32                   100.0%          94208/94208\n         kmalloc-192                  100.0%          96768/96768\n         kmalloc-128                  100.0%        143360/143360\n         names_cache                  100.0%        163840/163840\n         kmalloc-64                   100.0%        245760/245760\n         kmalloc-256                  100.0%        339968/339968\n         kmalloc-512                  100.0%        350720/350720\n         kmalloc-96                   100.0%        563520/563520\n         kmalloc-8192                 100.0%        655360/655360\n         kmalloc-1024                 100.0%        794624/794624\n         kmalloc-4096                 100.0%        819200/819200\n         kmalloc-2048                 100.0%      1257472/1257472\n\nSigned-off-by: David Windsor <dave@nullcore.net>\n[kees: adjust commit log, split out a few extra kmalloc hunks]\n[kees: add field names to function declarations]\n[kees: convert BUGs to WARNs and fail closed]\n[kees: add attack surface reduction analysis to commit log]\nCc: Christoph Lameter 
<cl@linux.com>\nCc: Pekka Enberg <penberg@kernel.org>\nCc: David Rientjes <rientjes@google.com>\nCc: Joonsoo Kim <iamjoonsoo.kim@lge.com>\nCc: Andrew Morton <akpm@linux-foundation.org>\nCc: linux-mm@kvack.org\nCc: linux-xfs@vger.kernel.org\nSigned-off-by: Kees Cook <keescook@chromium.org>\n---\n include/linux/slab.h     | 27 +++++++++++++++++++++------\n include/linux/slab_def.h |  3 +++\n include/linux/slub_def.h |  3 +++\n include/linux/stddef.h   |  2 ++\n mm/slab.c                |  2 +-\n mm/slab.h                |  5 ++++-\n mm/slab_common.c         | 46 ++++++++++++++++++++++++++++++++++++++--------\n mm/slub.c                | 11 +++++++++--\n 8 files changed, 81 insertions(+), 18 deletions(-)","diff":"diff --git a/include/linux/slab.h b/include/linux/slab.h\nindex 41473df6dfb0..8b6cb384f8b6 100644\n--- a/include/linux/slab.h\n+++ b/include/linux/slab.h\n@@ -126,9 +126,13 @@ struct mem_cgroup;\n void __init kmem_cache_init(void);\n bool slab_is_available(void);\n \n-struct kmem_cache *kmem_cache_create(const char *, size_t, size_t,\n-\t\t\tunsigned long,\n-\t\t\tvoid (*)(void *));\n+struct kmem_cache *kmem_cache_create(const char *name, size_t size,\n+\t\t\tsize_t align, unsigned long flags,\n+\t\t\tvoid (*ctor)(void *));\n+struct kmem_cache *kmem_cache_create_usercopy(const char *name,\n+\t\t\tsize_t size, size_t align, unsigned long flags,\n+\t\t\tsize_t useroffset, size_t usersize,\n+\t\t\tvoid (*ctor)(void *));\n void kmem_cache_destroy(struct kmem_cache *);\n int kmem_cache_shrink(struct kmem_cache *);\n \n@@ -144,9 +148,20 @@ void memcg_destroy_kmem_caches(struct mem_cgroup *);\n  * f.e. 
add ____cacheline_aligned_in_smp to the struct declaration\n  * then the objects will be properly aligned in SMP configurations.\n  */\n-#define KMEM_CACHE(__struct, __flags) kmem_cache_create(#__struct,\\\n-\t\tsizeof(struct __struct), __alignof__(struct __struct),\\\n-\t\t(__flags), NULL)\n+#define KMEM_CACHE(__struct, __flags)\t\t\t\t\t\\\n+\t\tkmem_cache_create(#__struct, sizeof(struct __struct),\t\\\n+\t\t\t__alignof__(struct __struct), (__flags), NULL)\n+\n+/*\n+ * To whitelist a single field for copying to/from userspace, use this\n+ * macro instead of KMEM_CACHE() above.\n+ */\n+#define KMEM_CACHE_USERCOPY(__struct, __flags, __field)\t\t\t\\\n+\t\tkmem_cache_create_usercopy(#__struct,\t\t\t\\\n+\t\t\tsizeof(struct __struct),\t\t\t\\\n+\t\t\t__alignof__(struct __struct), (__flags),\t\\\n+\t\t\toffsetof(struct __struct, __field),\t\t\\\n+\t\t\tsizeof_field(struct __struct, __field), NULL)\n \n /*\n  * Common kmalloc functions provided by all allocators\ndiff --git a/include/linux/slab_def.h b/include/linux/slab_def.h\nindex 4ad2c5a26399..03eef0df8648 100644\n--- a/include/linux/slab_def.h\n+++ b/include/linux/slab_def.h\n@@ -84,6 +84,9 @@ struct kmem_cache {\n \tunsigned int *random_seq;\n #endif\n \n+\tsize_t useroffset;\t\t/* Usercopy region offset */\n+\tsize_t usersize;\t\t/* Usercopy region size */\n+\n \tstruct kmem_cache_node *node[MAX_NUMNODES];\n };\n \ndiff --git a/include/linux/slub_def.h b/include/linux/slub_def.h\nindex 0783b622311e..62866a1a767c 100644\n--- a/include/linux/slub_def.h\n+++ b/include/linux/slub_def.h\n@@ -134,6 +134,9 @@ struct kmem_cache {\n \tstruct kasan_cache kasan_info;\n #endif\n \n+\tsize_t useroffset;\t\t/* Usercopy region offset */\n+\tsize_t usersize;\t\t/* Usercopy region size */\n+\n \tstruct kmem_cache_node *node[MAX_NUMNODES];\n };\n \ndiff --git a/include/linux/stddef.h b/include/linux/stddef.h\nindex 9c61c7cda936..f00355086fb2 100644\n--- a/include/linux/stddef.h\n+++ b/include/linux/stddef.h\n@@ -18,6 +18,8 @@ 
enum {\n #define offsetof(TYPE, MEMBER)\t((size_t)&((TYPE *)0)->MEMBER)\n #endif\n \n+#define sizeof_field(structure, field) sizeof((((structure *)0)->field))\n+\n /**\n  * offsetofend(TYPE, MEMBER)\n  *\ndiff --git a/mm/slab.c b/mm/slab.c\nindex 04dec48c3ed7..87b6e5e0cdaf 100644\n--- a/mm/slab.c\n+++ b/mm/slab.c\n@@ -1281,7 +1281,7 @@ void __init kmem_cache_init(void)\n \tcreate_boot_cache(kmem_cache, \"kmem_cache\",\n \t\toffsetof(struct kmem_cache, node) +\n \t\t\t\t  nr_node_ids * sizeof(struct kmem_cache_node *),\n-\t\t\t\t  SLAB_HWCACHE_ALIGN);\n+\t\t\t\t  SLAB_HWCACHE_ALIGN, 0, 0);\n \tlist_add(&kmem_cache->list, &slab_caches);\n \tslab_state = PARTIAL;\n \ndiff --git a/mm/slab.h b/mm/slab.h\nindex 073362816acc..044755ff9632 100644\n--- a/mm/slab.h\n+++ b/mm/slab.h\n@@ -21,6 +21,8 @@ struct kmem_cache {\n \tunsigned int size;\t/* The aligned/padded/added on size  */\n \tunsigned int align;\t/* Alignment as calculated */\n \tunsigned long flags;\t/* Active flags on the slab */\n+\tsize_t useroffset;\t/* Usercopy region offset */\n+\tsize_t usersize;\t/* Usercopy region size */\n \tconst char *name;\t/* Slab name for sysfs */\n \tint refcount;\t\t/* Use counter */\n \tvoid (*ctor)(void *);\t/* Called on object slot creation */\n@@ -97,7 +99,8 @@ extern int __kmem_cache_create(struct kmem_cache *, unsigned long flags);\n extern struct kmem_cache *create_kmalloc_cache(const char *name, size_t size,\n \t\t\tunsigned long flags);\n extern void create_boot_cache(struct kmem_cache *, const char *name,\n-\t\t\tsize_t size, unsigned long flags);\n+\t\t\tsize_t size, unsigned long flags, size_t useroffset,\n+\t\t\tsize_t usersize);\n \n int slab_unmergeable(struct kmem_cache *s);\n struct kmem_cache *find_mergeable(size_t size, size_t align,\ndiff --git a/mm/slab_common.c b/mm/slab_common.c\nindex 904a83be82de..36408f5f2a34 100644\n--- a/mm/slab_common.c\n+++ b/mm/slab_common.c\n@@ -272,6 +272,9 @@ int slab_unmergeable(struct kmem_cache *s)\n \tif (s->ctor)\n 
\t\treturn 1;\n \n+\tif (s->usersize)\n+\t\treturn 1;\n+\n \t/*\n \t * We may have set a slab to be unmergeable during bootstrap.\n \t */\n@@ -357,12 +360,16 @@ unsigned long calculate_alignment(unsigned long flags,\n \n static struct kmem_cache *create_cache(const char *name,\n \t\tsize_t object_size, size_t size, size_t align,\n-\t\tunsigned long flags, void (*ctor)(void *),\n+\t\tunsigned long flags, size_t useroffset,\n+\t\tsize_t usersize, void (*ctor)(void *),\n \t\tstruct mem_cgroup *memcg, struct kmem_cache *root_cache)\n {\n \tstruct kmem_cache *s;\n \tint err;\n \n+\tif (WARN_ON(useroffset + usersize > object_size))\n+\t\tuseroffset = usersize = 0;\n+\n \terr = -ENOMEM;\n \ts = kmem_cache_zalloc(kmem_cache, GFP_KERNEL);\n \tif (!s)\n@@ -373,6 +380,8 @@ static struct kmem_cache *create_cache(const char *name,\n \ts->size = size;\n \ts->align = align;\n \ts->ctor = ctor;\n+\ts->useroffset = useroffset;\n+\ts->usersize = usersize;\n \n \terr = init_memcg_params(s, memcg, root_cache);\n \tif (err)\n@@ -397,11 +406,13 @@ static struct kmem_cache *create_cache(const char *name,\n }\n \n /*\n- * kmem_cache_create - Create a cache.\n+ * kmem_cache_create_usercopy - Create a cache.\n  * @name: A string which is used in /proc/slabinfo to identify this cache.\n  * @size: The size of objects to be created in this cache.\n  * @align: The required alignment for the objects.\n  * @flags: SLAB flags\n+ * @useroffset: Usercopy region offset\n+ * @usersize: Usercopy region size\n  * @ctor: A constructor for the objects.\n  *\n  * Returns a ptr to the cache on success, NULL on failure.\n@@ -421,8 +432,9 @@ static struct kmem_cache *create_cache(const char *name,\n  * as davem.\n  */\n struct kmem_cache *\n-kmem_cache_create(const char *name, size_t size, size_t align,\n-\t\t  unsigned long flags, void (*ctor)(void *))\n+kmem_cache_create_usercopy(const char *name, size_t size, size_t align,\n+\t\t  unsigned long flags, size_t useroffset, size_t usersize,\n+\t\t  void 
(*ctor)(void *))\n {\n \tstruct kmem_cache *s = NULL;\n \tconst char *cache_name;\n@@ -453,7 +465,13 @@ kmem_cache_create(const char *name, size_t size, size_t align,\n \t */\n \tflags &= CACHE_CREATE_MASK;\n \n-\ts = __kmem_cache_alias(name, size, align, flags, ctor);\n+\t/* Fail closed on bad usersize or useroffset values. */\n+\tif (WARN_ON(!usersize && useroffset) ||\n+\t    WARN_ON(size < usersize || size - usersize < useroffset))\n+\t\tusersize = useroffset = 0;\n+\n+\tif (!usersize)\n+\t\ts = __kmem_cache_alias(name, size, align, flags, ctor);\n \tif (s)\n \t\tgoto out_unlock;\n \n@@ -465,7 +483,7 @@ kmem_cache_create(const char *name, size_t size, size_t align,\n \n \ts = create_cache(cache_name, size, size,\n \t\t\t calculate_alignment(flags, align, size),\n-\t\t\t flags, ctor, NULL, NULL);\n+\t\t\t flags, useroffset, usersize, ctor, NULL, NULL);\n \tif (IS_ERR(s)) {\n \t\terr = PTR_ERR(s);\n \t\tkfree_const(cache_name);\n@@ -491,6 +509,15 @@ kmem_cache_create(const char *name, size_t size, size_t align,\n \t}\n \treturn s;\n }\n+EXPORT_SYMBOL(kmem_cache_create_usercopy);\n+\n+struct kmem_cache *\n+kmem_cache_create(const char *name, size_t size, size_t align,\n+\t\tunsigned long flags, void (*ctor)(void *))\n+{\n+\treturn kmem_cache_create_usercopy(name, size, align, flags, 0, size,\n+\t\t\t\t\t  ctor);\n+}\n EXPORT_SYMBOL(kmem_cache_create);\n \n static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work)\n@@ -603,6 +630,7 @@ void memcg_create_kmem_cache(struct mem_cgroup *memcg,\n \ts = create_cache(cache_name, root_cache->object_size,\n \t\t\t root_cache->size, root_cache->align,\n \t\t\t root_cache->flags & CACHE_CREATE_MASK,\n+\t\t\t root_cache->useroffset, root_cache->usersize,\n \t\t\t root_cache->ctor, memcg, root_cache);\n \t/*\n \t * If we could not create a memcg cache, do not complain, because\n@@ -870,13 +898,15 @@ bool slab_is_available(void)\n #ifndef CONFIG_SLOB\n /* Create a cache during boot when no slab services are 
available yet */\n void __init create_boot_cache(struct kmem_cache *s, const char *name, size_t size,\n-\t\tunsigned long flags)\n+\t\tunsigned long flags, size_t useroffset, size_t usersize)\n {\n \tint err;\n \n \ts->name = name;\n \ts->size = s->object_size = size;\n \ts->align = calculate_alignment(flags, ARCH_KMALLOC_MINALIGN, size);\n+\ts->useroffset = useroffset;\n+\ts->usersize = usersize;\n \n \tslab_init_memcg_params(s);\n \n@@ -897,7 +927,7 @@ struct kmem_cache *__init create_kmalloc_cache(const char *name, size_t size,\n \tif (!s)\n \t\tpanic(\"Out of memory when creating slab %s\\n\", name);\n \n-\tcreate_boot_cache(s, name, size, flags);\n+\tcreate_boot_cache(s, name, size, flags, 0, size);\n \tlist_add(&s->list, &slab_caches);\n \tmemcg_link_cache(s);\n \ts->refcount = 1;\ndiff --git a/mm/slub.c b/mm/slub.c\nindex 163352c537ab..fae637726c44 100644\n--- a/mm/slub.c\n+++ b/mm/slub.c\n@@ -4201,7 +4201,7 @@ void __init kmem_cache_init(void)\n \tkmem_cache = &boot_kmem_cache;\n \n \tcreate_boot_cache(kmem_cache_node, \"kmem_cache_node\",\n-\t\tsizeof(struct kmem_cache_node), SLAB_HWCACHE_ALIGN);\n+\t\tsizeof(struct kmem_cache_node), SLAB_HWCACHE_ALIGN, 0, 0);\n \n \tregister_hotmemory_notifier(&slab_memory_callback_nb);\n \n@@ -4211,7 +4211,7 @@ void __init kmem_cache_init(void)\n \tcreate_boot_cache(kmem_cache, \"kmem_cache\",\n \t\t\toffsetof(struct kmem_cache, node) +\n \t\t\t\tnr_node_ids * sizeof(struct kmem_cache_node *),\n-\t\t       SLAB_HWCACHE_ALIGN);\n+\t\t       SLAB_HWCACHE_ALIGN, 0, 0);\n \n \tkmem_cache = bootstrap(&boot_kmem_cache);\n \n@@ -5081,6 +5081,12 @@ static ssize_t cache_dma_show(struct kmem_cache *s, char *buf)\n SLAB_ATTR_RO(cache_dma);\n #endif\n \n+static ssize_t usersize_show(struct kmem_cache *s, char *buf)\n+{\n+\treturn sprintf(buf, \"%zu\\n\", s->usersize);\n+}\n+SLAB_ATTR_RO(usersize);\n+\n static ssize_t destroy_by_rcu_show(struct kmem_cache *s, char *buf)\n {\n \treturn sprintf(buf, \"%d\\n\", !!(s->flags & 
SLAB_TYPESAFE_BY_RCU));\n@@ -5455,6 +5461,7 @@ static struct attribute *slab_attrs[] = {\n #ifdef CONFIG_FAILSLAB\n \t&failslab_attr.attr,\n #endif\n+\t&usersize_attr.attr,\n \n \tNULL\n };\n","prefixes":["v3","01/31"]}