
[9/9] mm: SLUB hardened usercopy support

Message ID 1467843928-29351-10-git-send-email-keescook@chromium.org (mailing list archive)
State Not Applicable

Commit Message

Kees Cook July 6, 2016, 10:25 p.m. UTC
Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
SLUB allocator to catch any copies that may span objects.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 init/Kconfig |  1 +
 mm/slub.c    | 27 +++++++++++++++++++++++++++
 2 files changed, 28 insertions(+)
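
To illustrate the class of bug the check is aimed at: a hypothetical copy whose length exceeds the size of the slab object it starts in. The struct, sizes, and names below are made up for the example and are not taken from the series:

struct example {			/* 16 bytes, so it lands in kmalloc-16 */
	char buf[16];
};

static int leak_example(void __user *ubuf)
{
	struct example *e = kmalloc(sizeof(*e), GFP_KERNEL);
	int ret;

	if (!e)
		return -ENOMEM;
	/*
	 * 64 bytes is larger than the 16-byte object, so the copy would
	 * spill into neighbouring objects; with CONFIG_HARDENED_USERCOPY
	 * the SLUB hook added below catches this instead of letting the
	 * copy silently leak heap memory.
	 */
	ret = copy_to_user(ubuf, e->buf, 64) ? -EFAULT : 0;
	kfree(e);
	return ret;
}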

Comments

Kees Cook July 7, 2016, 6:56 p.m. UTC | #1
On Thu, Jul 7, 2016 at 12:35 AM, Michael Ellerman <mpe@ellerman.id.au> wrote:
> Kees Cook <keescook@chromium.org> writes:
>
>> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
>> SLUB allocator to catch any copies that may span objects.
>>
>> Based on code from PaX and grsecurity.
>>
>> Signed-off-by: Kees Cook <keescook@chromium.org>
>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 825ff4505336..0c8ace04f075 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -3614,6 +3614,33 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
>>  EXPORT_SYMBOL(__kmalloc_node);
>>  #endif
>>
>> +#ifdef CONFIG_HARDENED_USERCOPY
>> +/*
>> + * Rejects objects that are incorrectly sized.
>> + *
>> + * Returns NULL if check passes, otherwise const char * to name of cache
>> + * to indicate an error.
>> + */
>> +const char *__check_heap_object(const void *ptr, unsigned long n,
>> +                             struct page *page)
>> +{
>> +     struct kmem_cache *s;
>> +     unsigned long offset;
>> +
>> +     /* Find object. */
>> +     s = page->slab_cache;
>> +
>> +     /* Find offset within object. */
>> +     offset = (ptr - page_address(page)) % s->size;
>> +
>> +     /* Allow address range falling entirely within object size. */
>> +     if (offset <= s->object_size && n <= s->object_size - offset)
>> +             return NULL;
>> +
>> +     return s->name;
>> +}
>
> I gave this a quick spin on powerpc, it blew up immediately :)

Wheee :) This series is rather easy to test: blows up REALLY quickly
if it's wrong. ;)

FWIW, -next also has a bunch of additional lkdtm tests for the various
protections and directions.

>
>   Brought up 16 CPUs
>   usercopy: kernel memory overwrite attempt detected to c0000001fe023868 (kmalloc-16) (9 bytes)
>   CPU: 8 PID: 103 Comm: kdevtmpfs Not tainted 4.7.0-rc3-00098-g09d9556ae5d1 #55
>   Call Trace:
>   [c0000001fa0cfb40] [c0000000009bdbe8] dump_stack+0xb0/0xf0 (unreliable)
>   [c0000001fa0cfb80] [c00000000029cf44] __check_object_size+0x74/0x320
>   [c0000001fa0cfc00] [c00000000005d4d0] copy_from_user+0x60/0xd4
>   [c0000001fa0cfc40] [c00000000022b6cc] memdup_user+0x5c/0xf0
>   [c0000001fa0cfc80] [c00000000022b90c] strndup_user+0x7c/0x110
>   [c0000001fa0cfcc0] [c0000000002d6c28] SyS_mount+0x58/0x180
>   [c0000001fa0cfd10] [c0000000005ee908] devtmpfsd+0x98/0x210
>   [c0000001fa0cfd80] [c0000000000df810] kthread+0x110/0x130
>   [c0000001fa0cfe30] [c0000000000095e8] ret_from_kernel_thread+0x5c/0x74
>
> SLUB tracing says:
>
>   TRACE kmalloc-16 alloc 0xc0000001fe023868 inuse=186 fp=0x          (null)
>
> Which is not 16-byte aligned, which seems to be caused by the red zone?
> The following patch fixes it for me, but I don't know SLUB enough to say
> if it's always correct.
>
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 0c8ace04f075..66191ea4545a 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3630,6 +3630,9 @@ const char *__check_heap_object(const void *ptr, unsigned long n,
>         /* Find object. */
>         s = page->slab_cache;
>
> +       /* Subtract red zone if enabled */
> +       ptr = restore_red_left(s, ptr);
> +

Ah, interesting. Just to make sure: you've built with CONFIG_SLUB_DEBUG,
and either enabled CONFIG_SLUB_DEBUG_ON or booted with slub_debug or
slub_debug=z?

Thanks for the slub fix!

I wonder if this code should be using size_from_object() instead of s->size?

(It looks like SLAB is already handling this via its obj_offset() call.)

-Kees

>         /* Find offset within object. */
>         offset = (ptr - page_address(page)) % s->size;
>
> cheers
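
(For context, the two SLUB helpers discussed above come with the left red zone support in mm/slub.c; roughly, and subject to the exact tree in use, they behave like this:)

static inline void *restore_red_left(struct kmem_cache *s, void *p)
{
	/* Step back over the left red zone to the start of the slot. */
	if (s->flags & SLAB_RED_ZONE)
		p -= s->red_left_pad;

	return p;
}

static inline size_t size_from_object(struct kmem_cache *s)
{
	/* Per-object size with the left red zone padding excluded. */
	if (s->flags & SLAB_RED_ZONE)
		return s->size - s->red_left_pad;

	return s->size;
}
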
Michael Ellerman July 8, 2016, 10:19 a.m. UTC | #2
Kees Cook <keescook@chromium.org> writes:
> On Thu, Jul 7, 2016 at 12:35 AM, Michael Ellerman <mpe@ellerman.id.au> wrote:
>> I gave this a quick spin on powerpc, it blew up immediately :)
>
> Wheee :) This series is rather easy to test: blows up REALLY quickly
> if it's wrong. ;)

Better than subtle race conditions, which is the usual :)

>> diff --git a/mm/slub.c b/mm/slub.c
>> index 0c8ace04f075..66191ea4545a 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -3630,6 +3630,9 @@ const char *__check_heap_object(const void *ptr, unsigned long n,
>>         /* Find object. */
>>         s = page->slab_cache;
>>
>> +       /* Subtract red zone if enabled */
>> +       ptr = restore_red_left(s, ptr);
>> +
>
> Ah, interesting. Just to make sure: you've built with CONFIG_SLUB_DEBUG,
> and either enabled CONFIG_SLUB_DEBUG_ON or booted with slub_debug or
> slub_debug=z?

Yeah, built with CONFIG_SLUB_DEBUG_ON, and booted both with and without slub_debug
options.

> Thanks for the slub fix!
>
> I wonder if this code should be using size_from_object() instead of s->size?

Hmm, not sure. Who's the SLUB maintainer? :)

I was modelling it on the logic in check_valid_pointer(), which also does the
restore_red_left(), and then checks for % s->size:

static inline int check_valid_pointer(struct kmem_cache *s,
				struct page *page, void *object)
{
	void *base;

	if (!object)
		return 1;

	base = page_address(page);
	object = restore_red_left(s, object);
	if (object < base || object >= base + page->objects * s->size ||
		(object - base) % s->size) {
		return 0;
	}

	return 1;
}

cheers
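
(Folding Michael's restore_red_left() adjustment into the posted helper gives something like the untested sketch below; whether the modulo or the bound should instead use size_from_object() remains the open question above.)

const char *__check_heap_object(const void *ptr, unsigned long n,
				struct page *page)
{
	struct kmem_cache *s;
	unsigned long offset;

	/* Find object. */
	s = page->slab_cache;

	/*
	 * Step back over the left red zone (if slub_debug=z is active) so
	 * that the offset below is measured from the start of the object
	 * rather than including the red zone padding.
	 */
	ptr = restore_red_left(s, (void *)ptr);

	/* Find offset within object. */
	offset = (ptr - page_address(page)) % s->size;

	/* Allow address range falling entirely within object size. */
	if (offset <= s->object_size && n <= s->object_size - offset)
		return NULL;

	return s->name;
}
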

Patch

diff --git a/init/Kconfig b/init/Kconfig
index 798c2020ee7c..1c4711819dfd 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1765,6 +1765,7 @@ config SLAB
 
 config SLUB
 	bool "SLUB (Unqueued Allocator)"
+	select HAVE_HARDENED_USERCOPY_ALLOCATOR
 	help
 	   SLUB is a slab allocator that minimizes cache line usage
 	   instead of managing queues of cached objects (SLAB approach).
diff --git a/mm/slub.c b/mm/slub.c
index 825ff4505336..0c8ace04f075 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3614,6 +3614,33 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 EXPORT_SYMBOL(__kmalloc_node);
 #endif
 
+#ifdef CONFIG_HARDENED_USERCOPY
+/*
+ * Rejects objects that are incorrectly sized.
+ *
+ * Returns NULL if check passes, otherwise const char * to name of cache
+ * to indicate an error.
+ */
+const char *__check_heap_object(const void *ptr, unsigned long n,
+				struct page *page)
+{
+	struct kmem_cache *s;
+	unsigned long offset;
+
+	/* Find object. */
+	s = page->slab_cache;
+
+	/* Find offset within object. */
+	offset = (ptr - page_address(page)) % s->size;
+
+	/* Allow address range falling entirely within object size. */
+	if (offset <= s->object_size && n <= s->object_size - offset)
+		return NULL;
+
+	return s->name;
+}
+#endif /* CONFIG_HARDENED_USERCOPY */
+
 static size_t __ksize(const void *object)
 {
 	struct page *page;
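
As a quick sanity check of the bounds test in the hunk above, with made-up numbers (a hypothetical cache where s->object_size == 32 and no red zone is active):

/*
 * offset =  0, n = 32:   0 <= 32 && 32 <= 32 - 0       -> pass, returns NULL
 * offset = 28, n =  4:  28 <= 32 &&  4 <= 32 - 28 (4)  -> pass, returns NULL
 * offset = 28, n =  8:  28 <= 32 &&  8 <= 32 - 28 (4)  -> fail, returns s->name
 * offset = 40, n =  1:  40 <= 32 is false              -> fail, returns s->name
 */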