[{"id":1416480,"web_url":"http://patchwork.ozlabs.org/comment/1416480/","msgid":"<CAGXu5jJQQNDMK7vxKqhwQukL771uOT-2No7fnNreQVLWs8UBMA@mail.gmail.com>","date":"2016-07-25T20:45:50","subject":"Re: [PATCH v4 12/12] mm: SLUB hardened usercopy support","submitter":{"id":10641,"url":"http://patchwork.ozlabs.org/api/people/10641/","name":"Kees Cook","email":"keescook@chromium.org"},"content":"On Mon, Jul 25, 2016 at 12:16 PM, Laura Abbott <labbott@redhat.com> wrote:\n> On 07/20/2016 01:27 PM, Kees Cook wrote:\n>>\n>> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the\n>> SLUB allocator to catch any copies that may span objects. Includes a\n>> redzone handling fix discovered by Michael Ellerman.\n>>\n>> Based on code from PaX and grsecurity.\n>>\n>> Signed-off-by: Kees Cook <keescook@chromium.org>\n>> Tested-by: Michael Ellerman <mpe@ellerman.id.au>\n>> ---\n>>  init/Kconfig |  1 +\n>>  mm/slub.c    | 36 ++++++++++++++++++++++++++++++++++++\n>>  2 files changed, 37 insertions(+)\n>>\n>> diff --git a/init/Kconfig b/init/Kconfig\n>> index 798c2020ee7c..1c4711819dfd 100644\n>> --- a/init/Kconfig\n>> +++ b/init/Kconfig\n>> @@ -1765,6 +1765,7 @@ config SLAB\n>>\n>>  config SLUB\n>>         bool \"SLUB (Unqueued Allocator)\"\n>> +       select HAVE_HARDENED_USERCOPY_ALLOCATOR\n>>         help\n>>            SLUB is a slab allocator that minimizes cache line usage\n>>            instead of managing queues of cached objects (SLAB approach).\n>> diff --git a/mm/slub.c b/mm/slub.c\n>> index 825ff4505336..7dee3d9a5843 100644\n>> --- a/mm/slub.c\n>> +++ b/mm/slub.c\n>> @@ -3614,6 +3614,42 @@ void *__kmalloc_node(size_t size, gfp_t flags, int\n>> node)\n>>  EXPORT_SYMBOL(__kmalloc_node);\n>>  #endif\n>>\n>> +#ifdef CONFIG_HARDENED_USERCOPY\n>> +/*\n>> + * Rejects objects that are incorrectly sized.\n>> + *\n>> + * Returns NULL if check passes, otherwise const char * to name of cache\n>> + * to indicate an error.\n>> + */\n>> +const char *__check_heap_object(const 
void *ptr, unsigned long n,\n>> +                               struct page *page)\n>> +{\n>> +       struct kmem_cache *s;\n>> +       unsigned long offset;\n>> +       size_t object_size;\n>> +\n>> +       /* Find object and usable object size. */\n>> +       s = page->slab_cache;\n>> +       object_size = slab_ksize(s);\n>> +\n>> +       /* Find offset within object. */\n>> +       offset = (ptr - page_address(page)) % s->size;\n>> +\n>> +       /* Adjust for redzone and reject if within the redzone. */\n>> +       if (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE) {\n>> +               if (offset < s->red_left_pad)\n>> +                       return s->name;\n>> +               offset -= s->red_left_pad;\n>> +       }\n>> +\n>> +       /* Allow address range falling entirely within object size. */\n>> +       if (offset <= object_size && n <= object_size - offset)\n>> +               return NULL;\n>> +\n>> +       return s->name;\n>> +}\n>> +#endif /* CONFIG_HARDENED_USERCOPY */\n>> +\n>\n>\n> I compared this against what check_valid_pointer does for SLUB_DEBUG\n> checking. I was hoping we could utilize that function to avoid\n> duplication but a) __check_heap_object needs to allow accesses anywhere\n> in the object, not just the beginning b) accessing page->objects\n> is racy without the addition of locking in SLUB_DEBUG.\n>\n> Still, the ptr < page_address(page) check from __check_heap_object would\n> be good to add to avoid generating garbage large offsets and trying to\n> infer C math.\n>\n> diff --git a/mm/slub.c b/mm/slub.c\n> index 7dee3d9..5370e4f 100644\n> --- a/mm/slub.c\n> +++ b/mm/slub.c\n> @@ -3632,6 +3632,9 @@ const char *__check_heap_object(const void *ptr,\n> unsigned long n,\n>         s = page->slab_cache;\n>         object_size = slab_ksize(s);\n>  +       if (ptr < page_address(page))\n> +               return s->name;\n> +\n>         /* Find offset within object. 
*/\n>         offset = (ptr - page_address(page)) % s->size;\n>\n> With that, you can add\n>\n> Reviewed-by: Laura Abbott <labbott@redhat.com>\n\nCool, I'll add that.\n\nShould I add your reviewed-by for this patch only or for the whole series?\n\nThanks!\n\n-Kees\n\n>\n>>  static size_t __ksize(const void *object)\n>>  {\n>>         struct page *page;\n>>\n>\n> Thanks,\n> Laura","headers":{"Return-Path":"<linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org>","X-Original-To":["patchwork-incoming@ozlabs.org","linuxppc-dev@lists.ozlabs.org"],"Delivered-To":["patchwork-incoming@ozlabs.org","linuxppc-dev@lists.ozlabs.org"],"Received":["from lists.ozlabs.org (lists.ozlabs.org [103.22.144.68])\n\t(using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))\n\t(No client certificate requested)\n\tby ozlabs.org (Postfix) with ESMTPS id 3rytc50wL4z9s5J\n\tfor <patchwork-incoming@ozlabs.org>;\n\tTue, 26 Jul 2016 06:47:33 +1000 (AEST)","from ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3])\n\tby lists.ozlabs.org (Postfix) with ESMTP id 3rytc472q9zDqSM\n\tfor <patchwork-incoming@ozlabs.org>;\n\tTue, 26 Jul 2016 06:47:32 +1000 (AEST)","from mail-wm0-x235.google.com (mail-wm0-x235.google.com\n\t[IPv6:2a00:1450:400c:c09::235])\n\t(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128\n\tbits)) (No client certificate requested)\n\tby lists.ozlabs.org (Postfix) with ESMTPS id 3rytZB6LBVzDqSM\n\tfor <linuxppc-dev@lists.ozlabs.org>;\n\tTue, 26 Jul 2016 06:45:54 +1000 (AEST)","by mail-wm0-x235.google.com with SMTP id f65so149672190wmi.0\n\tfor <linuxppc-dev@lists.ozlabs.org>;\n\tMon, 25 Jul 2016 13:45:54 -0700 (PDT)","by 10.28.167.78 with HTTP; Mon, 25 Jul 2016 13:45:50 -0700 (PDT)"],"Authentication-Results":["ozlabs.org;\n\tdkim=fail reason=\"signature verification failed\" (2048-bit key;\n\tunprotected) header.d=google.com header.i=@google.com\n\theader.b=gNFRwsfe; \n\tdkim=fail reason=\"signature verification failed\" (1024-bit key;\n\tunprotected) 
header.d=chromium.org header.i=@chromium.org\n\theader.b=hZSQaq8i; dkim-atps=neutral","lists.ozlabs.org;\n\tdkim=fail reason=\"signature verification failed\" (2048-bit key;\n\tunprotected) header.d=google.com header.i=@google.com\n\theader.b=gNFRwsfe; \n\tdkim=fail reason=\"signature verification failed\" (1024-bit key;\n\tunprotected) header.d=chromium.org header.i=@chromium.org\n\theader.b=hZSQaq8i; dkim-atps=neutral","lists.ozlabs.org; dkim=pass (2048-bit key;\n\tunprotected) header.d=google.com header.i=@google.com\n\theader.b=gNFRwsfe; dkim=pass (1024-bit key;\n\tunprotected) header.d=chromium.org header.i=@chromium.org\n\theader.b=hZSQaq8i; dkim-atps=neutral"],"DKIM-Signature":["v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com;\n\ts=20120113; \n\th=mime-version:sender:in-reply-to:references:from:date:message-id\n\t:subject:to:cc;\n\tbh=26fQHTi8fctQ+6LATTAHudCts0nSg7qT1amj2sFhVWo=;\n\tb=gNFRwsfez9Fbvx5P0CACdzxfpnPZy/EKkVMFN+y/5ElB2DjdWSTwdoon+G1Dx69ZCY\n\tFJd6ZVXn3t0KkPtuZ1W29lprFkBMGDSPdojn1XmcxatpwMzEUR6OGGYTtRMslTGgeZVl\n\t2mCFzQjeLTaYueNF7dZZIBWEmFJ1wjApdBl2u57lYOk/RlwT+TRdH51mE6ITFYim2zkb\n\tBFXGOA7fN9oDkTf7EW48IQiXYF96m82r+Iy6/i74C3Whk+ku0T4+/u5uAZ/d+daXdjoL\n\tS8sMAIfEIcLZufQur+QK2NdIYQ4JbCyywjX/kRYbVeoJycrny4kmCff6Lgi6rVap8r8a\n\tBeAg==","v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org;\n\ts=google; \n\th=mime-version:sender:in-reply-to:references:from:date:message-id\n\t:subject:to:cc;\n\tbh=26fQHTi8fctQ+6LATTAHudCts0nSg7qT1amj2sFhVWo=;\n\tb=hZSQaq8iNZhCqMVVsxusjlCK8MlfXEzFKPWJB9Y7u+NlsrnKUGfDzY+B7AYndXt4FY\n\tfd4FokE73SmfikVFJsqgL/QJJ6OQ0F/ApQiizPKKv1Wefb+5Eze00sGX7gs2RlI/9f8Q\n\tOVnovU3ER3fbltbaFBgpT4AvmbtP1b/yqr1S4="],"X-Google-DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=1e100.net; 
s=20130820;\n\th=x-gm-message-state:mime-version:sender:in-reply-to:references:from\n\t:date:message-id:subject:to:cc;\n\tbh=26fQHTi8fctQ+6LATTAHudCts0nSg7qT1amj2sFhVWo=;\n\tb=F3hHtQDprWtHs2OXrsAh7iamFB8CmOXMFMx8gp5Mlw2Bt6B+basXpe4EnQUNvm3ukR\n\t2sy1rV1bWCiDnfBWvvOQranP+5sRUv0E5k7Lr6nLZ26k+6ShmQ7QqgcTQ1b9hDNHrG/K\n\tfvzBghWIV3ZnMh1XL5XfXu9y8722YKxQX+g6wtZWfDq6h2lF3g0ikuru2UFKQEwhIelu\n\t0SOSkbTEh3eRVsi6HlRudr2f/FJfMRo/pwAVBKbniQqSHzsvLk4AcE97Hv2+nGPH3sMy\n\tEZBiXstsNpUMB+aKeEIa8wXu+QvxzfhXBuU9jrbHFLyX4uBydRUYLqgVlTaZ3Uan4rzG\n\tKIFw==","X-Gm-Message-State":"ALyK8tLSOD9EPviYXKP6NcjzHEn684ICc9JfxzrtyG6ztVbCgHeQi3UDZiGakZzZPlY7F/hD9e1NcurHGtIsMK8j","X-Received":"by 10.28.196.136 with SMTP id u130mr39945573wmf.83.1469479551069;\n\tMon, 25 Jul 2016 13:45:51 -0700 (PDT)","MIME-Version":"1.0","In-Reply-To":"<0f980e84-b587-3d9e-3c26-ad57f947c08b@redhat.com>","References":"<1469046427-12696-1-git-send-email-keescook@chromium.org>\n\t<1469046427-12696-13-git-send-email-keescook@chromium.org>\n\t<0f980e84-b587-3d9e-3c26-ad57f947c08b@redhat.com>","From":"Kees Cook <keescook@chromium.org>","Date":"Mon, 25 Jul 2016 13:45:50 -0700","X-Google-Sender-Auth":"bjQl8fCcR7Vg_Jac0PaE6gdVl_I","Message-ID":"<CAGXu5jJQQNDMK7vxKqhwQukL771uOT-2No7fnNreQVLWs8UBMA@mail.gmail.com>","Subject":"Re: [PATCH v4 12/12] mm: SLUB hardened usercopy support","To":"Laura Abbott <labbott@redhat.com>","X-BeenThere":"linuxppc-dev@lists.ozlabs.org","X-Mailman-Version":"2.1.22","Precedence":"list","List-Id":"Linux on PowerPC Developers Mail 
List\n\t<linuxppc-dev.lists.ozlabs.org>","List-Unsubscribe":"<https://lists.ozlabs.org/options/linuxppc-dev>,\n\t<mailto:linuxppc-dev-request@lists.ozlabs.org?subject=unsubscribe>","List-Archive":"<http://lists.ozlabs.org/pipermail/linuxppc-dev/>","List-Post":"<mailto:linuxppc-dev@lists.ozlabs.org>","List-Help":"<mailto:linuxppc-dev-request@lists.ozlabs.org?subject=help>","List-Subscribe":"<https://lists.ozlabs.org/listinfo/linuxppc-dev>,\n\t<mailto:linuxppc-dev-request@lists.ozlabs.org?subject=subscribe>","Cc":"Jan Kara <jack@suse.cz>, \"kernel-hardening@lists.openwall.com\"\n\t<kernel-hardening@lists.openwall.com>,\n\tWill Deacon <will.deacon@arm.com>, \n\tLinux-MM <linux-mm@kvack.org>, sparclinux <sparclinux@vger.kernel.org>,\n\tlinux-ia64@vger.kernel.org, Christoph Lameter <cl@linux.com>,\n\tAndrea Arcangeli <aarcange@redhat.com>,\n\tlinux-arch <linux-arch@vger.kernel.org>,\n\t\"x86@kernel.org\" <x86@kernel.org>, Russell King <linux@armlinux.org.uk>,\n\t\"linux-arm-kernel@lists.infradead.org\"\n\t<linux-arm-kernel@lists.infradead.org>, \n\tCatalin Marinas <catalin.marinas@arm.com>,\n\tPaX Team <pageexec@freemail.hu>, \n\tBorislav Petkov <bp@suse.de>, Mathias Krause <minipli@googlemail.com>,\n\tFenghua Yu <fenghua.yu@intel.com>, Rik van Riel <riel@redhat.com>,\n\tDavid Rientjes <rientjes@google.com>, Tony Luck <tony.luck@intel.com>,\n\tAndy Lutomirski <luto@kernel.org>, Josh Poimboeuf <jpoimboe@redhat.com>, \n\tAndrew Morton <akpm@linux-foundation.org>,\n\tDmitry Vyukov <dvyukov@google.com>, \n\tLaura Abbott <labbott@fedoraproject.org>,\n\tBrad Spengler <spender@grsecurity.net>,\n\tArd Biesheuvel <ard.biesheuvel@linaro.org>,\n\tLKML <linux-kernel@vger.kernel.org>, Pekka Enberg <penberg@kernel.org>,\n\tDaniel Micay <danielmicay@gmail.com>,\n\tCasey Schaufler <casey@schaufler-ca.com>, \n\tJoonsoo Kim <iamjoonsoo.kim@lge.com>,\n\t\"linuxppc-dev@lists.ozlabs.org\" <linuxppc-dev@lists.ozlabs.org>,\n\t\"David S. 
Miller\" <davem@davemloft.net>","Content-Type":"text/plain; charset=\"utf-8\"","Content-Transfer-Encoding":"base64","Errors-To":"linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org","Sender":"\"Linuxppc-dev\"\n\t<linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org>"}},{"id":1416514,"web_url":"http://patchwork.ozlabs.org/comment/1416514/","msgid":"<1469482923.30053.122.camel@redhat.com>","date":"2016-07-25T21:42:03","subject":"Re: [PATCH v4 12/12] mm: SLUB hardened usercopy support","submitter":{"id":781,"url":"http://patchwork.ozlabs.org/api/people/781/","name":"Rik van Riel","email":"riel@redhat.com"},"content":"On Mon, 2016-07-25 at 12:16 -0700, Laura Abbott wrote:\n> On 07/20/2016 01:27 PM, Kees Cook wrote:\n> > Under CONFIG_HARDENED_USERCOPY, this adds object size checking to\n> > the\n> > SLUB allocator to catch any copies that may span objects. Includes\n> > a\n> > redzone handling fix discovered by Michael Ellerman.\n> > \n> > Based on code from PaX and grsecurity.\n> > \n> > Signed-off-by: Kees Cook <keescook@chromium.org>\n> > Tested-by: Michael Ellerman <mpe@ellerman.id.au>\n> > ---\n> >  init/Kconfig |  1 +\n> >  mm/slub.c    | 36 ++++++++++++++++++++++++++++++++++++\n> >  2 files changed, 37 insertions(+)\n> > \n> > diff --git a/init/Kconfig b/init/Kconfig\n> > index 798c2020ee7c..1c4711819dfd 100644\n> > --- a/init/Kconfig\n> > +++ b/init/Kconfig\n> > @@ -1765,6 +1765,7 @@ config SLAB\n> > \n> >  config SLUB\n> >  \tbool \"SLUB (Unqueued Allocator)\"\n> > +\tselect HAVE_HARDENED_USERCOPY_ALLOCATOR\n> >  \thelp\n> >  \t   SLUB is a slab allocator that minimizes cache line\n> > usage\n> >  \t   instead of managing queues of cached objects (SLAB\n> > approach).\n> > diff --git a/mm/slub.c b/mm/slub.c\n> > index 825ff4505336..7dee3d9a5843 100644\n> > --- a/mm/slub.c\n> > +++ b/mm/slub.c\n> > @@ -3614,6 +3614,42 @@ void *__kmalloc_node(size_t size, gfp_t\n> > flags, int node)\n> >  EXPORT_SYMBOL(__kmalloc_node);\n> >  
#endif\n> > \n> > +#ifdef CONFIG_HARDENED_USERCOPY\n> > +/*\n> > + * Rejects objects that are incorrectly sized.\n> > + *\n> > + * Returns NULL if check passes, otherwise const char * to name of\n> > cache\n> > + * to indicate an error.\n> > + */\n> > +const char *__check_heap_object(const void *ptr, unsigned long n,\n> > +\t\t\t\tstruct page *page)\n> > +{\n> > +\tstruct kmem_cache *s;\n> > +\tunsigned long offset;\n> > +\tsize_t object_size;\n> > +\n> > +\t/* Find object and usable object size. */\n> > +\ts = page->slab_cache;\n> > +\tobject_size = slab_ksize(s);\n> > +\n> > +\t/* Find offset within object. */\n> > +\toffset = (ptr - page_address(page)) % s->size;\n> > +\n> > +\t/* Adjust for redzone and reject if within the redzone. */\n> > +\tif (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE) {\n> > +\t\tif (offset < s->red_left_pad)\n> > +\t\t\treturn s->name;\n> > +\t\toffset -= s->red_left_pad;\n> > +\t}\n> > +\n> > +\t/* Allow address range falling entirely within object\n> > size. */\n> > +\tif (offset <= object_size && n <= object_size - offset)\n> > +\t\treturn NULL;\n> > +\n> > +\treturn s->name;\n> > +}\n> > +#endif /* CONFIG_HARDENED_USERCOPY */\n> > +\n> \n> I compared this against what check_valid_pointer does for SLUB_DEBUG\n> checking. 
I was hoping we could utilize that function to avoid\n> duplication but a) __check_heap_object needs to allow accesses\n> anywhere\n> in the object, not just the beginning b) accessing page->objects\n> is racy without the addition of locking in SLUB_DEBUG.\n> \n> Still, the ptr < page_address(page) check from __check_heap_object\n> would\n> be good to add to avoid generating garbage large offsets and trying\n> to\n> infer C math.\n> \n> diff --git a/mm/slub.c b/mm/slub.c\n> index 7dee3d9..5370e4f 100644\n> --- a/mm/slub.c\n> +++ b/mm/slub.c\n> @@ -3632,6 +3632,9 @@ const char *__check_heap_object(const void\n> *ptr, unsigned long n,\n>          s = page->slab_cache;\n>          object_size = slab_ksize(s);\n>   \n> +       if (ptr < page_address(page))\n> +               return s->name;\n> +\n>          /* Find offset within object. */\n>          offset = (ptr - page_address(page)) % s->size;\n> \n\nI don't get it, isn't that already guaranteed because we\nlook for the page that ptr is in, before __check_heap_object\nis called?\n\nSpecifically, in patch 3/12:\n\n+       page = virt_to_head_page(ptr);\n+\n+       /* Check slab allocator for flags and size. */\n+       if (PageSlab(page))\n+               return __check_heap_object(ptr, n, page);\n\nHow can that generate a ptr that is not inside the page?\n\nWhat am I overlooking?  And, should it be in the changelog or\na comment? 
:)","headers":{"Return-Path":"<linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org>","X-Original-To":["patchwork-incoming@ozlabs.org","linuxppc-dev@lists.ozlabs.org"],"Delivered-To":["patchwork-incoming@ozlabs.org","linuxppc-dev@lists.ozlabs.org"],"Received":["from lists.ozlabs.org (lists.ozlabs.org [103.22.144.68])\n\t(using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))\n\t(No client certificate requested)\n\tby ozlabs.org (Postfix) with ESMTPS id 3ryvrw5hptz9sdg\n\tfor <patchwork-incoming@ozlabs.org>;\n\tTue, 26 Jul 2016 07:43:44 +1000 (AEST)","from ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3])\n\tby lists.ozlabs.org (Postfix) with ESMTP id 3ryvrw4twbzDrFK\n\tfor <patchwork-incoming@ozlabs.org>;\n\tTue, 26 Jul 2016 07:43:44 +1000 (AEST)","from mx1.redhat.com (mx1.redhat.com [209.132.183.28])\n\t(using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))\n\t(No client certificate requested)\n\tby lists.ozlabs.org (Postfix) with ESMTPS id 3ryvqD3QB9zDqf8\n\tfor <linuxppc-dev@lists.ozlabs.org>;\n\tTue, 26 Jul 2016 07:42:16 +1000 (AEST)","from int-mx11.intmail.prod.int.phx2.redhat.com\n\t(int-mx11.intmail.prod.int.phx2.redhat.com [10.5.11.24])\n\t(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256\n\tbits)) (No client certificate requested)\n\tby mx1.redhat.com (Postfix) with ESMTPS id 8082720264;\n\tMon, 25 Jul 2016 21:42:13 +0000 (UTC)","from annuminas.surriel.com (ovpn-116-20.rdu2.redhat.com\n\t[10.10.116.20])\n\tby int-mx11.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with\n\tESMTP id u6PLg5aJ025411\n\t(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256\n\tverify=NO); Mon, 25 Jul 2016 17:42:07 -0400"],"Message-ID":"<1469482923.30053.122.camel@redhat.com>","Subject":"Re: [PATCH v4 12/12] mm: SLUB hardened usercopy support","From":"Rik van Riel <riel@redhat.com>","To":"Laura Abbott <labbott@redhat.com>, Kees Cook <keescook@chromium.org>,\n\tkernel-hardening@lists.openwall.com","Date":"Mon, 25 Jul 2016 
17:42:03 -0400","In-Reply-To":"<0f980e84-b587-3d9e-3c26-ad57f947c08b@redhat.com>","References":"<1469046427-12696-1-git-send-email-keescook@chromium.org>\n\t<1469046427-12696-13-git-send-email-keescook@chromium.org>\n\t<0f980e84-b587-3d9e-3c26-ad57f947c08b@redhat.com>","Mime-Version":"1.0","X-Scanned-By":"MIMEDefang 2.68 on 10.5.11.24","X-Greylist":"Sender IP whitelisted, not delayed by milter-greylist-4.5.16\n\t(mx1.redhat.com [10.5.110.29]); Mon, 25 Jul 2016 21:42:14 +0000 (UTC)","X-BeenThere":"linuxppc-dev@lists.ozlabs.org","X-Mailman-Version":"2.1.22","Precedence":"list","List-Id":"Linux on PowerPC Developers Mail List\n\t<linuxppc-dev.lists.ozlabs.org>","List-Unsubscribe":"<https://lists.ozlabs.org/options/linuxppc-dev>,\n\t<mailto:linuxppc-dev-request@lists.ozlabs.org?subject=unsubscribe>","List-Archive":"<http://lists.ozlabs.org/pipermail/linuxppc-dev/>","List-Post":"<mailto:linuxppc-dev@lists.ozlabs.org>","List-Help":"<mailto:linuxppc-dev-request@lists.ozlabs.org?subject=help>","List-Subscribe":"<https://lists.ozlabs.org/listinfo/linuxppc-dev>,\n\t<mailto:linuxppc-dev-request@lists.ozlabs.org?subject=subscribe>","Cc":"Jan Kara <jack@suse.cz>, Will Deacon <will.deacon@arm.com>,\n\tlinux-mm@kvack.org, sparclinux@vger.kernel.org,\n\tlinux-ia64@vger.kernel.org, Christoph Lameter <cl@linux.com>,\n\tAndrea Arcangeli <aarcange@redhat.com>, \n\tlinux-arch@vger.kernel.org, x86@kernel.org,\n\tRussell King <linux@armlinux.org.uk>,\n\tlinux-arm-kernel@lists.infradead.org, \n\tCatalin Marinas <catalin.marinas@arm.com>,\n\tPaX Team <pageexec@freemail.hu>, \n\tBorislav Petkov <bp@suse.de>, Mathias Krause <minipli@googlemail.com>,\n\tFenghua Yu <fenghua.yu@intel.com>, David Rientjes <rientjes@google.com>, \n\tTony Luck <tony.luck@intel.com>, Andy Lutomirski <luto@kernel.org>,\n\tJosh Poimboeuf <jpoimboe@redhat.com>,\n\tJoonsoo Kim <iamjoonsoo.kim@lge.com>, \n\tDmitry Vyukov <dvyukov@google.com>,\n\tLaura Abbott <labbott@fedoraproject.org>, \n\tBrad Spengler 
<spender@grsecurity.net>,\n\tArd Biesheuvel <ard.biesheuvel@linaro.org>,\n\tlinux-kernel@vger.kernel.org, Pekka Enberg <penberg@kernel.org>,\n\tDaniel Micay <danielmicay@gmail.com>, \n\tCasey Schaufler <casey@schaufler-ca.com>,\n\tAndrew Morton <akpm@linux-foundation.org>,\n\tlinuxppc-dev@lists.ozlabs.org, \"David S. Miller\" <davem@davemloft.net>","Content-Type":"multipart/mixed;\n\tboundary=\"===============4733140868638272219==\"","Errors-To":"linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org","Sender":"\"Linuxppc-dev\"\n\t<linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org>"}},{"id":1416582,"web_url":"http://patchwork.ozlabs.org/comment/1416582/","msgid":"<9fca8a3c-da82-d609-79bb-4f5a779cbc1b@redhat.com>","date":"2016-07-25T23:29:51","subject":"Re: [PATCH v4 12/12] mm: SLUB hardened usercopy support","submitter":{"id":66322,"url":"http://patchwork.ozlabs.org/api/people/66322/","name":"Laura Abbott","email":"labbott@redhat.com"},"content":"On 07/25/2016 02:42 PM, Rik van Riel wrote:\n> On Mon, 2016-07-25 at 12:16 -0700, Laura Abbott wrote:\n>> On 07/20/2016 01:27 PM, Kees Cook wrote:\n>>> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to\n>>> the\n>>> SLUB allocator to catch any copies that may span objects. 
Includes\n>>> a\n>>> redzone handling fix discovered by Michael Ellerman.\n>>>\n>>> Based on code from PaX and grsecurity.\n>>>\n>>> Signed-off-by: Kees Cook <keescook@chromium.org>\n>>> Tested-by: Michael Ellerman <mpe@ellerman.id.au>\n>>> ---\n>>>  init/Kconfig |  1 +\n>>>  mm/slub.c    | 36 ++++++++++++++++++++++++++++++++++++\n>>>  2 files changed, 37 insertions(+)\n>>>\n>>> diff --git a/init/Kconfig b/init/Kconfig\n>>> index 798c2020ee7c..1c4711819dfd 100644\n>>> --- a/init/Kconfig\n>>> +++ b/init/Kconfig\n>>> @@ -1765,6 +1765,7 @@ config SLAB\n>>>\n>>>  config SLUB\n>>>  \tbool \"SLUB (Unqueued Allocator)\"\n>>> +\tselect HAVE_HARDENED_USERCOPY_ALLOCATOR\n>>>  \thelp\n>>>  \t   SLUB is a slab allocator that minimizes cache line\n>>> usage\n>>>  \t   instead of managing queues of cached objects (SLAB\n>>> approach).\n>>> diff --git a/mm/slub.c b/mm/slub.c\n>>> index 825ff4505336..7dee3d9a5843 100644\n>>> --- a/mm/slub.c\n>>> +++ b/mm/slub.c\n>>> @@ -3614,6 +3614,42 @@ void *__kmalloc_node(size_t size, gfp_t\n>>> flags, int node)\n>>>  EXPORT_SYMBOL(__kmalloc_node);\n>>>  #endif\n>>>\n>>> +#ifdef CONFIG_HARDENED_USERCOPY\n>>> +/*\n>>> + * Rejects objects that are incorrectly sized.\n>>> + *\n>>> + * Returns NULL if check passes, otherwise const char * to name of\n>>> cache\n>>> + * to indicate an error.\n>>> + */\n>>> +const char *__check_heap_object(const void *ptr, unsigned long n,\n>>> +\t\t\t\tstruct page *page)\n>>> +{\n>>> +\tstruct kmem_cache *s;\n>>> +\tunsigned long offset;\n>>> +\tsize_t object_size;\n>>> +\n>>> +\t/* Find object and usable object size. */\n>>> +\ts = page->slab_cache;\n>>> +\tobject_size = slab_ksize(s);\n>>> +\n>>> +\t/* Find offset within object. */\n>>> +\toffset = (ptr - page_address(page)) % s->size;\n>>> +\n>>> +\t/* Adjust for redzone and reject if within the redzone. 
*/\n>>> +\tif (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE) {\n>>> +\t\tif (offset < s->red_left_pad)\n>>> +\t\t\treturn s->name;\n>>> +\t\toffset -= s->red_left_pad;\n>>> +\t}\n>>> +\n>>> +\t/* Allow address range falling entirely within object\n>>> size. */\n>>> +\tif (offset <= object_size && n <= object_size - offset)\n>>> +\t\treturn NULL;\n>>> +\n>>> +\treturn s->name;\n>>> +}\n>>> +#endif /* CONFIG_HARDENED_USERCOPY */\n>>> +\n>>\n>> I compared this against what check_valid_pointer does for SLUB_DEBUG\n>> checking. I was hoping we could utilize that function to avoid\n>> duplication but a) __check_heap_object needs to allow accesses\n>> anywhere\n>> in the object, not just the beginning b) accessing page->objects\n>> is racy without the addition of locking in SLUB_DEBUG.\n>>\n>> Still, the ptr < page_address(page) check from __check_heap_object\n>> would\n>> be good to add to avoid generating garbage large offsets and trying\n>> to\n>> infer C math.\n>>\n>> diff --git a/mm/slub.c b/mm/slub.c\n>> index 7dee3d9..5370e4f 100644\n>> --- a/mm/slub.c\n>> +++ b/mm/slub.c\n>> @@ -3632,6 +3632,9 @@ const char *__check_heap_object(const void\n>> *ptr, unsigned long n,\n>>          s = page->slab_cache;\n>>          object_size = slab_ksize(s);\n>>\n>> +       if (ptr < page_address(page))\n>> +               return s->name;\n>> +\n>>          /* Find offset within object. */\n>>          offset = (ptr - page_address(page)) % s->size;\n>>\n>\n> I don't get it, isn't that already guaranteed because we\n> look for the page that ptr is in, before __check_heap_object\n> is called?\n>\n> Specifically, in patch 3/12:\n>\n> +       page = virt_to_head_page(ptr);\n> +\n> +       /* Check slab allocator for flags and size. */\n> +       if (PageSlab(page))\n> +               return __check_heap_object(ptr, n, page);\n>\n> How can that generate a ptr that is not inside the page?\n>\n> What am I overlooking?  And, should it be in the changelog or\n> a comment? 
:)\n>\n\n\nI ran into the subtraction issue when the vmalloc detection wasn't\nworking on ARM64, somehow virt_to_head_page turned into a page\nthat happened to have PageSlab set. I agree if everything is working\nproperly this is redundant but given the type of feature this is, a\nlittle bit of redundancy against a system running off into the weeds\nor bad patches might be warranted.\n\nI'm not super attached to the check if other maintainers think it\nis redundant. Updating the __check_heap_object header comment\nwith a note of what we are assuming could work\n\nThanks,\nLaura","headers":{"Return-Path":"<linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org>","X-Original-To":["patchwork-incoming@ozlabs.org","linuxppc-dev@lists.ozlabs.org"],"Delivered-To":["patchwork-incoming@ozlabs.org","linuxppc-dev@lists.ozlabs.org"],"Received":["from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3])\n\t(using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))\n\t(No client certificate requested)\n\tby ozlabs.org (Postfix) with ESMTPS id 3ryyDl1s3Kz9s9c\n\tfor <patchwork-incoming@ozlabs.org>;\n\tTue, 26 Jul 2016 09:31:03 +1000 (AEST)","from ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3])\n\tby lists.ozlabs.org (Postfix) with ESMTP id 3ryyDl04dFzDrLL\n\tfor <patchwork-incoming@ozlabs.org>;\n\tTue, 26 Jul 2016 09:31:03 +1000 (AEST)","from mail-it0-f47.google.com (mail-it0-f47.google.com\n\t[209.85.214.47])\n\t(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128\n\tbits)) (No client certificate requested)\n\tby lists.ozlabs.org (Postfix) with ESMTPS id 3ryyCY4bsGzDqgy\n\tfor <linuxppc-dev@lists.ozlabs.org>;\n\tTue, 26 Jul 2016 09:29:59 +1000 (AEST)","by mail-it0-f47.google.com with SMTP id u186so98296532ita.0\n\tfor <linuxppc-dev@lists.ozlabs.org>;\n\tMon, 25 Jul 2016 16:29:59 -0700 (PDT)","from ?IPv6:2601:602:9800:177f::337f? 
([2601:602:9800:177f::337f])\n\tby smtp.gmail.com with ESMTPSA id\n\t140sm8207513itl.4.2016.07.25.16.29.52\n\t(version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);\n\tMon, 25 Jul 2016 16:29:55 -0700 (PDT)"],"X-Google-DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=1e100.net; s=20130820;\n\th=x-gm-message-state:subject:to:references:cc:from:message-id:date\n\t:user-agent:mime-version:in-reply-to:content-transfer-encoding;\n\tbh=PIwzk+OsU9IfR8D7ECF9hyO6unfjxQ04xq/Gj8NMqqc=;\n\tb=acm6FdYzVla2F5vo1tip4pCw2VXsGv/ZPIIISeRM67UGJSTC2vedsInOn0TFmzZCGM\n\tLoHENaO9YLKLCi9eOi+SiJmh7QrAqKQ7D0sBFTGT8iPp1CG9Cqbpp1TMWRS58/4k4lol\n\tj7F2ExN17KrkloOCewJ7yzf02tatyHklfFqB47CtY0EF0bxRTP4Y87eKLfm7qrK6bKiG\n\tj0vCBPQYH1CKosAbsvM4VB4YMBEma4muTdsSyMoZ0hsyqRkqUzSxiMQ5KdJV9yo121pK\n\tGVDVVJZXi+ysu1nSb46LMYaJPbMOfTuNYN2OWGzV/dSgwly/4uaOiBhMJtPmFkD35skh\n\ttCLg==","X-Gm-Message-State":"ALyK8tINsLeHlnUTeEW1yIP62yS7Bd0uoTZ3ZN0o/yJ6dW4iUXYAHlEjx1o1lVVcTM/n9TSb","X-Received":"by 10.36.103.214 with SMTP id u205mr31606997itc.88.1469489397045;\n\tMon, 25 Jul 2016 16:29:57 -0700 (PDT)","Subject":"Re: [PATCH v4 12/12] mm: SLUB hardened usercopy support","To":"Rik van Riel <riel@redhat.com>, Kees Cook <keescook@chromium.org>,\n\tkernel-hardening@lists.openwall.com","References":"<1469046427-12696-1-git-send-email-keescook@chromium.org>\n\t<1469046427-12696-13-git-send-email-keescook@chromium.org>\n\t<0f980e84-b587-3d9e-3c26-ad57f947c08b@redhat.com>\n\t<1469482923.30053.122.camel@redhat.com>","From":"Laura Abbott <labbott@redhat.com>","Message-ID":"<9fca8a3c-da82-d609-79bb-4f5a779cbc1b@redhat.com>","Date":"Mon, 25 Jul 2016 16:29:51 -0700","User-Agent":"Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101\n\tThunderbird/45.1.1","MIME-Version":"1.0","In-Reply-To":"<1469482923.30053.122.camel@redhat.com>","X-BeenThere":"linuxppc-dev@lists.ozlabs.org","X-Mailman-Version":"2.1.22","Precedence":"list","List-Id":"Linux on PowerPC Developers Mail 
List\n\t<linuxppc-dev.lists.ozlabs.org>","List-Unsubscribe":"<https://lists.ozlabs.org/options/linuxppc-dev>,\n\t<mailto:linuxppc-dev-request@lists.ozlabs.org?subject=unsubscribe>","List-Archive":"<http://lists.ozlabs.org/pipermail/linuxppc-dev/>","List-Post":"<mailto:linuxppc-dev@lists.ozlabs.org>","List-Help":"<mailto:linuxppc-dev-request@lists.ozlabs.org?subject=help>","List-Subscribe":"<https://lists.ozlabs.org/listinfo/linuxppc-dev>,\n\t<mailto:linuxppc-dev-request@lists.ozlabs.org?subject=subscribe>","Cc":"Jan Kara <jack@suse.cz>, Will Deacon <will.deacon@arm.com>,\n\tlinux-mm@kvack.org, sparclinux@vger.kernel.org,\n\tlinux-ia64@vger.kernel.org, Christoph Lameter <cl@linux.com>,\n\tAndrea Arcangeli <aarcange@redhat.com>, \n\tlinux-arch@vger.kernel.org, x86@kernel.org,\n\tRussell King <linux@armlinux.org.uk>,\n\tlinux-arm-kernel@lists.infradead.org, \n\tCatalin Marinas <catalin.marinas@arm.com>,\n\tPaX Team <pageexec@freemail.hu>, \n\tBorislav Petkov <bp@suse.de>, Mathias Krause <minipli@googlemail.com>,\n\tFenghua Yu <fenghua.yu@intel.com>, David Rientjes <rientjes@google.com>, \n\tTony Luck <tony.luck@intel.com>, Andy Lutomirski <luto@kernel.org>,\n\tJosh Poimboeuf <jpoimboe@redhat.com>,\n\tJoonsoo Kim <iamjoonsoo.kim@lge.com>, \n\tDmitry Vyukov <dvyukov@google.com>,\n\tLaura Abbott <labbott@fedoraproject.org>, \n\tBrad Spengler <spender@grsecurity.net>,\n\tArd Biesheuvel <ard.biesheuvel@linaro.org>,\n\tlinux-kernel@vger.kernel.org, Pekka Enberg <penberg@kernel.org>,\n\tDaniel Micay <danielmicay@gmail.com>, \n\tCasey Schaufler <casey@schaufler-ca.com>,\n\tAndrew Morton <akpm@linux-foundation.org>,\n\tlinuxppc-dev@lists.ozlabs.org, \"David S. 
Miller\" <davem@davemloft.net>","Content-Transfer-Encoding":"base64","Content-Type":"text/plain; charset=\"utf-8\"; Format=\"flowed\"","Errors-To":"linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org","Sender":"\"Linuxppc-dev\"\n\t<linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org>"}},{"id":1416629,"web_url":"http://patchwork.ozlabs.org/comment/1416629/","msgid":"<1469492520.30053.123.camel@redhat.com>","date":"2016-07-26T00:22:00","subject":"Re: [PATCH v4 12/12] mm: SLUB hardened usercopy support","submitter":{"id":781,"url":"http://patchwork.ozlabs.org/api/people/781/","name":"Rik van Riel","email":"riel@redhat.com"},"content":"On Mon, 2016-07-25 at 16:29 -0700, Laura Abbott wrote:\n> On 07/25/2016 02:42 PM, Rik van Riel wrote:\n> > On Mon, 2016-07-25 at 12:16 -0700, Laura Abbott wrote:\n> > > On 07/20/2016 01:27 PM, Kees Cook wrote:\n> > > > Under CONFIG_HARDENED_USERCOPY, this adds object size checking\n> > > > to\n> > > > the\n> > > > SLUB allocator to catch any copies that may span objects.\n> > > > Includes\n> > > > a\n> > > > redzone handling fix discovered by Michael Ellerman.\n> > > > \n> > > > Based on code from PaX and grsecurity.\n> > > > \n> > > > Signed-off-by: Kees Cook <keescook@chromium.org>\n> > > > Tested-by: Michael Ellerman <mpe@ellerman.id.au>\n> > > > ---\n> > > >  init/Kconfig |  1 +\n> > > >  mm/slub.c    | 36 ++++++++++++++++++++++++++++++++++++\n> > > >  2 files changed, 37 insertions(+)\n> > > > \n> > > > diff --git a/init/Kconfig b/init/Kconfig\n> > > > index 798c2020ee7c..1c4711819dfd 100644\n> > > > --- a/init/Kconfig\n> > > > +++ b/init/Kconfig\n> > > > @@ -1765,6 +1765,7 @@ config SLAB\n> > > > \n> > > >  config SLUB\n> > > >  \tbool \"SLUB (Unqueued Allocator)\"\n> > > > +\tselect HAVE_HARDENED_USERCOPY_ALLOCATOR\n> > > >  \thelp\n> > > >  \t   SLUB is a slab allocator that minimizes cache line\n> > > > usage\n> > > >  \t   instead of managing queues of cached objects (SLAB\n> > > > approach).\n> 
> > > diff --git a/mm/slub.c b/mm/slub.c\n> > > > index 825ff4505336..7dee3d9a5843 100644\n> > > > --- a/mm/slub.c\n> > > > +++ b/mm/slub.c\n> > > > @@ -3614,6 +3614,42 @@ void *__kmalloc_node(size_t size, gfp_t\n> > > > flags, int node)\n> > > >  EXPORT_SYMBOL(__kmalloc_node);\n> > > >  #endif\n> > > > \n> > > > +#ifdef CONFIG_HARDENED_USERCOPY\n> > > > +/*\n> > > > + * Rejects objects that are incorrectly sized.\n> > > > + *\n> > > > + * Returns NULL if check passes, otherwise const char * to\n> > > > name of\n> > > > cache\n> > > > + * to indicate an error.\n> > > > + */\n> > > > +const char *__check_heap_object(const void *ptr, unsigned long\n> > > > n,\n> > > > +\t\t\t\tstruct page *page)\n> > > > +{\n> > > > +\tstruct kmem_cache *s;\n> > > > +\tunsigned long offset;\n> > > > +\tsize_t object_size;\n> > > > +\n> > > > +\t/* Find object and usable object size. */\n> > > > +\ts = page->slab_cache;\n> > > > +\tobject_size = slab_ksize(s);\n> > > > +\n> > > > +\t/* Find offset within object. */\n> > > > +\toffset = (ptr - page_address(page)) % s->size;\n> > > > +\n> > > > +\t/* Adjust for redzone and reject if within the\n> > > > redzone. */\n> > > > +\tif (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE) {\n> > > > +\t\tif (offset < s->red_left_pad)\n> > > > +\t\t\treturn s->name;\n> > > > +\t\toffset -= s->red_left_pad;\n> > > > +\t}\n> > > > +\n> > > > +\t/* Allow address range falling entirely within object\n> > > > size. */\n> > > > +\tif (offset <= object_size && n <= object_size -\n> > > > offset)\n> > > > +\t\treturn NULL;\n> > > > +\n> > > > +\treturn s->name;\n> > > > +}\n> > > > +#endif /* CONFIG_HARDENED_USERCOPY */\n> > > > +\n> > > \n> > > I compared this against what check_valid_pointer does for\n> > > SLUB_DEBUG\n> > > checking. 
I was hoping we could utilize that function to avoid\n> > > duplication but a) __check_heap_object needs to allow accesses\n> > > anywhere\n> > > in the object, not just the beginning b) accessing page->objects\n> > > is racy without the addition of locking in SLUB_DEBUG.\n> > > \n> > > Still, the ptr < page_address(page) check from\n> > > __check_heap_object\n> > > would\n> > > be good to add to avoid generating garbage large offsets and\n> > > trying\n> > > to\n> > > infer C math.\n> > > \n> > > diff --git a/mm/slub.c b/mm/slub.c\n> > > index 7dee3d9..5370e4f 100644\n> > > --- a/mm/slub.c\n> > > +++ b/mm/slub.c\n> > > @@ -3632,6 +3632,9 @@ const char *__check_heap_object(const void\n> > > *ptr, unsigned long n,\n> > >          s = page->slab_cache;\n> > >          object_size = slab_ksize(s);\n> > > \n> > > +       if (ptr < page_address(page))\n> > > +               return s->name;\n> > > +\n> > >          /* Find offset within object. */\n> > >          offset = (ptr - page_address(page)) % s->size;\n> > > \n> > \n> > I don't get it, isn't that already guaranteed because we\n> > look for the page that ptr is in, before __check_heap_object\n> > is called?\n> > \n> > Specifically, in patch 3/12:\n> > \n> > +       page = virt_to_head_page(ptr);\n> > +\n> > +       /* Check slab allocator for flags and size. */\n> > +       if (PageSlab(page))\n> > +               return __check_heap_object(ptr, n, page);\n> > \n> > How can that generate a ptr that is not inside the page?\n> > \n> > What am I overlooking?  And, should it be in the changelog or\n> > a comment? :)\n> > \n> \n> \n> I ran into the subtraction issue when the vmalloc detection wasn't\n> working on ARM64, somehow virt_to_head_page turned into a page\n> that happened to have PageSlab set. 
I agree if everything is working\n> properly this is redundant but given the type of feature this is, a\n> little bit of redundancy against a system running off into the weeds\n> or bad patches might be warranted.\n> \nThat's fair.  I have no objection to the check, but would\nlike to see it documented, since it does look a little out\nof place.","headers":{"Message-ID":"<1469492520.30053.123.camel@redhat.com>","Subject":"Re: [PATCH v4 12/12] mm: SLUB hardened usercopy support","From":"Rik van Riel <riel@redhat.com>","To":"Laura Abbott <labbott@redhat.com>, Kees Cook <keescook@chromium.org>,\n\tkernel-hardening@lists.openwall.com","Date":"Mon, 25 Jul 2016 20:22:00 -0400","In-Reply-To":"<9fca8a3c-da82-d609-79bb-4f5a779cbc1b@redhat.com>"}},{"id":1416680,"web_url":"http://patchwork.ozlabs.org/comment/1416680/","msgid":"<e19dd469-37c9-846f-43a6-f28ab0c6c547@redhat.com>","date":"2016-07-26T00:54:25","subject":"Re: [PATCH v4 12/12] mm: SLUB hardened usercopy support","submitter":{"id":66322,"url":"http://patchwork.ozlabs.org/api/people/66322/","name":"Laura Abbott","email":"labbott@redhat.com"},"content":"On 07/25/2016 01:45 PM, Kees Cook wrote:\n> On Mon, Jul 25, 2016 at 12:16 PM, Laura Abbott <labbott@redhat.com> wrote:\n>> On 07/20/2016 01:27 PM, Kees Cook wrote:\n>>>\n>>> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the\n>>> SLUB allocator to catch any copies that may span objects. 
Includes a\n>>> redzone handling fix discovered by Michael Ellerman.\n>>>\n>>> Based on code from PaX and grsecurity.\n>>>\n>>> Signed-off-by: Kees Cook <keescook@chromium.org>\n>>> Tested-by: Michael Ellerman <mpe@ellerman.id.au>\n>>> ---\n>>>  init/Kconfig |  1 +\n>>>  mm/slub.c    | 36 ++++++++++++++++++++++++++++++++++++\n>>>  2 files changed, 37 insertions(+)\n>>>\n>>> diff --git a/init/Kconfig b/init/Kconfig\n>>> index 798c2020ee7c..1c4711819dfd 100644\n>>> --- a/init/Kconfig\n>>> +++ b/init/Kconfig\n>>> @@ -1765,6 +1765,7 @@ config SLAB\n>>>\n>>>  config SLUB\n>>>         bool \"SLUB (Unqueued Allocator)\"\n>>> +       select HAVE_HARDENED_USERCOPY_ALLOCATOR\n>>>         help\n>>>            SLUB is a slab allocator that minimizes cache line usage\n>>>            instead of managing queues of cached objects (SLAB approach).\n>>> diff --git a/mm/slub.c b/mm/slub.c\n>>> index 825ff4505336..7dee3d9a5843 100644\n>>> --- a/mm/slub.c\n>>> +++ b/mm/slub.c\n>>> @@ -3614,6 +3614,42 @@ void *__kmalloc_node(size_t size, gfp_t flags, int\n>>> node)\n>>>  EXPORT_SYMBOL(__kmalloc_node);\n>>>  #endif\n>>>\n>>> +#ifdef CONFIG_HARDENED_USERCOPY\n>>> +/*\n>>> + * Rejects objects that are incorrectly sized.\n>>> + *\n>>> + * Returns NULL if check passes, otherwise const char * to name of cache\n>>> + * to indicate an error.\n>>> + */\n>>> +const char *__check_heap_object(const void *ptr, unsigned long n,\n>>> +                               struct page *page)\n>>> +{\n>>> +       struct kmem_cache *s;\n>>> +       unsigned long offset;\n>>> +       size_t object_size;\n>>> +\n>>> +       /* Find object and usable object size. */\n>>> +       s = page->slab_cache;\n>>> +       object_size = slab_ksize(s);\n>>> +\n>>> +       /* Find offset within object. */\n>>> +       offset = (ptr - page_address(page)) % s->size;\n>>> +\n>>> +       /* Adjust for redzone and reject if within the redzone. 
*/\n>>> +       if (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE) {\n>>> +               if (offset < s->red_left_pad)\n>>> +                       return s->name;\n>>> +               offset -= s->red_left_pad;\n>>> +       }\n>>> +\n>>> +       /* Allow address range falling entirely within object size. */\n>>> +       if (offset <= object_size && n <= object_size - offset)\n>>> +               return NULL;\n>>> +\n>>> +       return s->name;\n>>> +}\n>>> +#endif /* CONFIG_HARDENED_USERCOPY */\n>>> +\n>>\n>>\n>> I compared this against what check_valid_pointer does for SLUB_DEBUG\n>> checking. I was hoping we could utilize that function to avoid\n>> duplication but a) __check_heap_object needs to allow accesses anywhere\n>> in the object, not just the beginning b) accessing page->objects\n>> is racy without the addition of locking in SLUB_DEBUG.\n>>\n>> Still, the ptr < page_address(page) check from __check_heap_object would\n>> be good to add to avoid generating garbage large offsets and trying to\n>> infer C math.\n>>\n>> diff --git a/mm/slub.c b/mm/slub.c\n>> index 7dee3d9..5370e4f 100644\n>> --- a/mm/slub.c\n>> +++ b/mm/slub.c\n>> @@ -3632,6 +3632,9 @@ const char *__check_heap_object(const void *ptr,\n>> unsigned long n,\n>>         s = page->slab_cache;\n>>         object_size = slab_ksize(s);\n>>  +       if (ptr < page_address(page))\n>> +               return s->name;\n>> +\n>>         /* Find offset within object. 
*/\n>>         offset = (ptr - page_address(page)) % s->size;\n>>\n>> With that, you can add\n>>\n>> Reviewed-by: Laura Abbott <labbott@redhat.com>\n>\n> Cool, I'll add that.\n>\n> Should I add your reviewed-by for this patch only or for the whole series?\n>\n> Thanks!\n>\n> -Kees\n>\n\nJust this patch for now, I'm working through a couple of others\n\n>>\n>>>  static size_t __ksize(const void *object)\n>>>  {\n>>>         struct page *page;\n>>>\n>>\n>> Thanks,\n>> Laura\n>\n>\n>","headers":{"Message-ID":"<e19dd469-37c9-846f-43a6-f28ab0c6c547@redhat.com>","Subject":"Re: [PATCH v4 12/12] mm: SLUB hardened usercopy support","From":"Laura Abbott <labbott@redhat.com>","To":"Kees Cook <keescook@chromium.org>","Date":"Mon, 25 Jul 2016 17:54:25 -0700","In-Reply-To":"<CAGXu5jJQQNDMK7vxKqhwQukL771uOT-2No7fnNreQVLWs8UBMA@mail.gmail.com>"}}]