From patchwork Thu Feb 13 12:01:02 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Eyal Itkin
X-Patchwork-Id: 1237443
From: Eyal Itkin
Date: Thu, 13 Feb 2020 14:01:02 +0200
To: devel@uclibc-ng.org
Subject: [uclibc-ng-devel] [PATCH] Add Safe-Linking to Fast-Bins
List-Id: uClibc-ng Development
Sender: "devel"

Safe-Linking is a security mechanism that protects single-linked lists (such as
the fastbins) from being tampered with by attackers. The mechanism makes use
of randomness from ASLR (mmap_base), and when combined with chunk alignment
integrity checks, it protects the pointers from being hijacked by an attacker.

While Safe-Unlinking protects double-linked lists (such as the small bins),
there wasn't any similar protection for attacks against single-linked lists.
This solution protects against 3 common attacks:
  * Partial pointer override: modifies the lower bytes (Little Endian)
  * Full pointer override: hijacks the pointer to an attacker's location
  * Unaligned chunks: points the list at an unaligned address

The design assumes an attacker doesn't know where the heap is located, and
uses the ASLR randomness to "sign" the single-linked pointers. We mark the
pointer as P and the location in which it is stored as L, and the
calculation is:
  * PROTECT(P) := (L >> PAGE_SHIFT) XOR (P)
  * *L = PROTECT(P)

This way, the random bits from the address L (which start at the PAGE_SHIFT
bit position) are mixed into the LSBs of the stored protected pointer. This
protection layer prevents an attacker from overwriting the pointer with a
controlled value.

An additional check that the chunks are MALLOC_ALIGNed adds an important
layer:
  * Attackers can't point the list at illegal (unaligned) memory addresses
  * Attackers must correctly guess the alignment bits

On standard 32-bit Linux machines this check alone makes an attacker fail 7
out of 8 times, and on 64-bit machines 15 out of 16 times.

The proposed solution adds 3-4 asm instructions per malloc()/free() and
therefore has only minor performance implications, if any. A similar
protection was added to Chromium's version of TCMalloc in 2013, and
according to their documentation the performance overhead was less than 2%.
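To make the mechanism concrete, here is a small stand-alone C sketch
(illustrative only, not part of the patch; PAGE_SHIFT=12 and the
2*sizeof(size_t) alignment mask are assumed typical values, and the
variable names are hypothetical):

/* Stand-alone illustration of Safe-Linking (assumed values, not the
 * actual uClibc-ng code): protect a pointer with the address it is
 * stored at, reveal it again, and catch a tampered stored value via
 * the alignment check. */
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SHIFT 12                               /* assumed 4K pages  */
#define MALLOC_ALIGN_MASK (2 * sizeof(size_t) - 1)  /* typical alignment */

/* PROTECT(P) := (L >> PAGE_SHIFT) XOR (P); XOR makes REVEAL its own
 * inverse when given the same storage location L */
#define PROTECT_PTR(pos, ptr) \
    ((void *)((((size_t)(pos)) >> PAGE_SHIFT) ^ ((size_t)(ptr))))
#define REVEAL_PTR(pos, ptr) PROTECT_PTR(pos, ptr)
#define ALIGNED_OK(p) (((size_t)(p) & MALLOC_ALIGN_MASK) == 0)

int main(void)
{
    void *next = malloc(64);      /* pointer P to be stored in the list */
    void **fd_slot = malloc(64);  /* location L, stands in for &p->fd   */

    /* *L = PROTECT(P), as free() does when pushing onto a fastbin */
    *fd_slot = PROTECT_PTR(fd_slot, next);
    assert(*fd_slot != next);     /* stored value is masked by ASLR bits */

    /* revealing with the same location L recovers P exactly */
    assert(REVEAL_PTR(fd_slot, *fd_slot) == next);

    /* a partial override of the stored bytes reveals to an unaligned
     * pointer, which a CHECK_PTR-style alignment test catches */
    *fd_slot = (void *)((size_t)*fd_slot ^ 0x5a);
    if (!ALIGNED_OK(REVEAL_PTR(fd_slot, *fd_slot)))
        puts("tampering detected: revealed pointer is misaligned");

    return 0;
}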
For more information, feel free to check out our White Paper, which can be
found here:
https://github.com/gperftools/gperftools/files/4023520/Safe-Linking-White-Paper.txt
---
 libc/stdlib/malloc-standard/free.c     |  5 +++--
 libc/stdlib/malloc-standard/mallinfo.c |  3 ++-
 libc/stdlib/malloc-standard/malloc.c   |  6 ++++--
 libc/stdlib/malloc-standard/malloc.h   | 12 ++++++++++++
 4 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/libc/stdlib/malloc-standard/free.c b/libc/stdlib/malloc-standard/free.c
index a2d765d41..f3602cf48 100644
--- a/libc/stdlib/malloc-standard/free.c
+++ b/libc/stdlib/malloc-standard/free.c
@@ -214,8 +214,9 @@ void attribute_hidden __malloc_consolidate(mstate av)
 	    *fb = 0;
 
 	    do {
+		CHECK_PTR(p);
 		check_inuse_chunk(p);
-		nextp = p->fd;
+		nextp = REVEAL_PTR(&p->fd, p->fd);
 
 		/* Slightly streamlined version of consolidation code in free() */
 		size = p->size & ~PREV_INUSE;
@@ -308,7 +309,7 @@ void free(void* mem)
 
 	set_fastchunks(av);
 	fb = &(av->fastbins[fastbin_index(size)]);
-	p->fd = *fb;
+	p->fd = PROTECT_PTR(&p->fd, *fb);
 	*fb = p;
     }
 
diff --git a/libc/stdlib/malloc-standard/mallinfo.c b/libc/stdlib/malloc-standard/mallinfo.c
index dbe4d49b8..992322341 100644
--- a/libc/stdlib/malloc-standard/mallinfo.c
+++ b/libc/stdlib/malloc-standard/mallinfo.c
@@ -49,7 +49,8 @@ struct mallinfo mallinfo(void)
     fastavail = 0;
 
     for (i = 0; i < NFASTBINS; ++i) {
-	for (p = av->fastbins[i]; p != 0; p = p->fd) {
+	for (p = av->fastbins[i]; p != 0; p = REVEAL_PTR(&p->fd, p->fd)) {
+	    CHECK_PTR(p);
 	    ++nfastblocks;
 	    fastavail += chunksize(p);
 	}
diff --git a/libc/stdlib/malloc-standard/malloc.c b/libc/stdlib/malloc-standard/malloc.c
index 1a6d4dc1c..1f898eb29 100644
--- a/libc/stdlib/malloc-standard/malloc.c
+++ b/libc/stdlib/malloc-standard/malloc.c
@@ -260,12 +260,13 @@ void __do_check_malloc_state(void)
 	    assert(p == 0);
 
 	while (p != 0) {
+	    CHECK_PTR(p);
 	    /* each chunk claims to be inuse */
 	    __do_check_inuse_chunk(p);
 	    total += chunksize(p);
 	    /* chunk belongs in this bin */
 	    assert(fastbin_index(chunksize(p)) == i);
-	    p = p->fd;
+	    p = REVEAL_PTR(&p->fd, p->fd);
 	}
     }
 
@@ -855,7 +856,8 @@ void* malloc(size_t bytes)
     if ((unsigned long)(nb) <= (unsigned long)(av->max_fast)) {
 	fb = &(av->fastbins[(fastbin_index(nb))]);
 	if ( (victim = *fb) != 0) {
-	    *fb = victim->fd;
+	    CHECK_PTR(victim);
+	    *fb = REVEAL_PTR(&victim->fd, victim->fd);
 	    check_remalloced_chunk(victim, nb);
 	    retval = chunk2mem(victim);
 	    goto DONE;
diff --git a/libc/stdlib/malloc-standard/malloc.h b/libc/stdlib/malloc-standard/malloc.h
index 44120d388..30a696e5a 100644
--- a/libc/stdlib/malloc-standard/malloc.h
+++ b/libc/stdlib/malloc-standard/malloc.h
@@ -839,6 +839,18 @@ typedef struct malloc_chunk* mfastbinptr;
 #define get_max_fast(M) \
     ((M)->max_fast & ~(FASTCHUNKS_BIT | ANYCHUNKS_BIT))
 
+/*
+  Safe-Linking:
+  Use randomness from ASLR (mmap_base) to protect single-linked lists
+  of fastbins. Together with allocation alignment checks, this mechanism
+  reduces the risk of pointer hijacking, as was done with Safe-Unlinking
+  in the double-linked lists of smallbins.
+*/
+#define PROTECT_PTR(pos, ptr) ((mchunkptr)((((size_t)pos) >> PAGE_SHIFT) ^ ((size_t)ptr)))
+#define REVEAL_PTR(pos, ptr)  PROTECT_PTR(pos, ptr)
+#define CHECK_PTR(P) \
+    if (!aligned_OK(P)) \
+	abort();
 /*
   morecore_properties is a status word holding dynamically discovered
-- 
2.17.1
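For completeness, here is a minimal sketch of the protect-on-push /
reveal-on-walk pattern that the hunks above apply in free(), malloc(), and
mallinfo(). It is illustrative only: struct chunk is a simplified stand-in
for struct malloc_chunk, and PAGE_SHIFT is assumed to be 12.

#include <stdio.h>
#include <stddef.h>

#define PAGE_SHIFT 12  /* assumed; the real value comes from the target */
#define PROTECT_PTR(pos, ptr) \
    ((struct chunk *)((((size_t)(pos)) >> PAGE_SHIFT) ^ ((size_t)(ptr))))
#define REVEAL_PTR(pos, ptr) PROTECT_PTR(pos, ptr)

struct chunk { struct chunk *fd; };  /* simplified malloc_chunk stand-in */

int main(void)
{
    struct chunk *bin = NULL;  /* fastbin head, stands in for *fb */
    struct chunk nodes[3];

    /* push, as in free(): p->fd = PROTECT_PTR(&p->fd, *fb); *fb = p; */
    for (int i = 0; i < 3; i++) {
        nodes[i].fd = PROTECT_PTR(&nodes[i].fd, bin);
        bin = &nodes[i];
    }

    /* walk, as in mallinfo(): p = REVEAL_PTR(&p->fd, p->fd); note that
     * the NULL terminator round-trips too, since XOR is self-inverse */
    int n = 0;
    for (struct chunk *p = bin; p != NULL; p = REVEAL_PTR(&p->fd, p->fd))
        n++;

    printf("walked %d chunks\n", n);  /* prints: walked 3 chunks */
    return 0;
}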