From patchwork Tue Oct 17 19:01:10 2017
X-Patchwork-Submitter: Moritz Eckert
X-Patchwork-Id: 827249
From: Moritz Eckert
Subject: [PATCH] malloc/malloc.c: Mitigate null-byte overflow attacks
To: libc-alpha@sourceware.org
Cc: scarybeasts@gmail.com, fweimer@redhat.com
Message-ID: <84f0d6ec-0e51-2f7a-db9c-3d3a8971fbc4@cs.ucsb.edu>
Date: Tue, 17 Oct 2017 12:01:10 -0700

Hi,

I want to propose a new check against one-null-byte overflow attacks in the malloc implementation.
I believe that the patch I am proposing completely fixes this issue without incurring any additional overhead. Here are the technical details:

Multiple security checks have been introduced in the past to assert metadata consistency. One in particular, committed by Chris Evans (17f487b7afa7cd6c316040f3e6c86dc96b2eec30), targets one-null-byte overflows [1]. That patch tries to prevent malicious chunk consolidation by verifying that a chunk's prev_size is equal to the size of the corresponding previous chunk. Specifically, the check was added to the unlink macro, where it asserts that size and prev_size are equal.

This check is absolutely correct and certainly adds constraints to any unlink attack. However, one has to consider that it can only aim for one-null-byte overflows, because any unlink against a fake chunk in controlled memory gives the attacker enough control to circumvent this and any similar check. (Note that the attacker needs to control a particular memory location to set the prev_size; however, it is relatively common to be able to craft the heap layout accordingly. Furthermore, given a one-null-byte overflow, it is very likely that there is enough controlled memory to place the correct prev_size at the required offset.)

Given a one-null-byte overflow, someone has already figured out how to circumvent Chris Evans's patch by setting the correct prev_size [4]. Ultimately, this means the current check fails to prevent any currently known attack exploiting a one-null-byte overflow.

I tried to analyze why the patch fails to prevent this attack completely. The reason boils down to the fact that the unlink procedure uses the incorrect (malicious) prev_size to go back in memory and then calls unlink on the wrong (no longer the previous) chunk. The check then verifies that size and prev_size match starting from the wrong chunk, which will always hold true.
The problem is that there is no check verifying prev_size against the previous chunk's size before going back in memory. The existing check is correct in itself, but it runs when it is already too late.

The patch I propose contains a similar check, but before the unlink call. Since this is highly performance-critical code, I assumed people would refrain from adding more checks, so I placed my check as a replacement for the old one. The new check prevents any one-null-byte overflow attack and only falls short against unlink attacks on fully controlled fake chunks. Since the old check aims for the exact same goal but fails to prevent it completely, the new check can be seen as a superset of the old one. However, if the performance impact is not considered too high, my check would also work as an addition to the current one.

To sum up: I believe my patch protects completely against one-null-byte overflows, while Chris Evans's check is bypassable, and I do not think it adds any performance overhead.

Cheers,
Moritz

[1] https://scarybeastsecurity.blogspot.com/2017/05/further-hardening-glibc-malloc-against.html
[2] https://googleprojectzero.blogspot.com/2014/08/the-poisoned-nul-byte-2014-edition.html
[3] https://github.com/shellphish/how2heap/blob/master/poison_null_byte.c
[4] https://github.com/shellphish/how2heap/commit/a666abbd1306fb3311144ca7ccfcdba939e77744#diff-196ce929986c83c72e376d5612656d3b

diff --git a/ChangeLog b/ChangeLog
index 5effbd7956..f7a76220d2 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,12 @@
+2017-10-13  Moritz Eckert
+
+	* malloc/malloc.c: (_int_free): Compare prev_size and prev->size
+	before performing the backward unlink, instead of going back in
+	memory first. This prevents forgotten update type of attacks.
+
+	* malloc/malloc.c: (unlink): Remove the current check to have no
+	performance impact.
+
 2017-10-13  James Clarke

 	* sysdeps/powerpc/powerpc32/dl-machine.h (elf_machine_rela):

diff --git a/malloc/malloc.c b/malloc/malloc.c
index d3fcadd20e..c421e778da 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -1395,8 +1395,6 @@ typedef struct malloc_chunk *mbinptr;

 /* Take a chunk off a bin list */
 #define unlink(AV, P, BK, FD) {                                            \
-    if (__builtin_expect (chunksize(P) != prev_size (next_chunk(P)), 0))      \
-      malloc_printerr ("corrupted size vs. prev_size");		      \
     FD = P->fd;								      \
     BK = P->bk;								      \
     if (__builtin_expect (FD->bk != P || BK->fd != P, 0))		      \
@@ -4227,6 +4225,8 @@ _int_free (mstate av, mchunkptr p, int have_lock)
       prevsize = prev_size (p);
       size += prevsize;
       p = chunk_at_offset(p, -((long) prevsize));
+      if (__builtin_expect (chunksize(p) != prevsize, 0))
+        malloc_printerr ("corrupted size vs. prev_size");
       unlink(av, p, bck, fwd);
     }
@@ -4392,6 +4392,8 @@ static void malloc_consolidate(mstate av)
         prevsize = prev_size (p);
         size += prevsize;
         p = chunk_at_offset(p, -((long) prevsize));
+        if (__builtin_expect (chunksize(p) != prevsize, 0))
+          malloc_printerr ("corrupted size vs. prev_size");
         unlink(av, p, bck, fwd);
       }