From patchwork Fri May 20 09:00:24 2016
X-Patchwork-Submitter: Florian Weimer
X-Patchwork-Id: 624424
Subject: Re: [PATCH] malloc: Correct malloc alignment on 32-bit architectures [BZ #6527]
To: DJ Delorie
References: <20160513183544.9694240223A68@oldenburg.str.redhat.com>
Cc: libc-alpha@sourceware.org
From: Florian Weimer
Message-ID: <57cad7c2-2da1-342d-75cf-b0a9075ee8ca@redhat.com>
Date: Fri, 20 May 2016 11:00:24 +0200

On 05/13/2016 11:31 PM, DJ Delorie wrote:
>
> fweimer@redhat.com (Florian Weimer) writes:
>> -#define MALLOC_STATE_VERSION (0 * 0x100l + 4l) /* major*0x100 + minor */
>> +#define MALLOC_STATE_VERSION (0 * 0x100l + 5l) /* major*0x100 + minor */
>
> Do we record a history of what changed for each version bump, in case we
> need to go back and somehow support an older one?

I see no reason to start this practice now, when the dumping
functionality is rapidly approaching removal.  (Undumping will still be
supported, of course.)  If necessary, we can consult the Git history.

>> -#ifndef MALLOC_ALIGNMENT
>> -# if !SHLIB_COMPAT (libc, GLIBC_2_0, GLIBC_2_16)
>> -/* This is the correct definition when there is no past ABI to constrain it.
>> -
>> -   Among configurations with a past ABI constraint, it differs from
>> -   2*SIZE_SZ only on powerpc32.  For the time being, changing this is
>> -   causing more compatibility problems due to malloc_get_state and
>> -   malloc_set_state than will returning blocks not adequately aligned for
>> -   long double objects under -mlong-double-128. */
>> -
>> -# define MALLOC_ALIGNMENT (2 *SIZE_SZ < __alignof__ (long double) \
>> -                           ? __alignof__ (long double) : 2 *SIZE_SZ)
>> -# else
>> -# define MALLOC_ALIGNMENT (2 *SIZE_SZ)
>> -# endif
>> -#endif
>> +#define MALLOC_ALIGNMENT (2 * SIZE_SZ < __alignof__ (long double) \
>> +                          ? __alignof__ (long double) : 2 * SIZE_SZ)
>
> This drops the outer '#ifndef' which I think should remain.  The
> ChangeLog mentions an override for HPPA.

No such override exists anymore.

> If there is some platform-specific overrides for ppc32, we should take
> those out instead.

The patch is doing that.

I've restored the override possibility for now.  New version of the
patch is attached.

Thanks,
Florian


malloc: Correct malloc alignment on 32-bit architectures [BZ #6527]

After the heap rewriting added in commit
4cf6c72fd2a482e7499c29162349810029632c3f (malloc: Rewrite dumped heap
for compatibility in __malloc_set_state), we can change malloc alignment
for new allocations because the alignment of old allocations no longer
matters.

We need to increase the malloc state version number, so that binaries
containing dumped heaps of the new layout will not try to run on
previous versions of glibc, resulting in obscure crashes.

This commit addresses a failure of tst-malloc-thread-fail on the
affected architectures (32-bit ppc and mips) because the test checks
pointer alignment.

2016-05-20  Florian Weimer

	[BZ #6527]
	* malloc/malloc.c (MALLOC_ALIGNMENT): Use correct alignment
	unconditionally.
	* malloc/hooks.c (MALLOC_STATE_VERSION): Increase state version.

diff --git a/malloc/hooks.c b/malloc/hooks.c
index 45241f2..caa1e70 100644
--- a/malloc/hooks.c
+++ b/malloc/hooks.c
@@ -465,7 +465,7 @@ memalign_check (size_t alignment, size_t bytes, const void *caller)
    then the hooks are reset to 0.  */
 
 #define MALLOC_STATE_MAGIC   0x444c4541l
-#define MALLOC_STATE_VERSION (0 * 0x100l + 4l) /* major*0x100 + minor */
+#define MALLOC_STATE_VERSION (0 * 0x100l + 5l) /* major*0x100 + minor */
 
 struct malloc_save_state
 {
diff --git a/malloc/malloc.c b/malloc/malloc.c
index 44524ff..ead9a21 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -352,20 +352,8 @@ __malloc_assert (const char *assertion, const char *file, unsigned int line,
 
 #ifndef MALLOC_ALIGNMENT
-# if !SHLIB_COMPAT (libc, GLIBC_2_0, GLIBC_2_16)
-/* This is the correct definition when there is no past ABI to constrain it.
-
-   Among configurations with a past ABI constraint, it differs from
-   2*SIZE_SZ only on powerpc32.  For the time being, changing this is
-   causing more compatibility problems due to malloc_get_state and
-   malloc_set_state than will returning blocks not adequately aligned for
-   long double objects under -mlong-double-128. */
-
-# define MALLOC_ALIGNMENT (2 *SIZE_SZ < __alignof__ (long double) \
-                           ? __alignof__ (long double) : 2 *SIZE_SZ)
-# else
-# define MALLOC_ALIGNMENT (2 *SIZE_SZ)
-# endif
+# define MALLOC_ALIGNMENT (2 * SIZE_SZ < __alignof__ (long double) \
+                           ? __alignof__ (long double) : 2 * SIZE_SZ)
 #endif
 
 /* The corresponding bit mask value */