From patchwork Thu May 24 00:10:37 2018
X-Patchwork-Submitter: Ken Milmore
X-Patchwork-Id: 919458
From: Ken Milmore
Subject: [RFC][PATCH 4/4] malloc_info: add tst-malloc_info-vs-mallinfo
To: libc-alpha@sourceware.org, Carlos O'Donell
Message-ID: <3f0ea068-8053-7d0c-6de0-5d295bf564a8@gmail.com>
Date: Thu, 24 May 2018 01:10:37 +0100

Add a unit test to verify consistency between the output of malloc_info()
and mallinfo().

The test is based on tst-malloc_info.  It creates several threads, then
exercises the allocator with small block allocations and frees to produce
some heap fragmentation.  A few large blocks are also allocated to force
the use of mmap.  At several points the helper threads are paused at a
barrier and malloc_info() and mallinfo() are both called from the main
thread.  The malloc_info output is directed to a temporary file and the
"XML" is parsed back in using only standard C library functions.  The
various byte totals and chunk counts reported by mallinfo() are then
compared against the corresponding totals obtained from the malloc_info
output.  The test fails if any of these totals differ.

	* malloc/Makefile: Add tst-malloc_info-vs-mallinfo.
	* malloc/tst-malloc_info-vs-mallinfo.c: New file.
---
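(Note for reviewers; not part of the patch.)  The test cross-checks the
mallinfo fields against the <total .../> and <system type="current" .../>
elements of the malloc_info report.  For a quick way to eyeball that
correspondence outside the test harness, a minimal standalone sketch using
only the public <malloc.h> interfaces (the block sizes and counts below are
arbitrary) might look like this:

/* Reviewer sketch only: dump the malloc_info report next to the
   mallinfo fields which tst-malloc_info-vs-mallinfo compares with it.  */
#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>

int
main (void)
{
  /* Make some allocations and frees so the counters are non-trivial.  */
  void *blocks[64];
  for (int i = 0; i < 64; ++i)
    blocks[i] = malloc (100 + i);
  for (int i = 0; i < 64; i += 2)
    free (blocks[i]);

  /* Write the "XML" report to stdout.  */
  malloc_info (0, stdout);

  /* Print the fields that the new test compares against the report.  */
  struct mallinfo mi = mallinfo ();
  printf ("arena=%d ordblks=%d smblks=%d hblks=%d hblkhd=%d\n"
          "fsmblks=%d uordblks=%d fordblks=%d keepcost=%d\n",
          mi.arena, mi.ordblks, mi.smblks, mi.hblks, mi.hblkhd,
          mi.fsmblks, mi.uordblks, mi.fordblks, mi.keepcost);

  /* The remaining blocks are deliberately leaked; this is only a demo.  */
  return 0;
}

Comparing the printed fields with the report by eye shows the same
relationships that the test checks automatically with TEST_COMPARE.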
 malloc/Makefile                      |   2 +
 malloc/tst-malloc_info-vs-mallinfo.c | 258 +++++++++++++++++++++++++++++++++++
 2 files changed, 260 insertions(+)
 create mode 100644 malloc/tst-malloc_info-vs-mallinfo.c

diff --git a/malloc/Makefile b/malloc/Makefile
index 7d54bad866..abcb7f847d 100644
--- a/malloc/Makefile
+++ b/malloc/Makefile
@@ -36,6 +36,7 @@ tests := mallocbug tst-malloc tst-valloc tst-calloc tst-obstack \
 	 tst-alloc_buffer \
 	 tst-malloc-tcache-leak \
 	 tst-malloc_info \
+	 tst-malloc_info-vs-mallinfo \
 	 tst-malloc-too-large \
 	 tst-malloc-stats-cancellation \
@@ -251,3 +252,4 @@ $(objpfx)tst-dynarray-fail-mem.out: $(objpfx)tst-dynarray-fail.out
 $(objpfx)tst-malloc-tcache-leak: $(shared-thread-library)
 $(objpfx)tst-malloc_info: $(shared-thread-library)
+$(objpfx)tst-malloc_info-vs-mallinfo: $(shared-thread-library)
diff --git a/malloc/tst-malloc_info-vs-mallinfo.c b/malloc/tst-malloc_info-vs-mallinfo.c
new file mode 100644
index 0000000000..9215cf2a97
--- /dev/null
+++ b/malloc/tst-malloc_info-vs-mallinfo.c
@@ -0,0 +1,258 @@
+/* Test for consistency of malloc_info with mallinfo.
+   Copyright (C) 2017-2018 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+/* This test checks for agreement between the amounts of free and
+   allocated space reported by the malloc_info and mallinfo functions.
+   It has been adapted from tst-malloc_info.c.  */
+
+#include <array_length.h>
+#include <malloc.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <support/check.h>
+#include <support/support.h>
+#include <support/test-driver.h>
+#include <support/xthread.h>
+
+/* This barrier is used to have the main thread wait until the helper
+   threads have performed their allocations.  */
+static pthread_barrier_t barrier;
+
+enum
+  {
+    /* Number of threads performing allocations.  */
+    thread_count = 4,
+
+    /* Amount of small block memory allocation per thread.  We keep this
+       modest so there can be no possibility of the mallinfo totals
+       wrapping around.  */
+    per_thread_allocations = 16 * 1024 * 1024,
+
+    /* Size of the largest small block to allocate.  We will allocate a
+       range of sizes up to this maximum.  */
+    max_alloc_size = 512,
+
+    /* Size of large block to allocate.  We allocate only one of these
+       per thread.  It should be large enough to ensure use of mmap.  */
+    large_block_size = 1024 * 1024,
+  };
+
+static void *
+allocation_thread_function (void *closure)
+{
+  struct list
+  {
+    struct list *next;
+  };
+
+  struct list *head = NULL;
+  size_t allocated = 0;
+  unsigned int seed = 1;
+
+  /* The main thread runs each of its checks while the helper threads
+     are waiting between a pair of barrier waits.  */
+  xpthread_barrier_wait (&barrier);
+  /* Before allocation.  */
+  xpthread_barrier_wait (&barrier);
+
+  /* Allocate a single large block and then a range of small block sizes.  */
+  void *large_block = xmalloc (large_block_size);
+
+  while (allocated < per_thread_allocations)
+    {
+      size_t size = rand_r (&seed) % max_alloc_size;
+      /* Make sure the block is large enough to hold the list link.  */
+      if (size < sizeof (struct list))
+	size = sizeof (struct list);
+      struct list *new_head = xmalloc (size);
+      allocated += size;
+      new_head->next = head;
+      head = new_head;
+    }
+
+  xpthread_barrier_wait (&barrier);
+  /* After allocation.  */
+  xpthread_barrier_wait (&barrier);
+
+  /* Force some memory fragmentation by freeing every other block.  */
+  struct list *pos = head;
+  while (pos != NULL && pos->next != NULL)
+    {
+      struct list *next_pos = pos->next->next;
+      free (pos->next);
+      pos->next = next_pos;
+      pos = next_pos;
+    }
+
+  xpthread_barrier_wait (&barrier);
+  /* After partial deallocation.  */
+  xpthread_barrier_wait (&barrier);
+
+  while (head != NULL)
+    {
+      struct list *next_head = head->next;
+      free (head);
+      head = next_head;
+    }
+
+  free (large_block);
+
+  return NULL;
+}
+
+static void
+check_malloc_info (const char *context)
+{
+  printf ("\ninfo: %s:\n", context);
+
+  FILE *f = tmpfile ();
+  if (f == NULL)
+    FAIL_EXIT1 ("tmpfile failed: %m");
+
+  /* Perform a dummy seek to allocate the stream buffer now, so that no
+     allocation takes place between the two mallinfo calls below.  */
+  if (fseek (f, 0L, SEEK_SET) != 0)
+    FAIL_EXIT1 ("fseek failed: %m");
+
+  /* We call mallinfo twice, i.e. before and after malloc_info, so we can
+     check that nothing has changed in the meantime which could lead to
+     false comparisons.  */
+  struct mallinfo mib4 = mallinfo ();
+  malloc_info (0, f);
+  struct mallinfo mi = mallinfo ();
+
+  TEST_VERIFY_EXIT (memcmp (&mi, &mib4, sizeof (mi)) == 0);
+
+  if (fseek (f, 0L, SEEK_SET) != 0)
+    FAIL_EXIT1 ("fseek failed: %m");
+
+  size_t total_fast_count = 0;
+  size_t total_fast_size = 0;
+  size_t total_rest_count = 0;
+  size_t total_rest_size = 0;
+  size_t total_top_count = 0;
+  size_t total_top_size = 0;
+  size_t total_mmap_count = 0;
+  size_t total_mmap_size = 0;
+  size_t system_current_size = 0;
+  size_t heap0_top_size = 0;
+  int heap_nr = -1;
+
+  for (;;)
+    {
+      char tag[100];
+      if (fscanf (f, " <%99[^>]> ", tag) != 1)
+	FAIL_EXIT1 ("fscanf failed (or EOF)");
+
+      if (test_verbose > 0)
+	printf ("<%s>\n", tag);
+
+      if (strcmp (tag, "/malloc") == 0)
+	{
+	  TEST_VERIFY_EXIT (heap_nr == -1);
+	  break;
+	}
+      else if (strcmp (tag, "/heap") == 0)
+	{
+	  TEST_VERIFY_EXIT (heap_nr >= 0);
+	  heap_nr = -1;
+	}
+      else if (heap_nr == 0)
+	{
+	  sscanf (tag, "total type=\"top\" count=\"1\" size=\"%zu\"",
+		  &heap0_top_size);
+	}
+      else if (heap_nr == -1)
+	{
+	  sscanf (tag, "heap nr=\"%d\"", &heap_nr);
+	  sscanf (tag, "total type=\"fast\" count=\"%zu\" size=\"%zu\"",
+		  &total_fast_count, &total_fast_size);
+	  sscanf (tag, "total type=\"rest\" count=\"%zu\" size=\"%zu\"",
+		  &total_rest_count, &total_rest_size);
+	  sscanf (tag, "total type=\"top\" count=\"%zu\" size=\"%zu\"",
+		  &total_top_count, &total_top_size);
+	  sscanf (tag, "total type=\"mmap\" count=\"%zu\" size=\"%zu\"",
+		  &total_mmap_count, &total_mmap_size);
+	  sscanf (tag, "system type=\"current\" size=\"%zu\"",
+		  &system_current_size);
+	}
+    }
+
+  fclose (f);
+
+  size_t total_nonfast_count = total_rest_count + total_top_count;
+  size_t total_free_size = total_fast_size + total_rest_size + total_top_size;
+  size_t total_alloc_size = system_current_size - total_free_size;
+
+  printf ("arena=%d/%zu\n", mi.arena, system_current_size);
+  TEST_COMPARE (mi.arena, system_current_size);
+
+  printf ("ordblks=%d/%zu\n", mi.ordblks, total_nonfast_count);
+  TEST_COMPARE (mi.ordblks, total_nonfast_count);
+
+  printf ("smblks=%d/%zu\n", mi.smblks, total_fast_count);
+  TEST_COMPARE (mi.smblks, total_fast_count);
+
+  printf ("hblks=%d/%zu\n", mi.hblks, total_mmap_count);
+  TEST_COMPARE (mi.hblks, total_mmap_count);
+
+  printf ("hblkhd=%d/%zu\n", mi.hblkhd, total_mmap_size);
+  TEST_COMPARE (mi.hblkhd, total_mmap_size);
+
+  printf ("fsmblks=%d/%zu\n", mi.fsmblks, total_fast_size);
+  TEST_COMPARE (mi.fsmblks, total_fast_size);
+
+  printf ("uordblks=%d/%zu\n", mi.uordblks, total_alloc_size);
+  TEST_COMPARE (mi.uordblks, total_alloc_size);
+
+  printf ("fordblks=%d/%zu\n", mi.fordblks, total_free_size);
+  TEST_COMPARE (mi.fordblks, total_free_size);
+
+  printf ("keepcost=%d/%zu\n", mi.keepcost, heap0_top_size);
+  TEST_COMPARE (mi.keepcost, heap0_top_size);
+}
+
+static int
+do_test (void)
+{
+  xpthread_barrier_init (&barrier, NULL, thread_count + 1);
+
+  pthread_t threads[thread_count];
+  for (size_t i = 0; i < array_length (threads); ++i)
+    threads[i] = xpthread_create (NULL, allocation_thread_function, NULL);
+
+  xpthread_barrier_wait (&barrier);
+  check_malloc_info ("Before allocation");
+  xpthread_barrier_wait (&barrier);
+
+  xpthread_barrier_wait (&barrier);
+  check_malloc_info ("After allocation");
+  xpthread_barrier_wait (&barrier);
+
+  xpthread_barrier_wait (&barrier);
+  check_malloc_info ("After partial deallocation");
+  xpthread_barrier_wait (&barrier);
+
+  for (size_t i = 0; i < array_length (threads); ++i)
+    xpthread_join (threads[i]);
+
+  check_malloc_info ("After full deallocation");
+
+  malloc_trim (0);
+
+  check_malloc_info ("After malloc_trim");
+
+  return 0;
+}
+
+#include <support/test-driver.c>