From patchwork Fri Oct 21 05:52:49 2011
X-Patchwork-Submitter: Andi Kleen
X-Patchwork-Id: 120930
From: Andi Kleen
To: gcc-patches@gcc.gnu.org
Cc: Andi Kleen
Subject: [PATCH 2/3] Free large chunks in ggc
Date: Fri, 21 Oct 2011 07:52:49 +0200
Message-Id: <1319176370-26071-3-git-send-email-andi@firstfloor.org>
In-Reply-To: <1319176370-26071-1-git-send-email-andi@firstfloor.org>
References: <1319176370-26071-1-git-send-email-andi@firstfloor.org>

From: Andi Kleen

This implements the freeing back of large chunks in the ggc madvise
path that Richard Guenther asked for.  This way, on systems with
limited address space, malloc() and other allocators still have a
chance to get back some of the memory ggc freed.  The fragmented pages
are still just given back via madvise, but their address space stays
allocated.

I tried freeing only aligned 2MB areas to optimize for 2MB huge pages,
but the hit rate was quite low, so I switched to unaligned areas of
1MB and larger.  The target size is now a param (ggc-free-unit; see
the usage note after the patch).

Passed bootstrap and testing on x86_64-linux.

gcc/:
2011-10-18  Andi Kleen

	* ggc-page.c (release_pages): First free large contiguous chunks
	in the madvise path.
	* params.def (GGC_FREE_UNIT): Add.
	* doc/invoke.texi (ggc-free-unit): Add.
---
 gcc/doc/invoke.texi |    5 +++++
 gcc/ggc-page.c      |   48 ++++++++++++++++++++++++++++++++++++++++++++++++
 gcc/params.def      |    5 +++++
 3 files changed, 58 insertions(+), 0 deletions(-)

diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
index 4f55dbc..e622552 100644
--- a/gcc/doc/invoke.texi
+++ b/gcc/doc/invoke.texi
@@ -8858,6 +8858,11 @@ very large effectively disables garbage collection.  Setting this
 parameter and @option{ggc-min-expand} to zero causes a full collection
 to occur at every opportunity.
 
+@item ggc-free-unit
+
+Contiguous areas in OS pages to free back to the OS immediately.  The
+default is 256 pages, which is 1MB with 4K pages.
+
 @item max-reload-search-insns
 The maximum number of instruction reload should look backward for equivalent
 register.  Increasing values mean more aggressive optimization, making the
diff --git a/gcc/ggc-page.c b/gcc/ggc-page.c
index ba88e3f..eb0eeef 100644
--- a/gcc/ggc-page.c
+++ b/gcc/ggc-page.c
@@ -972,6 +972,54 @@ release_pages (void)
   page_entry *p, *start_p;
   char *start;
   size_t len;
+  size_t mapped_len;
+  page_entry *next, *prev, *newprev;
+  size_t free_unit = PARAM_VALUE (GGC_FREE_UNIT) * G.pagesize;
+
+  /* First free larger contiguous areas back to the OS.
+     This allows other allocators to grab these areas if needed.
+     This is only done on larger chunks to avoid fragmentation.
+     This does not always work because the free_pages list is only
+     sorted over a single GC cycle.  */
+
+  p = G.free_pages;
+  prev = NULL;
+  while (p)
+    {
+      start = p->page;
+      start_p = p;
+      len = 0;
+      mapped_len = 0;
+      newprev = prev;
+      while (p && p->page == start + len)
+        {
+          len += p->bytes;
+          if (!p->discarded)
+            mapped_len += p->bytes;
+          newprev = p;
+          p = p->next;
+        }
+      if (len >= free_unit)
+        {
+          while (start_p != p)
+            {
+              next = start_p->next;
+              free (start_p);
+              start_p = next;
+            }
+          munmap (start, len);
+          if (prev)
+            prev->next = p;
+          else
+            G.free_pages = p;
+          G.bytes_mapped -= mapped_len;
+          continue;
+        }
+      prev = newprev;
+    }
+
+  /* Now give back the fragmented pages to the OS, but keep the address
+     space to reuse it next time.  */
 
   for (p = G.free_pages; p; )
     {
diff --git a/gcc/params.def b/gcc/params.def
index 5e49c48..edbf0de 100644
--- a/gcc/params.def
+++ b/gcc/params.def
@@ -561,6 +561,11 @@ DEFPARAM(GGC_MIN_HEAPSIZE,
 #undef GGC_MIN_EXPAND_DEFAULT
 #undef GGC_MIN_HEAPSIZE_DEFAULT
 
+DEFPARAM(GGC_FREE_UNIT,
+	 "ggc-free-unit",
+	 "Contiguous areas in OS pages to free back immediately",
+	 256, 0, 0)
+
 DEFPARAM(PARAM_MAX_RELOAD_SEARCH_INSNS,
 	 "max-reload-search-insns",
 	 "The maximum number of instructions to search backward when looking for equivalent reload",
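
A usage note: the threshold is tunable like any other GCC param.  As a
hypothetical invocation (the param name and its default of 256 pages
are exactly as introduced by this patch), raising the free unit to 2MB
with 4K pages would look like:

  gcc --param ggc-free-unit=512 -O2 foo.c

Setting the param very high means len >= free_unit never triggers,
which effectively disables the munmap pass and keeps the pure madvise
behavior.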
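For readers who want the idea in isolation, here is a minimal
standalone sketch of the same coalesce-then-munmap pass over an
address-sorted free list.  Every name in it (free_chunk,
release_large_runs, FREE_UNIT) is invented for illustration and is not
part of the patch; the real code above walks ggc's page_entry list,
additionally tracks discarded pages for the G.bytes_mapped accounting,
and reads the threshold from the ggc-free-unit param:

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

/* Illustrative threshold: 256 pages of 4K, i.e. the param's default.  */
#define FREE_UNIT (256 * 4096)

struct free_chunk
{
  char *base;			/* Start address, page aligned.  */
  size_t bytes;			/* Size in bytes, multiple of page size.  */
  struct free_chunk *next;	/* Next chunk; list is sorted by address.  */
};

/* Merge runs of address-adjacent chunks in LIST and munmap every run
   of at least FREE_UNIT bytes, freeing its list nodes.  Smaller runs
   are kept mapped.  Returns the new list head.  */

static struct free_chunk *
release_large_runs (struct free_chunk *list)
{
  struct free_chunk *p = list, *prev = NULL;

  while (p)
    {
      struct free_chunk *run_start = p;
      char *base = p->base;
      size_t len = 0;

      /* Grow the run while the next chunk begins exactly where the
         run currently ends.  */
      while (p && p->base == base + len)
        {
          len += p->bytes;
          p = p->next;
        }

      if (len >= FREE_UNIT)
        {
          /* Unmap the whole run and unlink its nodes.  PREV stays the
             last kept node before the run.  */
          while (run_start != p)
            {
              struct free_chunk *next = run_start->next;
              free (run_start);
              run_start = next;
            }
          munmap (base, len);
          if (prev)
            prev->next = p;
          else
            list = p;
        }
      else
        /* Run too small to be worth unmapping; keep its nodes and
           remember the last one as the new PREV.  */
        for (; run_start != p; run_start = run_start->next)
          prev = run_start;
    }
  return list;
}

int
main (void)
{
  /* Build a toy free list: one 2MB anonymous mapping described as
     four adjacent 512K chunks, together forming a run above FREE_UNIT.  */
  size_t chunk = 512 * 1024;
  char *base = mmap (NULL, 4 * chunk, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  struct free_chunk *head = NULL, **tail = &head;
  int i;

  if (base == MAP_FAILED)
    return 1;
  for (i = 0; i < 4; i++)
    {
      struct free_chunk *c = malloc (sizeof *c);
      c->base = base + i * chunk;
      c->bytes = chunk;
      c->next = NULL;
      *tail = c;
      tail = &c->next;
    }
  head = release_large_runs (head);
  printf ("free list is %s\n", head ? "non-empty" : "empty");
  return 0;
}

The pass is only as good as the list's sort order; as the comment in
the patch notes, free_pages is only sorted within a single GC cycle,
so some contiguous runs may be missed.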