From patchwork Fri Oct 21 05:52:50 2011
X-Patchwork-Submitter: Andi Kleen
X-Patchwork-Id: 120931
From: Andi Kleen
To: gcc-patches@gcc.gnu.org
Cc: Andi Kleen
Subject: [PATCH 3/3] Add a fragmentation fallback in ggc-page
Date: Fri, 21 Oct 2011 07:52:50 +0200
Message-Id: <1319176370-26071-4-git-send-email-andi@firstfloor.org>
In-Reply-To: <1319176370-26071-1-git-send-email-andi@firstfloor.org>
References: <1319176370-26071-1-git-send-email-andi@firstfloor.org>

From: Andi Kleen

There were some concerns that the earlier munmap patch could lead to
address space being freed that cannot be allocated again by ggc due to
fragmentation.

This patch adds a fragmentation fallback to solve this: when a
GGC_QUIRE_SIZE-sized allocation fails, try again with a single-page
allocation.

Passes bootstrap and testing on x86_64-linux with the fallback forced
artificially.
gcc/:

2011-10-20  Andi Kleen

	* ggc-page.c (alloc_anon): Add check argument.
	(alloc_page): Add fallback to 1 page allocation.
	Adjust alloc_anon calls to new argument.
---
 gcc/ggc-page.c |   23 +++++++++++++++--------
 1 files changed, 15 insertions(+), 8 deletions(-)

diff --git a/gcc/ggc-page.c b/gcc/ggc-page.c
index eb0eeef..91cd450 100644
--- a/gcc/ggc-page.c
+++ b/gcc/ggc-page.c
@@ -482,7 +482,7 @@ static int ggc_allocated_p (const void *);
 static page_entry *lookup_page_table_entry (const void *);
 static void set_page_table_entry (void *, page_entry *);
 #ifdef USING_MMAP
-static char *alloc_anon (char *, size_t);
+static char *alloc_anon (char *, size_t, bool check);
 #endif
 #ifdef USING_MALLOC_PAGE_GROUPS
 static size_t page_group_index (char *, char *);
@@ -661,7 +661,7 @@ debug_print_page_list (int order)
    compile error unless exactly one of the HAVE_* is defined.  */

 static inline char *
-alloc_anon (char *pref ATTRIBUTE_UNUSED, size_t size)
+alloc_anon (char *pref ATTRIBUTE_UNUSED, size_t size, bool check)
 {
 #ifdef HAVE_MMAP_ANON
   char *page = (char *) mmap (pref, size, PROT_READ | PROT_WRITE,
@@ -674,6 +674,8 @@ alloc_anon (char *pref ATTRIBUTE_UNUSED, size_t size)

   if (page == (char *) MAP_FAILED)
     {
+      if (!check)
+	return NULL;
       perror ("virtual memory exhausted");
       exit (FATAL_EXIT_CODE);
     }
@@ -776,13 +778,18 @@ alloc_page (unsigned order)
	 extras on the freelist.  (Can only do this optimization with
	 mmap for backing store.)  */
       struct page_entry *e, *f = G.free_pages;
-      int i;
+      int i, entries = GGC_QUIRE_SIZE;

-      page = alloc_anon (NULL, G.pagesize * GGC_QUIRE_SIZE);
+      page = alloc_anon (NULL, G.pagesize * GGC_QUIRE_SIZE, false);
+      if (page == NULL)
+	{
+	  page = alloc_anon (NULL, G.pagesize, true);
+	  entries = 1;
+	}

       /* This loop counts down so that the chain will be in
	  ascending memory order.  */
-      for (i = GGC_QUIRE_SIZE - 1; i >= 1; i--)
+      for (i = entries - 1; i >= 1; i--)
	{
	  e = XCNEWVAR (struct page_entry, page_entry_size);
	  e->order = order;
@@ -795,7 +802,7 @@ alloc_page (unsigned order)
       G.free_pages = f;
     }
   else
-    page = alloc_anon (NULL, entry_size);
+    page = alloc_anon (NULL, entry_size, true);
 #endif
 #ifdef USING_MALLOC_PAGE_GROUPS
   else
@@ -1648,14 +1655,14 @@ init_ggc (void)
      believe, is an unaligned page allocation, which would cause us to
      hork badly if we tried to use it.  */
   {
-    char *p = alloc_anon (NULL, G.pagesize);
+    char *p = alloc_anon (NULL, G.pagesize, true);
     struct page_entry *e;

     if ((size_t)p & (G.pagesize - 1))
       {
	/* How losing.  Discard this one and try another.  If we still
	   can't get something useful, give up.  */
-	p = alloc_anon (NULL, G.pagesize);
+	p = alloc_anon (NULL, G.pagesize, true);
	gcc_assert (!((size_t)p & (G.pagesize - 1)));
       }