From patchwork Thu Nov 22 04:47:51 2012
X-Patchwork-Submitter: Herton Ronaldo Krzesinski
X-Patchwork-Id: 201028
From: Herton Ronaldo Krzesinski
To: Dave Young
Subject: [ 3.5.yuz extended stable ] Patch "Revert "x86/mm: Fix the size calculation of mapping tables"" has been added to staging queue
Date: Thu, 22 Nov 2012 02:47:51 -0200
Message-Id: <1353559671-1510-1-git-send-email-herton.krzesinski@canonical.com>
X-Mailer: git-send-email 1.7.9.5
Cc: Linus Torvalds, ianfang.cn@gmail.com, kernel-team@lists.ubuntu.com,
 Vivek Goyal, Tejun Heo, Cong Wang, Andrew Morton, Flavio Leitner,
 Yinghai Lu, Ingo Molnar, Dan Carpenter

This is a note to let you know that I have just added a patch titled

    Revert "x86/mm: Fix the size calculation of mapping tables"

to the linux-3.5.y-queue branch of the 3.5.yuz extended stable tree,
which can be found at:

 http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-3.5.y-queue

If you, or anyone else, feel it should not be added to this tree,
please reply to this email.

For more information about the 3.5.yuz tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable

Thanks.
-Herton

------

From be07a4610cb5bf4619cbf99724924285da53b8ed Mon Sep 17 00:00:00 2001
From: Dave Young
Date: Thu, 18 Oct 2012 14:33:23 +0800
Subject: [PATCH] Revert "x86/mm: Fix the size calculation of mapping tables"

commit 7b16bbf97375d9fb7fc107b3f80afeb94a204e44 upstream.

Commit:

   722bc6b16771 x86/mm: Fix the size calculation of mapping tables

tried to address the issue that the first 2/4M should use 4k pages
if PSE is enabled, but the extra counts should only be valid for
x86_32. This commit caused a kdump regression: the kdump kernel
hangs.

Work is in progress to fundamentally fix the various page table
initialization issues that we have, via the design suggested by
H. Peter Anvin, but it is not yet ready to be merged.

So, to get a working kdump, revert to the last known working version,
i.e. revert both this commit and its followup fix (which was
incomplete):

   bd2753b2dda7 x86/mm: Only add extra pages count for the first memory
                range during pre-allocation

Tested kdump on physical and virtual machines.
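To make the arithmetic behind the reverted change concrete: when PSE is
enabled, the first 2/4M range was counted as needing individual 4k PTEs
when sizing the early mapping tables. A minimal user-space sketch of that
rounding follows; PAGE_SHIFT and PMD_SIZE are hard-coded here to common
x86 values purely for illustration, where the kernel takes them from its
own headers:

#include <stdio.h>

/* Illustrative constants matching the usual x86 configuration. */
#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)   /* 4 KiB */
#define PMD_SIZE   (1UL << 21)           /* 2 MiB (4 MiB without PAE) */

int main(void)
{
	/* The reverted logic counted the whole first 0..PMD_SIZE range
	 * as needing individual 4k PTEs when PSE is enabled. */
	unsigned long extra = PMD_SIZE;
	unsigned long ptes  = (extra + PAGE_SIZE - 1) >> PAGE_SHIFT;

	/* 2 MiB / 4 KiB = 512 PTEs; at 8 bytes per 64-bit entry that
	 * is exactly one extra page-table page. */
	printf("%lu extra bytes -> %lu PTEs -> %lu table page(s)\n",
	       extra, ptes, (ptes * 8 + PAGE_SIZE - 1) / PAGE_SIZE);
	return 0;
}

Since the precise interaction that hung the kdump kernel was still being
root-caused (per the commit message, the fundamental rework was not yet
ready), the safe minimal fix was to return to the last known-good sizing,
i.e. this revert.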
Signed-off-by: Dave Young
Acked-by: Yinghai Lu
Acked-by: Cong Wang
Acked-by: Flavio Leitner
Tested-by: Flavio Leitner
Cc: Dan Carpenter
Cc: Cong Wang
Cc: Flavio Leitner
Cc: Tejun Heo
Cc: ianfang.cn@gmail.com
Cc: Vivek Goyal
Cc: Linus Torvalds
Cc: Andrew Morton
Signed-off-by: Ingo Molnar
Signed-off-by: Herton Ronaldo Krzesinski
---
 arch/x86/mm/init.c | 22 +++++++++-------------
 1 file changed, 9 insertions(+), 13 deletions(-)

--
1.7.9.5

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index bc4e9d8..37b2e6a 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -29,14 +29,8 @@ int direct_gbpages
 #endif
 ;

-struct map_range {
-	unsigned long start;
-	unsigned long end;
-	unsigned page_size_mask;
-};
-
-static void __init find_early_table_space(struct map_range *mr, unsigned long end,
-					  int use_pse, int use_gbpages)
+static void __init find_early_table_space(unsigned long end, int use_pse,
+					  int use_gbpages)
 {
 	unsigned long puds, pmds, ptes, tables, start = 0, good_end = end;
 	phys_addr_t base;
@@ -61,10 +55,6 @@ static void __init find_early_table_space(struct map_range *mr, unsigned long en
 #ifdef CONFIG_X86_32
 		extra += PMD_SIZE;
 #endif
-		/* The first 2/4M doesn't use large pages. */
-		if (mr->start < PMD_SIZE)
-			extra += mr->end - mr->start;
-
 		ptes = (extra + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	} else
 		ptes = (end + PAGE_SIZE - 1) >> PAGE_SHIFT;
@@ -95,6 +85,12 @@ void __init native_pagetable_reserve(u64 start, u64 end)
 	memblock_reserve(start, end - start);
 }

+struct map_range {
+	unsigned long start;
+	unsigned long end;
+	unsigned page_size_mask;
+};
+
 #ifdef CONFIG_X86_32
 #define NR_RANGE_MR 3
 #else /* CONFIG_X86_64 */
@@ -267,7 +263,7 @@ unsigned long __init_refok init_memory_mapping(unsigned long start,
 	 * nodes are discovered.
 	 */
 	if (!after_bootmem)
-		find_early_table_space(&mr[0], end, use_pse, use_gbpages);
+		find_early_table_space(end, use_pse, use_gbpages);

 	for (i = 0; i < nr_range; i++)
 		ret = kernel_physical_mapping_init(mr[i].start, mr[i].end,
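For readers following the diff: after the revert, find_early_table_space()
sizes the tables from `end` alone, so struct map_range moves back below it,
next to its remaining users. A simplified user-space model of the restored
estimate follows. It assumes x86_64 values (4k pages, 2M PMDs, 1G PUDs,
8-byte entries, a 64-bit host) and omits the CONFIG_X86_32 extra PMD and
the memblock reservation that the real function performs:

#include <stdio.h>

/* Assumed x86_64 constants, for illustration only. */
#define PAGE_SHIFT 12
#define PMD_SHIFT  21
#define PUD_SHIFT  30
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PMD_SIZE   (1UL << PMD_SHIFT)
#define PUD_SIZE   (1UL << PUD_SHIFT)
#define ENTRY_SIZE 8UL   /* sizeof(pud_t/pmd_t/pte_t) on 64-bit */

static unsigned long roundup_page(unsigned long bytes)
{
	return (bytes + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
}

/* Model of the restored estimate: when a level can use large pages,
 * only the unaligned tail of [0, end) needs entries one level down. */
static unsigned long table_bytes(unsigned long end, int use_pse,
				 int use_gbpages)
{
	unsigned long puds = (end + PUD_SIZE - 1) >> PUD_SHIFT;
	unsigned long pmds, ptes, tables;

	tables = roundup_page(puds * ENTRY_SIZE);

	if (use_gbpages)
		pmds = ((end % PUD_SIZE) + PMD_SIZE - 1) >> PMD_SHIFT;
	else
		pmds = (end + PMD_SIZE - 1) >> PMD_SHIFT;
	tables += roundup_page(pmds * ENTRY_SIZE);

	if (use_pse)
		ptes = ((end % PMD_SIZE) + PAGE_SIZE - 1) >> PAGE_SHIFT;
	else
		ptes = (end + PAGE_SIZE - 1) >> PAGE_SHIFT;
	tables += roundup_page(ptes * ENTRY_SIZE);

	return tables;
}

int main(void)
{
	/* e.g. mapping the first 4 GiB with 2M pages (PSE, no gbpages)
	 * needs 1 PUD page + 4 PMD pages = 20 KiB of tables. */
	printf("4 GiB, pse=1 gb=0: %lu KiB of tables\n",
	       table_bytes(4UL << 30, 1, 0) >> 10);
	return 0;
}

Note the design choice the revert restores: the estimate depends only on
`end` and the page-size capabilities, not on the individual map_range
entries, which is what lets the declaration of struct map_range live
further down the file again.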