From patchwork Fri Jul 18 09:30:19 2014
X-Patchwork-Submitter: Luis Henriques
X-Patchwork-Id: 371445
From: Luis Henriques
To: Michal Nazarewicz
Subject: [3.11.y.z extended stable] Patch "mm: page_alloc: fix CMA area initialisation when pageblock > MAX_ORDER" has been added to staging queue
Date: Fri, 18 Jul 2014 10:30:19 +0100
Message-Id: <1405675819-14893-1-git-send-email-luis.henriques@canonical.com>
X-Extended-Stable: 3.11
Cc: Catalin Marinas, Mark Salter, kernel-team@lists.ubuntu.com, Mel Gorman,
 David Rientjes, Andrew Morton, Linus Torvalds, Marek Szyprowski
List-Id: Kernel team discussions

This is a note to let you know that I have just added a patch titled

    mm: page_alloc: fix CMA area initialisation when pageblock > MAX_ORDER

to the linux-3.11.y-queue branch of the 3.11.y.z extended stable tree
which can be found at:

 http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-3.11.y-queue

If you, or anyone else, feels it should not be added to this tree, please
reply to this email.

For more information about the 3.11.y.z tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable

Thanks.
-Luis

------

From 27910d2c2732f94eb416f510227d29d47decd93e Mon Sep 17 00:00:00 2001
From: Michal Nazarewicz
Date: Wed, 2 Jul 2014 15:22:35 -0700
Subject: mm: page_alloc: fix CMA area initialisation when pageblock > MAX_ORDER
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

commit dc78327c0ea7da5186d8cbc1647bd6088c5c9fa5 upstream.

With a kernel configured with ARM64_64K_PAGES && !TRANSPARENT_HUGEPAGE,
the following is triggered at early boot:

  SMP: Total of 8 processors activated.
  devtmpfs: initialized
  Unable to handle kernel NULL pointer dereference at virtual address 00000008
  pgd = fffffe0000050000
  [00000008] *pgd=00000043fba00003, *pmd=00000043fba00003, *pte=00e0000078010407
  Internal error: Oops: 96000006 [#1] SMP
  Modules linked in:
  CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.15.0-rc864k+ #44
  task: fffffe03bc040000 ti: fffffe03bc080000 task.ti: fffffe03bc080000
  PC is at __list_add+0x10/0xd4
  LR is at free_one_page+0x270/0x638
  ...
  Call trace:
    __list_add+0x10/0xd4
    free_one_page+0x26c/0x638
    __free_pages_ok.part.52+0x84/0xbc
    __free_pages+0x74/0xbc
    init_cma_reserved_pageblock+0xe8/0x104
    cma_init_reserved_areas+0x190/0x1e4
    do_one_initcall+0xc4/0x154
    kernel_init_freeable+0x204/0x2a8
    kernel_init+0xc/0xd4

This happens because init_cma_reserved_pageblock() calls
__free_one_page() with pageblock_order as the page order, but
pageblock_order is bigger than MAX_ORDER.  This in turn causes accesses
past zone->free_list[].

Fix the problem by changing init_cma_reserved_pageblock() such that it
splits the pageblock into individual MAX_ORDER pages if the pageblock is
bigger than a MAX_ORDER page.

In cases where !CONFIG_HUGETLB_PAGE_SIZE_VARIABLE, which is all
architectures except for ia64, powerpc and tile at the moment, the
“pageblock_order > MAX_ORDER” condition will be optimised out since both
sides of the operator are constants.  In cases where pageblock size is
variable, the performance degradation should not be significant anyway
since init_cma_reserved_pageblock() is called only at boot time at most
MAX_CMA_AREAS times, which by default is eight.

Signed-off-by: Michal Nazarewicz
Reported-by: Mark Salter
Tested-by: Mark Salter
Tested-by: Christopher Covington
Cc: Mel Gorman
Cc: David Rientjes
Cc: Marek Szyprowski
Cc: Catalin Marinas
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Luis Henriques
---
 mm/page_alloc.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6a2b267a521f..1b5e9fb49ed4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -780,9 +780,21 @@ void __init init_cma_reserved_pageblock(struct page *page)
 		set_page_count(p, 0);
 	} while (++p, --i);
 
-	set_page_refcounted(page);
 	set_pageblock_migratetype(page, MIGRATE_CMA);
-	__free_pages(page, pageblock_order);
+
+	if (pageblock_order >= MAX_ORDER) {
+		i = pageblock_nr_pages;
+		p = page;
+		do {
+			set_page_refcounted(p);
+			__free_pages(p, MAX_ORDER - 1);
+			p += MAX_ORDER_NR_PAGES;
+		} while (i -= MAX_ORDER_NR_PAGES);
+	} else {
+		set_page_refcounted(page);
+		__free_pages(page, pageblock_order);
+	}
+
 	adjust_managed_page_count(page, pageblock_nr_pages);
 }
 #endif
--
1.9.1
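
To see why the new loop frees the pageblock in MAX_ORDER - 1 chunks, here
is a minimal standalone sketch of the same arithmetic.  The values are
illustrative assumptions, not taken from the patch itself: MAX_ORDER = 11
is the kernel's default, and pageblock_order = 13 matches arm64 with 64K
pages and HugeTLB; the printf() calls merely stand in for __free_pages().

  #include <stdio.h>

  /* Assumed illustrative values: zone->free_list[] is sized for orders
   * 0..MAX_ORDER-1, so freeing at order 13 would index past the array,
   * which is the oops quoted above. */
  #define MAX_ORDER          11
  #define MAX_ORDER_NR_PAGES (1UL << (MAX_ORDER - 1))
  #define PAGEBLOCK_ORDER    13
  #define PAGEBLOCK_NR_PAGES (1UL << PAGEBLOCK_ORDER)

  int main(void)
  {
  	unsigned long pfn = 0;	/* stand-in for the struct page pointer */
  	unsigned long i = PAGEBLOCK_NR_PAGES;

  	if (PAGEBLOCK_ORDER >= MAX_ORDER) {
  		/* Same do/while as the patch: the 2^13-page pageblock is
  		 * freed as eight chunks of 2^10 pages at order MAX_ORDER-1. */
  		do {
  			printf("__free_pages(pfn=%5lu, order=%d)\n",
  			       pfn, MAX_ORDER - 1);
  			pfn += MAX_ORDER_NR_PAGES;
  		} while (i -= MAX_ORDER_NR_PAGES);
  	} else {
  		printf("__free_pages(pfn=%5lu, order=%d)\n",
  		       pfn, PAGEBLOCK_ORDER);
  	}
  	return 0;
  }

Compiled with a stock C compiler, this prints eight order-10 frees with
pfn stepping by 1024, which is exactly the walk the patched
init_cma_reserved_pageblock() performs over the 8192-page pageblock.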