From patchwork Wed Jan 23 04:44:10 2013
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Herton Ronaldo Krzesinski
X-Patchwork-Id: 214775
From: Herton Ronaldo Krzesinski
To: Laura Abbott
Subject: [ 3.5.y.z extended stable ] Patch "mm: use aligned zone start for
 pfn_to_bitidx calculation" has been added to staging queue
Date: Wed, 23 Jan 2013 02:44:10 -0200
Message-Id: <1358916250-24128-1-git-send-email-herton.krzesinski@canonical.com>
X-Mailer: git-send-email 1.7.9.5
X-Extended-Stable: 3.5
Cc: Andrew Morton, Linus Torvalds, Mel Gorman, kernel-team@lists.ubuntu.com

This is a note to let you know that I have just added a patch titled

    mm: use aligned zone start for pfn_to_bitidx calculation

to the linux-3.5.y-queue branch of the 3.5.y.z extended stable tree
which can be found at:

 http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-3.5.y-queue

If you, or anyone else, feels it should not be added to this tree,
please reply to this email.

For more information about the 3.5.y.z tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable

Thanks.
-Herton

------

From 31850224192469af970b7101df540acaf50d1081 Mon Sep 17 00:00:00 2001
From: Laura Abbott
Date: Fri, 11 Jan 2013 14:31:51 -0800
Subject: [PATCH] mm: use aligned zone start for pfn_to_bitidx calculation

commit c060f943d0929f3e429c5d9522290584f6281d6e upstream.

The current calculation in pfn_to_bitidx assumes that
(pfn - zone->zone_start_pfn) >> pageblock_order will return the same bit
for all pfn in a pageblock. If zone_start_pfn is not aligned to
pageblock_nr_pages, this may not always be correct.

Consider the following with pageblock order = 10, zone start 2MB:

	pfn     | pfn - zone start | (pfn - zone start) >> page block order
	---------------------------------------------------------------------
	0x26000 | 0x25e00          | 0x97
	0x26100 | 0x25f00          | 0x97
	0x26200 | 0x26000          | 0x98
	0x26300 | 0x26100          | 0x98

This means that calling {get,set}_pageblock_migratetype on a single
page will not set the migratetype for the full block. Fix this by
rounding down zone_start_pfn when doing the bitidx calculation.

For our use case, the effects of this bug were mostly tied to the fact
that CMA allocations would either take a long time or fail to happen.
Depending on the driver using CMA, this could result in anything from
visual glitches to application failures.

Signed-off-by: Laura Abbott
Acked-by: Mel Gorman
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Herton Ronaldo Krzesinski
---
 mm/page_alloc.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--
1.7.9.5

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9bd526c..007bf3b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5405,7 +5405,7 @@ static inline int pfn_to_bitidx(struct zone *zone, unsigned long pfn)
 	pfn &= (PAGES_PER_SECTION-1);
 	return (pfn >> pageblock_order) * NR_PAGEBLOCK_BITS;
 #else
-	pfn = pfn - zone->zone_start_pfn;
+	pfn = pfn - round_down(zone->zone_start_pfn, pageblock_nr_pages);
 	return (pfn >> pageblock_order) * NR_PAGEBLOCK_BITS;
 #endif /* CONFIG_SPARSEMEM */
 }
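------

For illustration only (not part of the patch): below is a minimal user-space
C sketch that reproduces the table from the commit message. It hard-codes
the example values (pageblock order 10, zone start pfn 0x200, i.e. 2MB with
4KB pages) and a simplified round_down(); the NR_PAGEBLOCK_BITS
multiplication done by the real pfn_to_bitidx() is left out so the block
index comparison stays obvious.

#include <stdio.h>

/* Example values from the commit message; not taken from a real zone. */
#define PAGEBLOCK_ORDER		10
#define PAGEBLOCK_NR_PAGES	(1UL << PAGEBLOCK_ORDER)

/* Simplified stand-in for the kernel's round_down() on power-of-two sizes. */
#define round_down(x, y)	((x) & ~((y) - 1))

/* Old calculation: offset from the (possibly unaligned) zone start. */
static unsigned long bitidx_old(unsigned long pfn, unsigned long zone_start_pfn)
{
	return (pfn - zone_start_pfn) >> PAGEBLOCK_ORDER;
}

/* Fixed calculation: align the zone start down to a pageblock boundary first. */
static unsigned long bitidx_new(unsigned long pfn, unsigned long zone_start_pfn)
{
	return (pfn - round_down(zone_start_pfn, PAGEBLOCK_NR_PAGES)) >> PAGEBLOCK_ORDER;
}

int main(void)
{
	unsigned long zone_start_pfn = 0x200;	/* 2MB with 4KB pages: not pageblock aligned */
	unsigned long pfns[] = { 0x26000, 0x26100, 0x26200, 0x26300 };

	for (int i = 0; i < 4; i++)
		printf("pfn 0x%lx: old index 0x%lx, fixed index 0x%lx\n",
		       pfns[i], bitidx_old(pfns[i], zone_start_pfn),
		       bitidx_new(pfns[i], zone_start_pfn));
	return 0;
}

With the old formula the four pfns, which all belong to the same 1024-page
pageblock, land on two different indices (0x97 and 0x98); with the rounded
zone start they all map to 0x98, which is why a single
{get,set}_pageblock_migratetype call then covers the whole block.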