From patchwork Tue Jun 24 22:48:02 2014
X-Patchwork-Submitter: Michal Nazarewicz
X-Patchwork-Id: 363758
From: Michal Nazarewicz
To: Mark Salter, Joonsoo Kim, Marek Szyprowski
Subject: [RFC] mm: cma: move init_cma_reserved_pageblock to cma.c
Date: Wed, 25 Jun 2014 00:48:02 +0200
Message-Id: <1403650082-10056-1-git-send-email-mina86@mina86.com>
X-Mailer: git-send-email 2.0.0.526.g5318336
Cc: Michal Nazarewicz, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org

With the [f495d26: “generalize CMA reserved area management functionality”]
patch, CMA now has its own place under the mm directory, so there is no need
to shoehorn highly CMA-specific functions into page_alloc.c.

As such, move init_cma_reserved_pageblock from mm/page_alloc.c to mm/cma.c,
rename it to cma_init_reserved_pageblock and refactor it a little.  Most
importantly, if a PFN failing pfn_valid(pfn) is encountered, return -EINVAL
instead of warning and trying to continue initialising the area.  It is not
clear, to me at least, what good comes from continuing to work on a PFN that
is known to be invalid.

Signed-off-by: Michal Nazarewicz
Acked-by: Joonsoo Kim
---
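Reviewer note (kept below the --- so it does not become part of the commit
message): the old page_alloc.c helper needed two branches depending on
whether pageblock_order >= MAX_ORDER, while the rewritten function folds both
cases into a single loop by clamping the order it frees at.  The following is
a rough standalone userspace sketch of that arithmetic, not kernel code;
MAX_ORDER and the pageblock orders used are made-up example values, the real
ones are architecture dependent:

    /*
     * Userspace sketch only: shows how the clamped order lets one loop
     * free a pageblock whether or not pageblock_order fits under MAX_ORDER.
     * MAX_ORDER and the pageblock orders below are invented for the example.
     */
    #include <stdio.h>

    #define MAX_ORDER           11
    #define MAX_ORDER_NR_PAGES  (1UL << (MAX_ORDER - 1))

    static void show_split(unsigned int pageblock_order)
    {
            unsigned long pageblock_nr_pages = 1UL << pageblock_order;
            unsigned int order = pageblock_order < MAX_ORDER - 1
                                 ? pageblock_order : MAX_ORDER - 1;
            unsigned long nr_pages = pageblock_nr_pages < MAX_ORDER_NR_PAGES
                                     ? pageblock_nr_pages : MAX_ORDER_NR_PAGES;
            unsigned long i, calls = 0;

            /* Same loop shape as the patch: hand back nr_pages pages per call. */
            for (i = pageblock_nr_pages; i; i -= nr_pages)
                    calls++;

            printf("pageblock_order=%2u -> %lu call(s) to __free_pages(p, %u)\n",
                   pageblock_order, calls, order);
    }

    int main(void)
    {
            show_split(9);   /* pageblock fits under MAX_ORDER: a single call */
            show_split(13);  /* huge pages: split into MAX_ORDER - 1 chunks   */
            return 0;
    }

With pageblock_order = 9 the loop makes a single __free_pages() call at
order 9, matching the old else branch; with pageblock_order = 13 it makes
eight calls at order MAX_ORDER - 1, matching the old
pageblock_order >= MAX_ORDER branch.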
 include/linux/gfp.h |  3 --
 mm/cma.c            | 85 +++++++++++++++++++++++++++++++++++++++++------------
 mm/page_alloc.c     | 31 -------------------
 3 files changed, 66 insertions(+), 53 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 5e7219d..107793e9 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -415,9 +415,6 @@ extern int alloc_contig_range(unsigned long start, unsigned long end,
                               unsigned migratetype);
 extern void free_contig_range(unsigned long pfn, unsigned nr_pages);
 
-/* CMA stuff */
-extern void init_cma_reserved_pageblock(struct page *page);
-
 #endif
 
 #endif /* __LINUX_GFP_H */
diff --git a/mm/cma.c b/mm/cma.c
index c17751c..843b2b6 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -28,11 +28,14 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
 #include
 
+#include "internal.h"
+
 struct cma {
         unsigned long base_pfn;
         unsigned long count;
@@ -83,37 +86,81 @@ static void cma_clear_bitmap(struct cma *cma, unsigned long pfn, int count)
         mutex_unlock(&cma->lock);
 }
 
+/* Free whole pageblock and set its migration type to MIGRATE_CMA. */
+static int __init cma_init_reserved_pageblock(struct zone *zone,
+                                              unsigned long pageblock_pfn)
+{
+        unsigned long pfn, nr_pages, i;
+        struct page *page, *p;
+        unsigned order;
+
+        pfn = pageblock_pfn;
+        if (!pfn_valid(pfn))
+                goto invalid_pfn;
+        page = pfn_to_page(pfn);
+
+        p = page;
+        i = pageblock_nr_pages;
+        do {
+                if (!pfn_valid(pfn))
+                        goto invalid_pfn;
+
+                /*
+                 * alloc_contig_range requires the pfn range specified to be
+                 * in the same zone.  Make this simple by forcing the entire
+                 * CMA resv range to be in the same zone.
+                 */
+                if (page_zone(p) != zone) {
+                        pr_err("pfn %lu belongs to %s, expecting %s\n",
+                               pfn, page_zone(p)->name, zone->name);
+                        return -EINVAL;
+                }
+
+                __ClearPageReserved(p);
+                set_page_count(p, 0);
+        } while (++p, ++pfn, --i);
+
+        /* Return all the pages to buddy allocator as MIGRATE_CMA. */
+        set_pageblock_migratetype(page, MIGRATE_CMA);
+
+        order = min_t(unsigned, pageblock_order, MAX_ORDER - 1);
+        nr_pages = min_t(unsigned long, pageblock_nr_pages, MAX_ORDER_NR_PAGES);
+
+        p = page;
+        i = pageblock_nr_pages;
+        do {
+                set_page_refcounted(p);
+                __free_pages(p, order);
+                p += nr_pages;
+        } while (i -= nr_pages);
+
+        adjust_managed_page_count(page, pageblock_nr_pages);
+        return 0;
+
+invalid_pfn:
+        pr_err("invalid pfn: %lu\n", pfn);
+        return -EINVAL;
+}
+
 static int __init cma_activate_area(struct cma *cma)
 {
         int bitmap_size = BITS_TO_LONGS(cma_bitmap_maxno(cma)) * sizeof(long);
-        unsigned long base_pfn = cma->base_pfn, pfn = base_pfn;
         unsigned i = cma->count >> pageblock_order;
+        unsigned long pfn = cma->base_pfn;
         struct zone *zone;
 
-        cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
+        if (WARN_ON(!pfn_valid(pfn)))
+                return -EINVAL;
 
+        cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
         if (!cma->bitmap)
                 return -ENOMEM;
 
-        WARN_ON_ONCE(!pfn_valid(pfn));
         zone = page_zone(pfn_to_page(pfn));
 
         do {
-                unsigned j;
-
-                base_pfn = pfn;
-                for (j = pageblock_nr_pages; j; --j, pfn++) {
-                        WARN_ON_ONCE(!pfn_valid(pfn));
-                        /*
-                         * alloc_contig_range requires the pfn range
-                         * specified to be in the same zone. Make this
-                         * simple by forcing the entire CMA resv range
-                         * to be in the same zone.
-                         */
-                        if (page_zone(pfn_to_page(pfn)) != zone)
-                                goto err;
-                }
-                init_cma_reserved_pageblock(pfn_to_page(base_pfn));
+                if (cma_init_reserved_pageblock(zone, pfn) < 0)
+                        goto err;
+                pfn += pageblock_nr_pages;
         } while (--i);
 
         mutex_init(&cma->lock);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fef9614..d47f83f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -804,37 +804,6 @@ void __init __free_pages_bootmem(struct page *page, unsigned int order)
         __free_pages(page, order);
 }
 
-#ifdef CONFIG_CMA
-/* Free whole pageblock and set its migration type to MIGRATE_CMA. */
-void __init init_cma_reserved_pageblock(struct page *page)
-{
-        unsigned i = pageblock_nr_pages;
-        struct page *p = page;
-
-        do {
-                __ClearPageReserved(p);
-                set_page_count(p, 0);
-        } while (++p, --i);
-
-        set_pageblock_migratetype(page, MIGRATE_CMA);
-
-        if (pageblock_order >= MAX_ORDER) {
-                i = pageblock_nr_pages;
-                p = page;
-                do {
-                        set_page_refcounted(p);
-                        __free_pages(p, MAX_ORDER - 1);
-                        p += MAX_ORDER_NR_PAGES;
-                } while (i -= MAX_ORDER_NR_PAGES);
-        } else {
-                set_page_refcounted(page);
-                __free_pages(page, pageblock_order);
-        }
-
-        adjust_managed_page_count(page, pageblock_nr_pages);
-}
-#endif
-
 /*
  * The order of subdivision here is critical for the IO subsystem.
  * Please do not alter this order without good reasons and regression
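
A second reviewer note, appended after the diff: the behavioural change
around pfn_valid() can be pictured with a trivial userspace model.  The
pfn_valid() stub and the PFN numbers below are invented purely for
illustration; the point is only that the new code gives up on the whole
pageblock instead of warning and initialising the remaining pages.

    /*
     * Userspace sketch, not kernel code: models the new fail-fast handling
     * of an invalid PFN.  The fake pfn_valid() is for illustration only.
     */
    #include <stdio.h>
    #include <stdbool.h>
    #include <errno.h>

    static bool pfn_valid(unsigned long pfn)
    {
            return pfn != 3;        /* pretend PFN 3 has no memmap entry */
    }

    static int init_pageblock(unsigned long first_pfn, unsigned long nr_pages)
    {
            unsigned long pfn;

            for (pfn = first_pfn; pfn < first_pfn + nr_pages; pfn++)
                    if (!pfn_valid(pfn))
                            return -EINVAL; /* new behaviour: bail out early */
            return 0;
    }

    int main(void)
    {
            printf("pageblock at pfn 0: %d\n", init_pageblock(0, 2));
            printf("pageblock at pfn 2: %d\n", init_pageblock(2, 2));
            return 0;
    }

Compiled and run, the first call returns 0 and the second returns -22
(-EINVAL), which in the patch makes cma_activate_area() bail out through its
error path rather than carry on with a half-initialised area.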