From patchwork Mon Jun 16 05:40:45 2014
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
To: Andrew Morton, "Aneesh Kumar K.V", Marek Szyprowski, Michal Nazarewicz
Cc: Russell King - ARM Linux, kvm@vger.kernel.org, linux-mm@kvack.org,
    Gleb Natapov, Greg Kroah-Hartman, Alexander Graf, kvm-ppc@vger.kernel.org,
    linux-kernel@vger.kernel.org, Minchan Kim, Paul Mackerras, Paolo Bonzini,
    Joonsoo Kim, Zhang Yanfei, linuxppc-dev@lists.ozlabs.org,
    linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 -next 3/9] DMA, CMA: support alignment constraint on CMA region
Date: Mon, 16 Jun 2014 14:40:45 +0900
Message-Id: <1402897251-23639-4-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1402897251-23639-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1402897251-23639-1-git-send-email-iamjoonsoo.kim@lge.com>
X-Patchwork-Id: 359934

PPC KVM's CMA area management needs an alignment constraint on its CMA
region, so support one in preparation for generalizing the CMA area
management functionality. Additionally, add comments explaining why an
alignment constraint is needed on a CMA region.
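The default alignment the patch falls back to is
PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order). The following
stand-alone sketch (not part of the patch) mirrors that computation in
user space; the PAGE_SHIFT, MAX_ORDER, and pageblock_order values are
assumptions matching a typical configuration of the time, not taken
from any particular build:

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)	/* 4 KiB pages */
#define MAX_ORDER	11			/* default buddy limit */
#define pageblock_order	(MAX_ORDER - 1)		/* 10 without huge pages */

#define max(a, b)	((a) > (b) ? (a) : (b))

int main(void)
{
	unsigned long requested = 0;	/* caller passed no constraint */
	unsigned long alignment;

	/* Same expression the patch uses in __dma_contiguous_reserve_area() */
	alignment = max(requested,
			PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order));

	/* Prints 4 MiB: one MAX_ORDER buddy block */
	printf("effective alignment: %lu MiB\n", alignment >> 20);
	return 0;
}

Aligning the region to a whole buddy block this way guarantees that its
first and last pages never share a pageblock with outside memory, which
is exactly the merging hazard the new in-code comment describes.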
v3: fix wrongly spelled word, align_order -> alignment (Minchan)
    clarify the code documentation per Minchan's comment (Minchan)

Acked-by: Michal Nazarewicz
Reviewed-by: Aneesh Kumar K.V
Signed-off-by: Joonsoo Kim

diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index 9021762..5f62c28 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -32,6 +32,7 @@
 #include <linux/swap.h>
 #include <linux/mm_types.h>
 #include <linux/dma-contiguous.h>
+#include <linux/log2.h>
 
 struct cma {
 	unsigned long base_pfn;
@@ -215,17 +216,16 @@ core_initcall(cma_init_reserved_areas);
 
 static int __init __dma_contiguous_reserve_area(phys_addr_t size,
 				phys_addr_t base, phys_addr_t limit,
+				phys_addr_t alignment,
 				struct cma **res_cma, bool fixed)
 {
 	struct cma *cma = &cma_areas[cma_area_count];
-	phys_addr_t alignment;
 	int ret = 0;
 
-	pr_debug("%s(size %lx, base %08lx, limit %08lx)\n", __func__,
-		 (unsigned long)size, (unsigned long)base,
-		 (unsigned long)limit);
+	pr_debug("%s(size %lx, base %08lx, limit %08lx alignment %08lx)\n",
+		__func__, (unsigned long)size, (unsigned long)base,
+		(unsigned long)limit, (unsigned long)alignment);
 
-	/* Sanity checks */
 	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
 		pr_err("Not enough slots for CMA reserved regions!\n");
 		return -ENOSPC;
@@ -234,8 +234,17 @@ static int __init __dma_contiguous_reserve_area(phys_addr_t size,
 	if (!size)
 		return -EINVAL;
 
-	/* Sanitise input arguments */
-	alignment = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
+	if (alignment && !is_power_of_2(alignment))
+		return -EINVAL;
+
+	/*
+	 * Sanitise input arguments.
+	 * Pages both ends in CMA area could be merged into adjacent unmovable
+	 * migratetype page by page allocator's buddy algorithm. In the case,
+	 * you couldn't get a contiguous memory, which is not what we want.
+	 */
+	alignment = max(alignment,
+		(phys_addr_t)PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order));
 	base = ALIGN(base, alignment);
 	size = ALIGN(size, alignment);
 	limit &= ~(alignment - 1);
@@ -299,7 +308,8 @@ int __init dma_contiguous_reserve_area(phys_addr_t size, phys_addr_t base,
 {
 	int ret;
 
-	ret = __dma_contiguous_reserve_area(size, base, limit, res_cma, fixed);
+	ret = __dma_contiguous_reserve_area(size, base, limit, 0,
+						res_cma, fixed);
 	if (ret)
 		return ret;
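As a usage sketch only: __dma_contiguous_reserve_area() is static to
dma-contiguous.c, and this patch keeps the public
dma_contiguous_reserve_area() passing 0 (i.e. "use the default
alignment"), so behaviour is unchanged until a later patch in the
series exposes the constraint to callers such as PPC KVM. A
hypothetical in-file caller with a stricter requirement could look as
follows; the function name and the 16/32 MiB figures are invented for
illustration:

/*
 * Hypothetical example, not in the patch: reserve a 32 MiB CMA region
 * aligned to 16 MiB. SZ_16M/SZ_32M come from <linux/sizes.h>, which
 * dma-contiguous.c already includes.
 */
static int __init example_reserve_aligned(void)
{
	struct cma *cma;
	phys_addr_t align = SZ_16M;	/* must be a power of 2, or -EINVAL */

	/* base = 0, limit = 0, fixed = false: place the region anywhere */
	return __dma_contiguous_reserve_area(SZ_32M, 0, 0, align,
					     &cma, false);
}

Note that any requested alignment weaker than the MAX_ORDER/pageblock
default is silently raised to that default by the max() in the patch,
so passing SZ_16M only takes effect on configurations whose default
alignment is below 16 MiB.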