From patchwork Fri Jun 16 07:11:43 2017
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 776585
From: Christoph Hellwig
To: vgupta@synopsys.com, linux-snps-arc@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Subject: [PATCH] arc: implement DMA_ATTR_NO_KERNEL_MAPPING
Date: Fri, 16 Jun 2017 09:11:43 +0200
Message-Id: <20170616071143.16878-1-hch@lst.de>
List-Id: Linux on Synopsys ARC Processors

This way, allocations like the NVMe host memory buffer (HMB) don't consume
ioremap space.

Signed-off-by: Christoph Hellwig
---
Note: compile-tested only; I stumbled over this while researching DMA API
quirks for HMB support.

 arch/arc/mm/dma.c | 40 ++++++++++++++++++++++++++++------------
 1 file changed, 28 insertions(+), 12 deletions(-)

diff --git a/arch/arc/mm/dma.c b/arch/arc/mm/dma.c
index 2a07e6ecafbd..d8999ac88879 100644
--- a/arch/arc/mm/dma.c
+++ b/arch/arc/mm/dma.c
@@ -28,13 +28,22 @@ static void *arc_dma_alloc(struct device *dev, size_t size,
 	struct page *page;
 	phys_addr_t paddr;
 	void *kvaddr;
-	int need_coh = 1, need_kvaddr = 0;
+	bool need_cache_sync = true, need_kvaddr = false;
 
 	page = alloc_pages(gfp, order);
 	if (!page)
 		return NULL;
 
 	/*
+	 * No-kernel-mapping memory doesn't need a kernel virtual address.
+	 * But do the initial cache flush anyway so that we don't write data
+	 * back from a previous mapping into the now device-owned memory.
+	 */
+	if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) {
+		need_cache_sync = true;
+		need_kvaddr = false;
+
+	/*
 	 * IOC relies on all data (even coherent DMA data) being in cache
 	 * Thus allocate normal cached memory
 	 *
@@ -45,17 +54,19 @@ static void *arc_dma_alloc(struct device *dev, size_t size,
 	 *   -For coherent data, Read/Write to buffers terminate early in cache
 	 *   (vs. always going to memory - thus are faster)
 	 */
-	if ((is_isa_arcv2() && ioc_enable) ||
-	    (attrs & DMA_ATTR_NON_CONSISTENT))
-		need_coh = 0;
+	} else if ((is_isa_arcv2() && ioc_enable) ||
+		   (attrs & DMA_ATTR_NON_CONSISTENT)) {
+		need_cache_sync = false;
+		need_kvaddr = true;
 
 	/*
 	 * - A coherent buffer needs MMU mapping to enforce non-cachability
 	 * - A highmem page needs a virtual handle (hence MMU mapping)
 	 *   independent of cachability
 	 */
-	if (PageHighMem(page) || need_coh)
-		need_kvaddr = 1;
+	} else if (PageHighMem(page)) {
+		need_kvaddr = true;
+	}
 
 	/* This is linear addr (0x8000_0000 based) */
 	paddr = page_to_phys(page);
@@ -83,7 +94,7 @@ static void *arc_dma_alloc(struct device *dev, size_t size,
 	 * Currently flush_cache_vmap nukes the L1 cache completely which
 	 * will be optimized as a separate commit
 	 */
-	if (need_coh)
+	if (need_cache_sync)
 		dma_cache_wback_inv(paddr, size);
 
 	return kvaddr;
@@ -93,14 +104,19 @@ static void arc_dma_free(struct device *dev, size_t size, void *vaddr,
 		dma_addr_t dma_handle, unsigned long attrs)
 {
 	phys_addr_t paddr = plat_dma_to_phys(dev, dma_handle);
 	struct page *page = virt_to_page(paddr);
-	int is_non_coh = 1;
 
-	is_non_coh = (attrs & DMA_ATTR_NON_CONSISTENT) ||
-			(is_isa_arcv2() && ioc_enable);
+	if (!(attrs & DMA_ATTR_NO_KERNEL_MAPPING)) {
+		bool need_iounmap = true;
+
+		if (!PageHighMem(page) &&
+		    ((is_isa_arcv2() && ioc_enable) ||
+		     (attrs & DMA_ATTR_NON_CONSISTENT)))
+			need_iounmap = false;
 
-	if (PageHighMem(page) || !is_non_coh)
-		iounmap((void __force __iomem *)vaddr);
+		if (need_iounmap)
+			iounmap((void __force __iomem *)vaddr);
+	}
 
 	__free_pages(page, get_order(size));
 }
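
For readers unfamiliar with the attribute, here is a minimal sketch of how a
caller could request such a device-only allocation through the generic DMA
API. The helper names and the hmb_* variables below are illustrative
assumptions, not part of this patch or of the real NVMe HMB code; only
dma_alloc_attrs()/dma_free_attrs() and DMA_ATTR_NO_KERNEL_MAPPING come from
the kernel's DMA API.

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/gfp.h>

/*
 * Sketch: allocate a buffer the CPU never touches (device-owned memory).
 * With DMA_ATTR_NO_KERNEL_MAPPING the returned pointer is only an opaque
 * cookie; it must not be dereferenced, only handed back to dma_free_attrs().
 */
static void *hmb_cookie;	/* opaque cookie, do not dereference */
static dma_addr_t hmb_dma;

static int example_alloc_device_only_buffer(struct device *dev, size_t size)
{
	hmb_cookie = dma_alloc_attrs(dev, size, &hmb_dma, GFP_KERNEL,
				     DMA_ATTR_NO_KERNEL_MAPPING);
	if (!hmb_cookie)
		return -ENOMEM;

	/* hand hmb_dma to the device, e.g. via a controller command */
	return 0;
}

static void example_free_device_only_buffer(struct device *dev, size_t size)
{
	dma_free_attrs(dev, size, hmb_cookie, hmb_dma,
		       DMA_ATTR_NO_KERNEL_MAPPING);
}

With the patch above, such an allocation on ARC skips the ioremap mapping
entirely and only pays for the initial cache writeback/invalidate.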