From patchwork Fri Mar 25 06:50:06 2011
X-Patchwork-Submitter: Benjamin Herrenschmidt
X-Patchwork-Id: 88334
X-Patchwork-Delegate: benh@kernel.crashing.org
Subject: [PATCH] powerpc: Implement dma_mmap_coherent()
From: Benjamin Herrenschmidt
To: linuxppc-dev
Cc: Takashi Iwai
Date: Fri, 25 Mar 2011 17:50:06 +1100
Message-ID: <1301035806.2402.470.camel@pasglop>
X-Mailer: Evolution 2.30.3
List-Id: Linux on PowerPC Developers Mail List

This is used by ALSA to mmap buffers allocated with dma_alloc_coherent()
into userspace. We need a special variant to handle machines with
non-coherent DMA, as those buffers have "special" virtual addresses and
require non-cacheable mappings.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---
Dunno if anybody with CONFIG_NOT_COHERENT_CACHE has some audio device
that uses DMA buffers (i.e. not usb-audio) and wants to try that out...
it should fix a long-standing problem. Two illustrative usage sketches
(not part of the patch) are appended after the diff.

 arch/powerpc/include/asm/dma-mapping.h |    7 +++++++
 arch/powerpc/kernel/dma.c              |   18 ++++++++++++++++++
 arch/powerpc/mm/dma-noncoherent.c      |   20 ++++++++++++++++++++
 3 files changed, 45 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/dma-mapping.h b/arch/powerpc/include/asm/dma-mapping.h
index 6d2416a..41e0eb7 100644
--- a/arch/powerpc/include/asm/dma-mapping.h
+++ b/arch/powerpc/include/asm/dma-mapping.h
@@ -42,6 +42,7 @@ extern void __dma_free_coherent(size_t size, void *vaddr);
 extern void __dma_sync(void *vaddr, size_t size, int direction);
 extern void __dma_sync_page(struct page *page, unsigned long offset,
                             size_t size, int direction);
+extern unsigned long __dma_get_coherent_pfn(unsigned long cpu_addr);
 
 #else /* ! CONFIG_NOT_COHERENT_CACHE */
 /*
@@ -52,6 +53,7 @@ extern void __dma_sync_page(struct page *page, unsigned long offset,
 #define __dma_free_coherent(size, addr)         ((void)0)
 #define __dma_sync(addr, size, rw)              ((void)0)
 #define __dma_sync_page(pg, off, sz, rw)        ((void)0)
+#define __dma_get_coherent_pfn(cpu_addr)        (0)
 
 #endif /* ! CONFIG_NOT_COHERENT_CACHE */
 
@@ -198,6 +200,11 @@ static inline phys_addr_t dma_to_phys(struct device *dev, dma_addr_t daddr)
 #define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
 #define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
 
+extern int dma_mmap_coherent(struct device *, struct vm_area_struct *,
+                             void *, dma_addr_t, size_t);
+#define ARCH_HAS_DMA_MMAP_COHERENT
+
+
 static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
                 enum dma_data_direction direction)
 {
diff --git a/arch/powerpc/kernel/dma.c b/arch/powerpc/kernel/dma.c
index cf02cad..0f52235 100644
--- a/arch/powerpc/kernel/dma.c
+++ b/arch/powerpc/kernel/dma.c
@@ -179,3 +179,21 @@ static int __init dma_init(void)
         return 0;
 }
 fs_initcall(dma_init);
+
+int dma_mmap_coherent(struct device *dev, struct vm_area_struct *vma,
+                      void *cpu_addr, dma_addr_t handle, size_t size)
+{
+        unsigned long pfn;
+
+#ifdef CONFIG_NOT_COHERENT_CACHE
+        vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+        pfn = __dma_get_coherent_pfn((unsigned long)cpu_addr);
+#else
+        pfn = page_to_pfn(virt_to_page(cpu_addr));
+#endif
+        return remap_pfn_range(vma, vma->vm_start,
+                               pfn + vma->vm_pgoff,
+                               vma->vm_end - vma->vm_start,
+                               vma->vm_page_prot);
+}
+
diff --git a/arch/powerpc/mm/dma-noncoherent.c b/arch/powerpc/mm/dma-noncoherent.c
index 757c0be..174b10a 100644
--- a/arch/powerpc/mm/dma-noncoherent.c
+++ b/arch/powerpc/mm/dma-noncoherent.c
@@ -399,3 +399,23 @@ void __dma_sync_page(struct page *page, unsigned long offset,
 #endif
 }
 EXPORT_SYMBOL(__dma_sync_page);
+
+/*
+ * Return the PFN for a given cpu virtual address returned by
+ * __dma_alloc_coherent. This is used by dma_mmap_coherent()
+ */
+unsigned long __dma_get_coherent_pfn(unsigned long cpu_addr)
+{
+        /* This should always be populated, so we don't test every
+         * level. If that fails, we'll have a nice crash which
+         * will be as good as a BUG_ON()
+         */
+        pgd_t *pgd = pgd_offset_k(cpu_addr);
+        pud_t *pud = pud_offset(pgd, cpu_addr);
+        pmd_t *pmd = pmd_offset(pud, cpu_addr);
+        pte_t *ptep = pte_offset_kernel(pmd, cpu_addr);
+
+        if (pte_none(*ptep) || !pte_present(*ptep))
+                return 0;
+        return pte_pfn(*ptep);
+}
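
Usage sketch 1 (not part of the patch, purely illustrative): how a driver's
mmap() file operation could hand a buffer obtained from dma_alloc_coherent()
to userspace through the new helper. Everything named foo_* below is made up
for the example; only dma_mmap_coherent() and its argument order come from
the patch above.

#include <linux/dma-mapping.h>
#include <linux/fs.h>
#include <linux/mm.h>

/* Hypothetical per-device state: buf_cpu/buf_dma were returned by
 * dma_alloc_coherent(dev, buf_size, &buf_dma, GFP_KERNEL) at probe time. */
struct foo_dev {
        struct device   *dev;
        void            *buf_cpu;       /* kernel virtual address */
        dma_addr_t      buf_dma;        /* device-visible address */
        size_t          buf_size;
};

static int foo_mmap(struct file *file, struct vm_area_struct *vma)
{
        struct foo_dev *foo = file->private_data;
        size_t len = vma->vm_end - vma->vm_start;

        /* Refuse mappings that run past the end of the coherent buffer. */
        if (len + (vma->vm_pgoff << PAGE_SHIFT) > foo->buf_size)
                return -EINVAL;

        /* The helper picks the right PFN and page protection for both the
         * cache-coherent and the CONFIG_NOT_COHERENT_CACHE case. */
        return dma_mmap_coherent(foo->dev, vma, foo->buf_cpu,
                                 foo->buf_dma, foo->buf_size);
}

On coherent machines this boils down to a cacheable remap_pfn_range() of the
buffer's pages; with CONFIG_NOT_COHERENT_CACHE the helper switches the VMA to
pgprot_noncached() and resolves the PFN through the kernel page tables, since
the buffer lives in a special uncached virtual mapping rather than the linear
map.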
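
Usage sketch 2, the matching userspace side, equally illustrative (/dev/foo
and the 64 KiB length are made-up values): the application simply mmap()s the
character device and touches the shared DMA buffer directly.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        const size_t len = 64 * 1024;           /* made-up buffer size */
        int fd = open("/dev/foo", O_RDWR);      /* hypothetical device node */

        if (fd < 0) {
                perror("open");
                return EXIT_FAILURE;
        }

        unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
                perror("mmap");
                close(fd);
                return EXIT_FAILURE;
        }

        p[0] = 0xaa;    /* writes land straight in the DMA buffer */

        munmap(p, len);
        close(fd);
        return EXIT_SUCCESS;
}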