From patchwork Mon Jul 27 21:08:46 2009
X-Patchwork-Submitter: Becky Bruce
X-Patchwork-Id: 30281
From: Becky Bruce
To: FUJITA Tomonori
Cc: linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 5/5] powerpc: use asm-generic/dma-mapping-common.h
Date: Mon, 27 Jul 2009 16:08:46 -0500
In-Reply-To: <1248405855-15546-6-git-send-email-fujita.tomonori@lab.ntt.co.jp>
References: <1248405855-15546-1-git-send-email-fujita.tomonori@lab.ntt.co.jp>
 <1248405855-15546-6-git-send-email-fujita.tomonori@lab.ntt.co.jp>

On Jul 23, 2009, at 10:24 PM, FUJITA Tomonori wrote:
> Signed-off-by: FUJITA Tomonori

Fujita,

Since you're removing all the uses of it, you should probably remove
PPC_NEED_DMA_SYNC_OPS from arch/powerpc/Kconfig:

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 0603b6c..fb3f4ff 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -307,10 +307,6 @@ config SWIOTLB
 	  platforms where the size of a physical address is larger than
 	  the bus address. Not all platforms support this.
 
-config PPC_NEED_DMA_SYNC_OPS
-	def_bool y
-	depends on (NOT_COHERENT_CACHE || SWIOTLB)
-
 config HOTPLUG_CPU
 	bool "Support for enabling/disabling CPUs"
 	depends on SMP && HOTPLUG && EXPERIMENTAL && (PPC_PSERIES || PPC_PMAC)

Otherwise, this looks good to me.  I also think you want an ACK from Ben -
making this switch does add slight overhead to platforms that don't need
sync ops, but I think it's worth it.  IIRC, it was Ben who asked for the
optimization of NEED_DMA_SYNC_OPS, so I'd like him to weigh in here.
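To make the "slight overhead" concrete: with the generic header, a sync call
on a cache-coherent platform still loads the ops table and tests a function
pointer before returning, instead of compiling away to an empty inline like
the !CONFIG_PPC_NEED_DMA_SYNC_OPS stubs quoted below.  Roughly like this
(a sketch from memory of the 2.6.31-era asm-generic/dma-mapping-common.h,
so the exact body may differ slightly):

static inline void dma_sync_single_for_cpu(struct device *dev,
					   dma_addr_t addr, size_t size,
					   enum dma_data_direction dir)
{
	struct dma_map_ops *ops = get_dma_ops(dev);

	/* every call pays for the indirection, even when there is
	 * nothing to sync on this platform */
	BUG_ON(!valid_dma_direction(dir));
	if (ops->sync_single_for_cpu)
		ops->sync_single_for_cpu(dev, addr, size, dir);
	debug_dma_sync_single_for_cpu(dev, addr, size, dir);
}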
Cheers,
Becky

>
> ---
>  arch/powerpc/Kconfig                   |    2 +-
>  arch/powerpc/include/asm/dma-mapping.h |  242 +-------------------------------
>  2 files changed, 7 insertions(+), 237 deletions(-)
>
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index d00131c..0603b6c 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -120,7 +120,7 @@ config PPC
>  	select HAVE_KRETPROBES
>  	select HAVE_ARCH_TRACEHOOK
>  	select HAVE_LMB
> -	select HAVE_DMA_ATTRS if PPC64
> +	select HAVE_DMA_ATTRS
>  	select USE_GENERIC_SMP_HELPERS if SMP
>  	select HAVE_OPROFILE
>  	select HAVE_SYSCALL_WRAPPERS if PPC64
> diff --git a/arch/powerpc/include/asm/dma-mapping.h b/arch/powerpc/include/asm/dma-mapping.h
> index 8ca2b51..91217e4 100644
> --- a/arch/powerpc/include/asm/dma-mapping.h
> +++ b/arch/powerpc/include/asm/dma-mapping.h
> @@ -14,6 +14,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
>
> @@ -89,6 +90,11 @@ static inline void set_dma_ops(struct device *dev, struct dma_map_ops *ops)
>  	dev->archdata.dma_ops = ops;
>  }
>
> +/* this will be removed soon */
> +#define flush_write_buffers()
> +
> +#include 
> +
>  static inline int dma_supported(struct device *dev, u64 mask)
>  {
>  	struct dma_map_ops *dma_ops = get_dma_ops(dev);
> @@ -117,87 +123,6 @@ static inline int dma_set_mask(struct device *dev, u64 dma_mask)
>  	return 0;
>  }
>
> -/*
> - * map_/unmap_single actually call through to map/unmap_page now that all the
> - * dma_map_ops have been converted over.  We just have to get the page and
> - * offset to pass through to map_page
> - */
> -static inline dma_addr_t dma_map_single_attrs(struct device *dev,
> -					      void *cpu_addr,
> -					      size_t size,
> -					      enum dma_data_direction direction,
> -					      struct dma_attrs *attrs)
> -{
> -	struct dma_map_ops *dma_ops = get_dma_ops(dev);
> -
> -	BUG_ON(!dma_ops);
> -
> -	return dma_ops->map_page(dev, virt_to_page(cpu_addr),
> -				 (unsigned long)cpu_addr % PAGE_SIZE, size,
> -				 direction, attrs);
> -}
> -
> -static inline void dma_unmap_single_attrs(struct device *dev,
> -					  dma_addr_t dma_addr,
> -					  size_t size,
> -					  enum dma_data_direction direction,
> -					  struct dma_attrs *attrs)
> -{
> -	struct dma_map_ops *dma_ops = get_dma_ops(dev);
> -
> -	BUG_ON(!dma_ops);
> -
> -	dma_ops->unmap_page(dev, dma_addr, size, direction, attrs);
> -}
> -
> -static inline dma_addr_t dma_map_page_attrs(struct device *dev,
> -					     struct page *page,
> -					     unsigned long offset, size_t size,
> -					     enum dma_data_direction direction,
> -					     struct dma_attrs *attrs)
> -{
> -	struct dma_map_ops *dma_ops = get_dma_ops(dev);
> -
> -	BUG_ON(!dma_ops);
> -
> -	return dma_ops->map_page(dev, page, offset, size, direction, attrs);
> -}
> -
> -static inline void dma_unmap_page_attrs(struct device *dev,
> -					dma_addr_t dma_address,
> -					size_t size,
> -					enum dma_data_direction direction,
> -					struct dma_attrs *attrs)
> -{
> -	struct dma_map_ops *dma_ops = get_dma_ops(dev);
> -
> -	BUG_ON(!dma_ops);
> -
> -	dma_ops->unmap_page(dev, dma_address, size, direction, attrs);
> -}
> -
> -static inline int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
> -				   int nents, enum dma_data_direction direction,
> -				   struct dma_attrs *attrs)
> -{
> -	struct dma_map_ops *dma_ops = get_dma_ops(dev);
> -
> -	BUG_ON(!dma_ops);
> -	return dma_ops->map_sg(dev, sg, nents, direction, attrs);
> -}
> -
> -static inline void dma_unmap_sg_attrs(struct device *dev,
> -				      struct scatterlist *sg,
> -				      int nhwentries,
> -				      enum dma_data_direction direction,
> -				      struct dma_attrs *attrs)
> -{
> -	struct dma_map_ops *dma_ops = get_dma_ops(dev);
> -
> -	BUG_ON(!dma_ops);
> -	dma_ops->unmap_sg(dev, sg, nhwentries, direction, attrs);
> -}
> -
>  static inline void *dma_alloc_coherent(struct device *dev, size_t size,
>  				       dma_addr_t *dma_handle, gfp_t flag)
>  {
> @@ -216,161 +141,6 @@ static inline void dma_free_coherent(struct device *dev, size_t size,
>  	dma_ops->free_coherent(dev, size, cpu_addr, dma_handle);
>  }
>
> -static inline dma_addr_t dma_map_single(struct device *dev, void *cpu_addr,
> -					size_t size,
> -					enum dma_data_direction direction)
> -{
> -	return dma_map_single_attrs(dev, cpu_addr, size, direction, NULL);
> -}
> -
> -static inline void dma_unmap_single(struct device *dev, dma_addr_t dma_addr,
> -				    size_t size,
> -				    enum dma_data_direction direction)
> -{
> -	dma_unmap_single_attrs(dev, dma_addr, size, direction, NULL);
> -}
> -
> -static inline dma_addr_t dma_map_page(struct device *dev, struct page *page,
> -				      unsigned long offset, size_t size,
> -				      enum dma_data_direction direction)
> -{
> -	return dma_map_page_attrs(dev, page, offset, size, direction, NULL);
> -}
> -
> -static inline void dma_unmap_page(struct device *dev, dma_addr_t dma_address,
> -				  size_t size,
> -				  enum dma_data_direction direction)
> -{
> -	dma_unmap_page_attrs(dev, dma_address, size, direction, NULL);
> -}
> -
> -static inline int dma_map_sg(struct device *dev, struct scatterlist *sg,
> -			     int nents, enum dma_data_direction direction)
> -{
> -	return dma_map_sg_attrs(dev, sg, nents, direction, NULL);
> -}
> -
> -static inline void dma_unmap_sg(struct device *dev, struct scatterlist *sg,
> -				int nhwentries,
> -				enum dma_data_direction direction)
> -{
> -	dma_unmap_sg_attrs(dev, sg, nhwentries, direction, NULL);
> -}
> -
> -#ifdef CONFIG_PPC_NEED_DMA_SYNC_OPS
> -static inline void dma_sync_single_for_cpu(struct device *dev,
> -		dma_addr_t dma_handle, size_t size,
> -		enum dma_data_direction direction)
> -{
> -	struct dma_map_ops *dma_ops = get_dma_ops(dev);
> -
> -	BUG_ON(!dma_ops);
> -
> -	if (dma_ops->sync_single_range_for_cpu)
> -		dma_ops->sync_single_range_for_cpu(dev, dma_handle, 0,
> -					size, direction);
> -}
> -
> -static inline void dma_sync_single_for_device(struct device *dev,
> -		dma_addr_t dma_handle, size_t size,
> -		enum dma_data_direction direction)
> -{
> -	struct dma_map_ops *dma_ops = get_dma_ops(dev);
> -
> -	BUG_ON(!dma_ops);
> -
> -	if (dma_ops->sync_single_range_for_device)
> -		dma_ops->sync_single_range_for_device(dev, dma_handle,
> -					0, size, direction);
> -}
> -
> -static inline void dma_sync_sg_for_cpu(struct device *dev,
> -		struct scatterlist *sgl, int nents,
> -		enum dma_data_direction direction)
> -{
> -	struct dma_map_ops *dma_ops = get_dma_ops(dev);
> -
> -	BUG_ON(!dma_ops);
> -
> -	if (dma_ops->sync_sg_for_cpu)
> -		dma_ops->sync_sg_for_cpu(dev, sgl, nents, direction);
> -}
> -
> -static inline void dma_sync_sg_for_device(struct device *dev,
> -		struct scatterlist *sgl, int nents,
> -		enum dma_data_direction direction)
> -{
> -	struct dma_map_ops *dma_ops = get_dma_ops(dev);
> -
> -	BUG_ON(!dma_ops);
> -
> -	if (dma_ops->sync_sg_for_device)
> -		dma_ops->sync_sg_for_device(dev, sgl, nents, direction);
> -}
> -
> -static inline void dma_sync_single_range_for_cpu(struct device *dev,
> -		dma_addr_t dma_handle, unsigned long offset, size_t size,
> -		enum dma_data_direction direction)
> -{
> -	struct dma_map_ops *dma_ops = get_dma_ops(dev);
> -
> -	BUG_ON(!dma_ops);
> -
> -	if (dma_ops->sync_single_range_for_cpu)
> -		dma_ops->sync_single_range_for_cpu(dev, dma_handle,
> -				offset, size, direction);
> -}
> -
> -static inline void dma_sync_single_range_for_device(struct device *dev,
> -		dma_addr_t dma_handle, unsigned long offset, size_t size,
> -		enum dma_data_direction direction)
> -{
> -	struct dma_map_ops *dma_ops = get_dma_ops(dev);
> -
> -	BUG_ON(!dma_ops);
> -
> -	if (dma_ops->sync_single_range_for_device)
> -		dma_ops->sync_single_range_for_device(dev, dma_handle, offset,
> -				size, direction);
> -}
> -#else /* CONFIG_PPC_NEED_DMA_SYNC_OPS */
> -static inline void dma_sync_single_for_cpu(struct device *dev,
> -		dma_addr_t dma_handle, size_t size,
> -		enum dma_data_direction direction)
> -{
> -}
> -
> -static inline void dma_sync_single_for_device(struct device *dev,
> -		dma_addr_t dma_handle, size_t size,
> -		enum dma_data_direction direction)
> -{
> -}
> -
> -static inline void dma_sync_sg_for_cpu(struct device *dev,
> -		struct scatterlist *sgl, int nents,
> -		enum dma_data_direction direction)
> -{
> -}
> -
> -static inline void dma_sync_sg_for_device(struct device *dev,
> -		struct scatterlist *sgl, int nents,
> -		enum dma_data_direction direction)
> -{
> -}
> -
> -static inline void dma_sync_single_range_for_cpu(struct device *dev,
> -		dma_addr_t dma_handle, unsigned long offset, size_t size,
> -		enum dma_data_direction direction)
> -{
> -}
> -
> -static inline void dma_sync_single_range_for_device(struct device *dev,
> -		dma_addr_t dma_handle, unsigned long offset, size_t size,
> -		enum dma_data_direction direction)
> -{
> -}
> -#endif
> -
>  static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
>  {
>  #ifdef CONFIG_PPC64
> --
> 1.6.0.6
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/