
dma: add new dma_mapping_ops API sync_page

Message ID: 1222898599.8628.52.camel@pcds-ts102.slac.stanford.edu (mailing list archive)
State: Changes Requested, archived

Commit Message

Remi Machet Oct. 1, 2008, 10:03 p.m. UTC
This patch replaces the global APIs __dma_sync and __dma_sync_page
with a new dma_mapping_ops API named sync_page. This is necessary to
make sure that the proper synchronization mechanism is used for a
device's DMA, depending on the bus the device is on.

Signed-off-by: Remi Machet <rmachet@slac.stanford.edu>
---
This patch must be applied on top of Becky's series of
patches "POWERPC: 32/64-bit DMA code merge and cleanup".
I checked that it compiles on 32-bit non-coherent, 32-bit coherent and
64-bit targets, but could only test it on a 32-bit non-coherent target (c2k).

 include/asm/dma-mapping.h |   38 +++++++++++++++++++++++++++++++-------
 include/asm/io.h          |    3 +++
 kernel/dma.c              |   13 ++++++++++++-
 3 files changed, 46 insertions(+), 8 deletions(-)

Comments

Remi Machet Oct. 3, 2008, 4:33 p.m. UTC | #1
On Wed, 2008-10-01 at 15:03 -0700, Remi Machet wrote:
> This patch replaces the global APIs __dma_sync and __dma_sync_page
> with a new dma_mapping_ops API named sync_page. This is necessary to
> make sure that the proper synchronization mechanism is used for a
> device's DMA, depending on the bus the device is on.
> 

After continuing to work on the dma noncoherent code I realized that
sync_page is not the best choice of API: 
-The API should preferably take a dma_addr_t in my opinion
-Using sync_page forced me to create a new define in io.h: bus_to_page.

I now think it would be better to have 2 new APIs: sync (which takes a
dma_addr_t) and sync_sg (which takes a scatter/gather list); a sketch of
both follows below. Adding sync_sg would let us get rid of the last
page_to_bus in the code (today, to get the dma_addr_t from the
scatter/gather list entry sg, I do: page_to_bus(sg_page(sg)) + sg->offset).
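
Roughly, the two new dma_mapping_ops members would look something like
this (signatures are tentative, just to make the proposal concrete):

	void	(*sync)(struct device *dev, dma_addr_t dma_handle,
			size_t size, enum dma_data_direction direction);
	void	(*sync_sg)(struct device *dev, struct scatterlist *sgl,
			int nents, enum dma_data_direction direction);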

Any feedback is welcome on this issue. I hate adding 2 new APIs, but
they would make it possible to support 2 completely different DMA
architectures in one kernel on the same hardware.

Remi
Scott Wood Oct. 6, 2008, 4:30 p.m. UTC | #2
Remi Machet wrote:
> After continuing to work on the dma noncoherent code I realized that
> sync_page is not the best choice of API: 
> -The API should preferably take a dma_addr_t in my opinion

A virtual address will typically be needed to perform the flush; why 
pass the bus address?

-Scott
Remi Machet Oct. 6, 2008, 5:38 p.m. UTC | #3
On Mon, 2008-10-06 at 11:30 -0500, Scott Wood wrote:
> Remi Machet wrote:
> > After continuing to work on the dma noncoherent code I realized that
> > sync_page is not the best choice of API: 
> > -The API should preferably take a dma_addr_t in my opinion
> 
> A virtual address will typically be needed to perform the flush; why 
> pass the bus address?
Because it is a sync API. You want to make sure that a physical memory
area is in sync with the caches, not a particular virtual address. This
distinction can become important when a page is mapped multiple times
and the architecture does not take care of synchronizing the multiple
mappings; the dma_mapping_ops code should be able to synchronize all of
them. In most cases it would of course be easy to go from a virtual
address to the page address, but not if the page is in high memory ...

By the way, I realized later on that scatter/gather entries already
contain 2 fields, dma_address and dma_length, that can be used to
recover the bus address => no need for sync_sg anymore.
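
For example, the generic wrapper could then be written against the
single sync hook sketched earlier, using the standard
sg_dma_address()/sg_dma_len() accessors (just a sketch, not committed
code; ->sync is the proposed hook, not an existing member):

static inline void dma_sync_sg_for_device(struct device *dev,
		struct scatterlist *sgl, int nents,
		enum dma_data_direction direction)
{
	struct dma_mapping_ops *dma_ops = get_dma_ops(dev);
	struct scatterlist *sg;
	int i;

	BUG_ON(direction == DMA_NONE);

	if (dma_ops->sync != NULL)
		for_each_sg(sgl, sg, nents, i)
			dma_ops->sync(dev, sg_dma_address(sg),
					sg_dma_len(sg), direction);
}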

Remi
Benjamin Herrenschmidt Oct. 10, 2008, 3:46 a.m. UTC | #4
> > A virtual address will typically be needed to perform the flush; why 
> > pass the bus address?

> Because it is a sync API. You want to make sure that a physical memory
> area is in sync with the caches, not a particular virtual address. This
> distinction can become important when a page is mapped multiple times
> and the architecture does not take care of synchronizing the multiple
> mappings; the dma_mapping_ops code should be able to synchronize all of
> them. In most cases it would of course be easy to go from a virtual
> address to the page address, but not if the page is in high memory ...

Well, not really. I.e., you are right that a dma_addr_t or a struct page
is the way to go, but for the wrong reasons :-)

All mappings should be coherent. The powerpc architecture is pretty
strict with that. The only known violation is the instruction cache on
44x but that's irrelevant to your problem.

Thus, -any- virtual address will do.

However, you may not have a virtual address. You may get into a
situation where the page isn't in the linear mapping and needs to be
kmap'ed for the sync to happen.
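
Something like this, just to sketch the idea (the function name is
invented; the flush itself would still be the existing __dma_sync(),
and KM_PPC_SYNC_PAGE is the kmap slot powerpc already reserves for
this kind of thing):

static void __dma_sync_any_page(struct page *page, unsigned long offset,
				size_t size, int direction)
{
	if (PageHighMem(page)) {
		/* No permanent kernel mapping: create a temporary one
		 * around the flush. */
		void *vaddr = kmap_atomic(page, KM_PPC_SYNC_PAGE);

		__dma_sync(vaddr + offset, size, direction);
		kunmap_atomic(vaddr, KM_PPC_SYNC_PAGE);
	} else {
		__dma_sync(page_address(page) + offset, size, direction);
	}
}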

Now, using PCI_DRAM_OFFSET in bus_to_page() is incorrect here with
Becky's new set of changes. You need to get the offset properly using
the accessor she provides (I don't have the name off the top of my
head).

Cheers,
Ben.
Remi Machet Oct. 10, 2008, 4:16 p.m. UTC | #5
On Fri, 2008-10-10 at 14:46 +1100, Benjamin Herrenschmidt wrote: 
> > > A virtual address will typically be needed to perform the flush; why 
> > > pass the bus address?
> 
> > Because it is a sync API. You want to make sure that a physical memory
> > area is in sync with the caches, not a particular virtual address. This
> > distinction can become important when a page is mapped multiple times
> > and the architecture does not take care of synchronizing the multiple
> > mappings; the dma_mapping_ops code should be able to synchronize all of
> > them. In most cases it would of course be easy to go from a virtual
> > address to the page address, but not if the page is in high memory ...
> 
> Well, not really. I.e., you are right that a dma_addr_t or a struct page
> is the way to go, but for the wrong reasons :-)
> 
> All mappings should be coherent. The powerpc architecture is pretty
> strict with that. The only known violation is the instruction cache on
> 44x but that's irrelevant to your problem.
> 
> Thus, -any- virtual address will do.
Good! That certainly simplifies the code.

> However, you may not have a virtual address. You may get into a
> situation where the page isn't in the linear mapping and needs to be
> kmap'ed for the sync to happen.
> 
> Now, using PCI_DRAM_OFFSET in bus_to_page() is incorrect here with
> Becky's new set of changes. You need to get the offset properly using
> the accessor she provides (I don't have the name off the top of my
> head).
> 
Totally agree with that. In the last set of patches I committed (which
I need to re-commit because I need to use vmalloc in dma-noncoherent.c)
I removed most of the references to PCI_DRAM_OFFSET. The only remaining
reference is in virt_to_bus, which is called by dma_cache_sync.

I did not see an accessor that can be used in dma-mapping.h (the
accessor API I have seen is private to dma.c and dma-noncoherent.c);
I would be happy to use one if it really exists, though. I could add
another 2 APIs to dma_mapping_ops that convert a page to/from its bus
address; what do you think? Something like the sketch below, perhaps.
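
For illustration only — the names and signatures below are invented;
dma_direct_ops would presumably implement them using
get_dma_direct_offset():

	dma_addr_t	(*page_to_bus)(struct device *dev,
				struct page *page, unsigned long offset);
	struct page *	(*bus_to_page)(struct device *dev,
				dma_addr_t address);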

Remi

Patch

diff --git a/arch/powerpc/include/asm/dma-mapping.h b/arch/powerpc/include/asm/dma-mapping.h
index fddb229..8e4cb19 100644
--- a/arch/powerpc/include/asm/dma-mapping.h
+++ b/arch/powerpc/include/asm/dma-mapping.h
@@ -82,6 +82,9 @@ struct dma_mapping_ops {
 				dma_addr_t dma_address, size_t size,
 				enum dma_data_direction direction,
 				struct dma_attrs *attrs);
+	void 		(*sync_page)(struct device *dev, struct page *page,
+				unsigned long offset, size_t size,
+				enum dma_data_direction direction);
 };
 
 /*
@@ -312,42 +315,58 @@ static inline void dma_sync_single_for_cpu(struct device *dev,
 		dma_addr_t dma_handle, size_t size,
 		enum dma_data_direction direction)
 {
+	struct dma_mapping_ops *dma_ops = get_dma_ops(dev);
+
 	BUG_ON(direction == DMA_NONE);
-	__dma_sync(bus_to_virt(dma_handle), size, direction);
+	if (dma_ops->sync_page != NULL)
+		dma_ops->sync_page(dev, bus_to_page(dma_handle),
+				(unsigned long)dma_handle & (PAGE_SIZE-1),
+				size, direction);
 }
 
 static inline void dma_sync_single_for_device(struct device *dev,
 		dma_addr_t dma_handle, size_t size,
 		enum dma_data_direction direction)
 {
+	struct dma_mapping_ops *dma_ops = get_dma_ops(dev);
+
 	BUG_ON(direction == DMA_NONE);
-	__dma_sync(bus_to_virt(dma_handle), size, direction);
+	if (dma_ops->sync_page != NULL)
+		dma_ops->sync_page(dev, bus_to_page(dma_handle),
+				(unsigned long)dma_handle & (PAGE_SIZE-1),
+				size, direction);
 }
 
 static inline void dma_sync_sg_for_cpu(struct device *dev,
 		struct scatterlist *sgl, int nents,
 		enum dma_data_direction direction)
 {
+	struct dma_mapping_ops *dma_ops = get_dma_ops(dev);
 	struct scatterlist *sg;
 	int i;
 
 	BUG_ON(direction == DMA_NONE);
 
-	for_each_sg(sgl, sg, nents, i)
-		__dma_sync_page(sg_page(sg), sg->offset, sg->length, direction);
+	if (dma_ops->sync_page != NULL)
+		for_each_sg(sgl, sg, nents, i)
+			dma_ops->sync_page(dev, sg_page(sg), sg->offset,
+					sg->length, direction);
 }
 
 static inline void dma_sync_sg_for_device(struct device *dev,
 		struct scatterlist *sgl, int nents,
 		enum dma_data_direction direction)
 {
+	struct dma_mapping_ops *dma_ops = get_dma_ops(dev);
 	struct scatterlist *sg;
 	int i;
 
 	BUG_ON(direction == DMA_NONE);
 
-	for_each_sg(sgl, sg, nents, i)
-		__dma_sync_page(sg_page(sg), sg->offset, sg->length, direction);
+	if (dma_ops->sync_page != NULL)
+		for_each_sg(sgl, sg, nents, i)
+			dma_ops->sync_page(dev, sg_page(sg), sg->offset,
+					sg->length, direction);
 }
 
 static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
@@ -401,8 +420,13 @@ static inline void dma_sync_single_range_for_device(struct device *dev,
 static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
 		enum dma_data_direction direction)
 {
+	struct dma_mapping_ops *dma_ops = get_dma_ops(dev);
+
 	BUG_ON(direction == DMA_NONE);
-	__dma_sync(vaddr, size, (int)direction);
+	if (dma_ops->sync_page != NULL)
+		dma_ops->sync_page(dev, virt_to_page(vaddr),
+					(unsigned long)vaddr & (PAGE_SIZE-1),
+					size, direction);
 }
 
 #endif /* __KERNEL__ */
diff --git a/arch/powerpc/include/asm/io.h b/arch/powerpc/include/asm/io.h
index 77c7fa0..4dc5325 100644
--- a/arch/powerpc/include/asm/io.h
+++ b/arch/powerpc/include/asm/io.h
@@ -746,6 +746,9 @@ static inline void * bus_to_virt(unsigned long address)
 
 #endif /* CONFIG_PPC32 */
 
+#define bus_to_page(address)	pfn_to_page((address - PCI_DRAM_OFFSET) \
+						>> PAGE_SHIFT)
+
 /* access ports */
 #define setbits32(_addr, _v) out_be32((_addr), in_be32(_addr) |  (_v))
 #define clrbits32(_addr, _v) out_be32((_addr), in_be32(_addr) & ~(_v))
diff --git a/arch/powerpc/kernel/dma.c b/arch/powerpc/kernel/dma.c
index 41fdd48..aab7041 100644
--- a/arch/powerpc/kernel/dma.c
+++ b/arch/powerpc/kernel/dma.c
@@ -95,6 +95,14 @@ static int dma_direct_dma_supported(struct device *dev, u64 mask)
 #endif
 }
 
+static void dma_direct_sync_page(struct device *dev, struct page *page,
+	unsigned long offset, size_t size, enum dma_data_direction direction)
+{
+#ifdef CONFIG_NOT_COHERENT_CACHE
+	__dma_sync_page(page, offset, size, (int)direction);
+#endif
+}
+
 static inline dma_addr_t dma_direct_map_page(struct device *dev,
 					     struct page *page,
 					     unsigned long offset,
@@ -103,7 +111,7 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
 					     struct dma_attrs *attrs)
 {
 	BUG_ON(dir == DMA_NONE);
-	__dma_sync_page(page, offset, size, dir);
+	dma_direct_sync_page(dev, page, offset, size, dir);
 	return page_to_phys(page) + offset + get_dma_direct_offset(dev);
 }
 
@@ -123,5 +131,8 @@ struct dma_mapping_ops dma_direct_ops = {
 	.dma_supported	= dma_direct_dma_supported,
 	.map_page	= dma_direct_map_page,
 	.unmap_page	= dma_direct_unmap_page,
+#ifdef CONFIG_NOT_COHERENT_CACHE
+	.sync_page	= dma_direct_sync_page,
+#endif
 };
 EXPORT_SYMBOL(dma_direct_ops);