Patch Detail
get:
Show a patch.
patch:
Partially update a patch (only the fields supplied are changed).
put:
Update a patch (a full update; all writable fields must be supplied).
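Update requests require authentication. A minimal sketch of building the PATCH request with Python's standard `urllib` follows; the token value is a placeholder, and the choice of `state`/`archived` as the fields to change is illustrative (both appear as writable-looking fields in the response below). Sending is left to the caller.

```python
import json
import urllib.request

# Fields to change; "state" and "archived" are fields of the patch resource.
payload = json.dumps({"state": "accepted", "archived": False}).encode()

req = urllib.request.Request(
    "http://patchwork.ozlabs.org/api/patches/214/",
    data=payload,
    method="PATCH",  # use method="PUT" for a full update instead
    headers={
        "Content-Type": "application/json",
        # Placeholder token; obtain a real one from your Patchwork account.
        "Authorization": "Token 0123456789abcdef",
    },
)

# urllib.request.urlopen(req) would perform the update (not executed here).
```

The request object can be inspected before sending, which is also how the example avoids touching the network.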
GET /api/patches/214/?format=api
{ "id": 214, "url": "http://patchwork.ozlabs.org/api/patches/214/?format=api", "web_url": "http://patchwork.ozlabs.org/project/linuxppc-dev/patch/1220900995-11928-5-git-send-email-becky.bruce@freescale.com/", "project": { "id": 2, "url": "http://patchwork.ozlabs.org/api/projects/2/?format=api", "name": "Linux PPC development", "link_name": "linuxppc-dev", "list_id": "linuxppc-dev.lists.ozlabs.org", "list_email": "linuxppc-dev@lists.ozlabs.org", "web_url": "https://github.com/linuxppc/wiki/wiki", "scm_url": "https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git", "webscm_url": "https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/", "list_archive_url": "https://lore.kernel.org/linuxppc-dev/", "list_archive_url_format": "https://lore.kernel.org/linuxppc-dev/{}/", "commit_url_format": "https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/commit/?id={}" }, "msgid": "<1220900995-11928-5-git-send-email-becky.bruce@freescale.com>", "list_archive_url": "https://lore.kernel.org/linuxppc-dev/1220900995-11928-5-git-send-email-becky.bruce@freescale.com/", "date": "2008-09-08T19:09:55", "name": "POWERPC: Merge 32 and 64-bit dma code", "commit_ref": null, "pull_url": null, "state": "superseded", "archived": true, "hash": "fb884cba260097ea5042d0b39fe2003c7e1605a8", "submitter": { "id": 12, "url": "http://patchwork.ozlabs.org/api/people/12/?format=api", "name": "Becky Bruce", "email": "becky.bruce@freescale.com" }, "delegate": null, "mbox": "http://patchwork.ozlabs.org/project/linuxppc-dev/patch/1220900995-11928-5-git-send-email-becky.bruce@freescale.com/mbox/", "series": [], "comments": "http://patchwork.ozlabs.org/api/patches/214/comments/", "check": "pending", "checks": "http://patchwork.ozlabs.org/api/patches/214/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<linuxppc-dev-bounces+patchwork=ozlabs.org@ozlabs.org>", "X-Original-To": [ "patchwork@ozlabs.org", "linuxppc-dev@ozlabs.org" ], "Delivered-To": [ 
"patchwork@ozlabs.org", "linuxppc-dev@ozlabs.org" ], "Received": [ "from ozlabs.org (localhost [127.0.0.1])\n\tby ozlabs.org (Postfix) with ESMTP id 413A5DE18D\n\tfor <patchwork@ozlabs.org>; Tue, 9 Sep 2008 05:14:35 +1000 (EST)", "from az33egw02.freescale.net (az33egw02.freescale.net\n\t[192.88.158.103])\n\t(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))\n\t(Client CN \"az33egw02.freescale.net\",\n\tIssuer \"Thawte Premium Server CA\" (verified OK))\n\tby ozlabs.org (Postfix) with ESMTPS id 0BA49DDF4C\n\tfor <linuxppc-dev@ozlabs.org>; Tue, 9 Sep 2008 05:10:06 +1000 (EST)", "from az33smr01.freescale.net (az33smr01.freescale.net\n\t[10.64.34.199])\n\tby az33egw02.freescale.net (8.12.11/az33egw02) with ESMTP id\n\tm88J9vpb002777\n\tfor <linuxppc-dev@ozlabs.org>; Mon, 8 Sep 2008 12:09:59 -0700 (MST)", "from blarg.am.freescale.net (blarg.am.freescale.net [10.82.19.176])\n\tby az33smr01.freescale.net (8.13.1/8.13.0) with ESMTP id\n\tm88J9uVl020120\n\tfor <linuxppc-dev@ozlabs.org>; Mon, 8 Sep 2008 14:09:57 -0500 (CDT)", "from blarg.am.freescale.net (localhost.localdomain [127.0.0.1])\n\tby blarg.am.freescale.net (8.14.2/8.14.2) with ESMTP id\n\tm88J9upx012243; Mon, 8 Sep 2008 14:09:56 -0500", "(from bgill@localhost)\n\tby blarg.am.freescale.net (8.14.2/8.14.2/Submit) id m88J9utT012242;\n\tMon, 8 Sep 2008 14:09:56 -0500" ], "From": "Becky Bruce <becky.bruce@freescale.com>", "To": "linuxppc-dev@ozlabs.org", "Subject": "[PATCH 4/4] POWERPC: Merge 32 and 64-bit dma code", "Date": "Mon, 8 Sep 2008 14:09:55 -0500", "Message-Id": "<1220900995-11928-5-git-send-email-becky.bruce@freescale.com>", "X-Mailer": "git-send-email 1.5.5.1", "In-Reply-To": "<1220900995-11928-4-git-send-email-becky.bruce@freescale.com>", "References": 
"<1220900995-11928-1-git-send-email-becky.bruce@freescale.com>\n\t<1220900995-11928-2-git-send-email-becky.bruce@freescale.com>\n\t<1220900995-11928-3-git-send-email-becky.bruce@freescale.com>\n\t<1220900995-11928-4-git-send-email-becky.bruce@freescale.com>", "X-BeenThere": "linuxppc-dev@ozlabs.org", "X-Mailman-Version": "2.1.11", "Precedence": "list", "List-Id": "Linux on PowerPC Developers Mail List <linuxppc-dev.ozlabs.org>", "List-Unsubscribe": "<https://ozlabs.org/mailman/options/linuxppc-dev>,\n\t<mailto:linuxppc-dev-request@ozlabs.org?subject=unsubscribe>", "List-Archive": "<http://ozlabs.org/pipermail/linuxppc-dev>", "List-Post": "<mailto:linuxppc-dev@ozlabs.org>", "List-Help": "<mailto:linuxppc-dev-request@ozlabs.org?subject=help>", "List-Subscribe": "<https://ozlabs.org/mailman/listinfo/linuxppc-dev>,\n\t<mailto:linuxppc-dev-request@ozlabs.org?subject=subscribe>", "MIME-Version": "1.0", "Content-Type": "text/plain; charset=\"us-ascii\"", "Content-Transfer-Encoding": "7bit", "Sender": "linuxppc-dev-bounces+patchwork=ozlabs.org@ozlabs.org", "Errors-To": "linuxppc-dev-bounces+patchwork=ozlabs.org@ozlabs.org" }, "content": "We essentially adopt the 64-bit dma code, with some changes to support\n32-bit systems, including HIGHMEM. dma functions on 32-bit are now\ninvoked via accessor functions which call the correct op for a device based\non archdata dma_ops. If there is no archdata dma_ops, this defaults\nto dma_direct_ops.\n\nIn addition, the dma_map/unmap_page functions are added to dma_ops\nbecause we can't just fall back on map/unmap_single when HIGHMEM is\nenabled. In the case of dma_direct_*, we stop using map/unmap_single\nand just use the page version - this saves a lot of ugly\nifdeffing. 
We leave map/unmap_single in the dma_ops definition,\nthough, because they are needed by the iommu code, which does not\nimplement map/unmap_page.\n\nSigned-off-by: Becky Bruce <becky.bruce@freescale.com>", "diff": "diff --git a/arch/powerpc/include/asm/dma-mapping.h b/arch/powerpc/include/asm/dma-mapping.h\nindex c7ca45f..bd2b2b7 100644\n--- a/arch/powerpc/include/asm/dma-mapping.h\n+++ b/arch/powerpc/include/asm/dma-mapping.h\n@@ -44,8 +44,6 @@ extern void __dma_sync_page(struct page *page, unsigned long offset,\n \n #endif /* ! CONFIG_NOT_COHERENT_CACHE */\n \n-#ifdef CONFIG_PPC64\n-\n static inline unsigned long device_to_mask(struct device *dev)\n {\n \tif (dev->dma_mask && *dev->dma_mask)\n@@ -76,8 +74,24 @@ struct dma_mapping_ops {\n \t\t\t\tstruct dma_attrs *attrs);\n \tint\t\t(*dma_supported)(struct device *dev, u64 mask);\n \tint\t\t(*set_dma_mask)(struct device *dev, u64 dma_mask);\n+\tdma_addr_t \t(*map_page)(struct device *dev, struct page *page,\n+\t\t\t\tunsigned long offset, size_t size,\n+\t\t\t\tenum dma_data_direction direction,\n+\t\t\t\tstruct dma_attrs *attrs);\n+\tvoid\t\t(*unmap_page)(struct device *dev,\n+\t\t\t\tdma_addr_t dma_address, size_t size,\n+\t\t\t\tenum dma_data_direction direction,\n+\t\t\t\tstruct dma_attrs *attrs);\n };\n \n+/*\n+ * Available generic sets of operations\n+ */\n+#ifdef CONFIG_PPC64\n+extern struct dma_mapping_ops dma_iommu_ops;\n+#endif\n+extern struct dma_mapping_ops dma_direct_ops;\n+\n static inline struct dma_mapping_ops *get_dma_ops(struct device *dev)\n {\n \t/* We don't handle the NULL dev case for ISA for now. 
We could\n@@ -85,8 +99,16 @@ static inline struct dma_mapping_ops *get_dma_ops(struct device *dev)\n \t * only ISA DMA device we support is the floppy and we have a hack\n \t * in the floppy driver directly to get a device for us.\n \t */\n-\tif (unlikely(dev == NULL || dev->archdata.dma_ops == NULL))\n+\n+\tif (unlikely(dev == NULL) || dev->archdata.dma_ops == NULL) {\n+#ifdef CONFIG_PPC64\n \t\treturn NULL;\n+#else\n+\t\t/* Use default on 32-bit if dma_ops is not set up */\n+\t\treturn &dma_direct_ops;\n+#endif\n+\t}\n+\n \treturn dev->archdata.dma_ops;\n }\n \n@@ -132,7 +154,14 @@ static inline dma_addr_t dma_map_single_attrs(struct device *dev,\n \tstruct dma_mapping_ops *dma_ops = get_dma_ops(dev);\n \n \tBUG_ON(!dma_ops);\n-\treturn dma_ops->map_single(dev, cpu_addr, size, direction, attrs);\n+\n+\tif (dma_ops->map_single)\n+\t\treturn dma_ops->map_single(dev, cpu_addr, size, direction,\n+\t\t\t\t\t attrs);\n+\n+\treturn dma_ops->map_page(dev, virt_to_page(cpu_addr),\n+\t\t\t\t (unsigned long)cpu_addr % PAGE_SIZE, size,\n+\t\t\t\t direction, attrs);\n }\n \n static inline void dma_unmap_single_attrs(struct device *dev,\n@@ -144,7 +173,13 @@ static inline void dma_unmap_single_attrs(struct device *dev,\n \tstruct dma_mapping_ops *dma_ops = get_dma_ops(dev);\n \n \tBUG_ON(!dma_ops);\n-\tdma_ops->unmap_single(dev, dma_addr, size, direction, attrs);\n+\n+\tif (dma_ops->unmap_single) {\n+\t\tdma_ops->unmap_single(dev, dma_addr, size, direction, attrs);\n+\t\treturn;\n+\t}\n+\n+\tdma_ops->unmap_page(dev, dma_addr, size, direction, attrs);\n }\n \n static inline dma_addr_t dma_map_page_attrs(struct device *dev,\n@@ -156,8 +191,13 @@ static inline dma_addr_t dma_map_page_attrs(struct device *dev,\n \tstruct dma_mapping_ops *dma_ops = get_dma_ops(dev);\n \n \tBUG_ON(!dma_ops);\n+\n+\tif (dma_ops->map_page)\n+\t\treturn dma_ops->map_page(dev, page, offset, size, direction,\n+\t\t\t\t\t attrs);\n+\n \treturn dma_ops->map_single(dev, page_address(page) + offset, 
size,\n-\t\t\tdirection, attrs);\n+\t\t\t\t direction, attrs);\n }\n \n static inline void dma_unmap_page_attrs(struct device *dev,\n@@ -169,6 +209,12 @@ static inline void dma_unmap_page_attrs(struct device *dev,\n \tstruct dma_mapping_ops *dma_ops = get_dma_ops(dev);\n \n \tBUG_ON(!dma_ops);\n+\n+\tif (dma_ops->unmap_page) {\n+\t\tdma_ops->unmap_page(dev, dma_address, size, direction, attrs);\n+\t\treturn;\n+\t}\n+\n \tdma_ops->unmap_single(dev, dma_address, size, direction, attrs);\n }\n \n@@ -253,126 +299,6 @@ static inline void dma_unmap_sg(struct device *dev, struct scatterlist *sg,\n \tdma_unmap_sg_attrs(dev, sg, nhwentries, direction, NULL);\n }\n \n-/*\n- * Available generic sets of operations\n- */\n-extern struct dma_mapping_ops dma_iommu_ops;\n-extern struct dma_mapping_ops dma_direct_ops;\n-\n-#else /* CONFIG_PPC64 */\n-\n-#define dma_supported(dev, mask)\t(1)\n-\n-static inline int dma_set_mask(struct device *dev, u64 dma_mask)\n-{\n-\tif (!dev->dma_mask || !dma_supported(dev, mask))\n-\t\treturn -EIO;\n-\n-\t*dev->dma_mask = dma_mask;\n-\n-\treturn 0;\n-}\n-\n-static inline void *dma_alloc_coherent(struct device *dev, size_t size,\n-\t\t\t\t dma_addr_t * dma_handle,\n-\t\t\t\t gfp_t gfp)\n-{\n-#ifdef CONFIG_NOT_COHERENT_CACHE\n-\treturn __dma_alloc_coherent(size, dma_handle, gfp);\n-#else\n-\tvoid *ret;\n-\t/* ignore region specifiers */\n-\tgfp &= ~(__GFP_DMA | __GFP_HIGHMEM);\n-\n-\tif (dev == NULL || dev->coherent_dma_mask < 0xffffffff)\n-\t\tgfp |= GFP_DMA;\n-\n-\tret = (void *)__get_free_pages(gfp, get_order(size));\n-\n-\tif (ret != NULL) {\n-\t\tmemset(ret, 0, size);\n-\t\t*dma_handle = virt_to_bus(ret);\n-\t}\n-\n-\treturn ret;\n-#endif\n-}\n-\n-static inline void\n-dma_free_coherent(struct device *dev, size_t size, void *vaddr,\n-\t\t dma_addr_t dma_handle)\n-{\n-#ifdef CONFIG_NOT_COHERENT_CACHE\n-\t__dma_free_coherent(size, vaddr);\n-#else\n-\tfree_pages((unsigned long)vaddr, get_order(size));\n-#endif\n-}\n-\n-static inline 
dma_addr_t\n-dma_map_single(struct device *dev, void *ptr, size_t size,\n-\t enum dma_data_direction direction)\n-{\n-\tBUG_ON(direction == DMA_NONE);\n-\n-\t__dma_sync(ptr, size, direction);\n-\n-\treturn virt_to_bus(ptr);\n-}\n-\n-static inline void dma_unmap_single(struct device *dev, dma_addr_t dma_addr,\n-\t\t\t\t size_t size,\n-\t\t\t\t enum dma_data_direction direction)\n-{\n-\t/* We do nothing. */\n-}\n-\n-static inline dma_addr_t\n-dma_map_page(struct device *dev, struct page *page,\n-\t unsigned long offset, size_t size,\n-\t enum dma_data_direction direction)\n-{\n-\tBUG_ON(direction == DMA_NONE);\n-\n-\t__dma_sync_page(page, offset, size, direction);\n-\n-\treturn page_to_bus(page) + offset;\n-}\n-\n-static inline void dma_unmap_page(struct device *dev, dma_addr_t dma_address,\n-\t\t\t\t size_t size,\n-\t\t\t\t enum dma_data_direction direction)\n-{\n-\t/* We do nothing. */\n-}\n-\n-static inline int\n-dma_map_sg(struct device *dev, struct scatterlist *sgl, int nents,\n-\t enum dma_data_direction direction)\n-{\n-\tstruct scatterlist *sg;\n-\tint i;\n-\n-\tBUG_ON(direction == DMA_NONE);\n-\n-\tfor_each_sg(sgl, sg, nents, i) {\n-\t\tBUG_ON(!sg_page(sg));\n-\t\t__dma_sync_page(sg_page(sg), sg->offset, sg->length, direction);\n-\t\tsg->dma_address = page_to_bus(sg_page(sg)) + sg->offset;\n-\t}\n-\n-\treturn nents;\n-}\n-\n-static inline void dma_unmap_sg(struct device *dev, struct scatterlist *sg,\n-\t\t\t\tint nhwentries,\n-\t\t\t\tenum dma_data_direction direction)\n-{\n-\t/* We don't do anything here. 
*/\n-}\n-\n-#endif /* CONFIG_PPC64 */\n-\n static inline void dma_sync_single_for_cpu(struct device *dev,\n \t\tdma_addr_t dma_handle, size_t size,\n \t\tenum dma_data_direction direction)\ndiff --git a/arch/powerpc/include/asm/machdep.h b/arch/powerpc/include/asm/machdep.h\nindex 893aafd..2740c44 100644\n--- a/arch/powerpc/include/asm/machdep.h\n+++ b/arch/powerpc/include/asm/machdep.h\n@@ -88,8 +88,6 @@ struct machdep_calls {\n \tunsigned long\t(*tce_get)(struct iommu_table *tbl,\n \t\t\t\t long index);\n \tvoid\t\t(*tce_flush)(struct iommu_table *tbl);\n-\tvoid\t\t(*pci_dma_dev_setup)(struct pci_dev *dev);\n-\tvoid\t\t(*pci_dma_bus_setup)(struct pci_bus *bus);\n \n \tvoid __iomem *\t(*ioremap)(phys_addr_t addr, unsigned long size,\n \t\t\t\t unsigned long flags);\n@@ -101,6 +99,9 @@ struct machdep_calls {\n #endif\n #endif /* CONFIG_PPC64 */\n \n+\tvoid\t\t(*pci_dma_dev_setup)(struct pci_dev *dev);\n+\tvoid\t\t(*pci_dma_bus_setup)(struct pci_bus *bus);\n+\n \tint\t\t(*probe)(void);\n \tvoid\t\t(*setup_arch)(void); /* Optional, may be NULL */\n \tvoid\t\t(*init_early)(void);\ndiff --git a/arch/powerpc/include/asm/pci.h b/arch/powerpc/include/asm/pci.h\nindex a05a942..0e52c78 100644\n--- a/arch/powerpc/include/asm/pci.h\n+++ b/arch/powerpc/include/asm/pci.h\n@@ -60,6 +60,14 @@ static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)\n \treturn channel ? 
15 : 14;\n }\n \n+#ifdef CONFIG_PCI\n+extern void set_pci_dma_ops(struct dma_mapping_ops *dma_ops);\n+extern struct dma_mapping_ops *get_pci_dma_ops(void);\n+#else\t/* CONFIG_PCI */\n+#define set_pci_dma_ops(d)\n+#define get_pci_dma_ops()\tNULL\n+#endif\n+\n #ifdef CONFIG_PPC64\n \n /*\n@@ -70,9 +78,6 @@ static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)\n #define PCI_DISABLE_MWI\n \n #ifdef CONFIG_PCI\n-extern void set_pci_dma_ops(struct dma_mapping_ops *dma_ops);\n-extern struct dma_mapping_ops *get_pci_dma_ops(void);\n-\n static inline void pci_dma_burst_advice(struct pci_dev *pdev,\n \t\t\t\t\tenum pci_dma_burst_strategy *strat,\n \t\t\t\t\tunsigned long *strategy_parameter)\n@@ -89,9 +94,6 @@ static inline void pci_dma_burst_advice(struct pci_dev *pdev,\n \t*strat = PCI_DMA_BURST_MULTIPLE;\n \t*strategy_parameter = cacheline_size;\n }\n-#else\t/* CONFIG_PCI */\n-#define set_pci_dma_ops(d)\n-#define get_pci_dma_ops()\tNULL\n #endif\n \n #else /* 32-bit */\ndiff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile\nindex 45570fe..98f5282 100644\n--- a/arch/powerpc/kernel/Makefile\n+++ b/arch/powerpc/kernel/Makefile\n@@ -68,10 +68,10 @@ extra-$(CONFIG_8xx)\t\t:= head_8xx.o\n extra-y\t\t\t\t+= vmlinux.lds\n \n obj-y\t\t\t\t+= time.o prom.o traps.o setup-common.o \\\n-\t\t\t\t udbg.o misc.o io.o \\\n+\t\t\t\t udbg.o misc.o io.o dma.o \\\n \t\t\t\t misc_$(CONFIG_WORD_SIZE).o\n obj-$(CONFIG_PPC32)\t\t+= entry_32.o setup_32.o\n-obj-$(CONFIG_PPC64)\t\t+= dma.o dma-iommu.o iommu.o\n+obj-$(CONFIG_PPC64)\t\t+= dma-iommu.o iommu.o\n obj-$(CONFIG_KGDB)\t\t+= kgdb.o\n obj-$(CONFIG_PPC_MULTIPLATFORM)\t+= prom_init.o\n obj-$(CONFIG_MODULES)\t\t+= ppc_ksyms.o\ndiff --git a/arch/powerpc/kernel/dma.c b/arch/powerpc/kernel/dma.c\nindex 124f867..41fdd48 100644\n--- a/arch/powerpc/kernel/dma.c\n+++ b/arch/powerpc/kernel/dma.c\n@@ -16,21 +16,30 @@\n * This implementation supports a per-device offset that can be applied if\n * the address at 
which memory is visible to devices is not 0. Platform code\n * can set archdata.dma_data to an unsigned long holding the offset. By\n- * default the offset is zero.\n+ * default the offset is PCI_DRAM_OFFSET.\n */\n \n static unsigned long get_dma_direct_offset(struct device *dev)\n {\n-\treturn (unsigned long)dev->archdata.dma_data;\n+\tif (dev)\n+\t\treturn (unsigned long)dev->archdata.dma_data;\n+\n+\treturn PCI_DRAM_OFFSET;\n }\n \n-static void *dma_direct_alloc_coherent(struct device *dev, size_t size,\n-\t\t\t\t dma_addr_t *dma_handle, gfp_t flag)\n+void *dma_direct_alloc_coherent(struct device *dev, size_t size,\n+\t\t\t\tdma_addr_t *dma_handle, gfp_t flag)\n {\n+#ifdef CONFIG_NOT_COHERENT_CACHE\n+\treturn __dma_alloc_coherent(size, dma_handle, flag);\n+#else\n \tstruct page *page;\n \tvoid *ret;\n \tint node = dev_to_node(dev);\n \n+\t/* ignore region specifiers */\n+\tflag &= ~(__GFP_HIGHMEM);\n+\n \tpage = alloc_pages_node(node, flag, get_order(size));\n \tif (page == NULL)\n \t\treturn NULL;\n@@ -39,27 +48,17 @@ static void *dma_direct_alloc_coherent(struct device *dev, size_t size,\n \t*dma_handle = virt_to_abs(ret) + get_dma_direct_offset(dev);\n \n \treturn ret;\n+#endif\n }\n \n-static void dma_direct_free_coherent(struct device *dev, size_t size,\n-\t\t\t\t void *vaddr, dma_addr_t dma_handle)\n+void dma_direct_free_coherent(struct device *dev, size_t size,\n+\t\t\t void *vaddr, dma_addr_t dma_handle)\n {\n+#ifdef CONFIG_NOT_COHERENT_CACHE\n+\t__dma_free_coherent(size, vaddr);\n+#else\n \tfree_pages((unsigned long)vaddr, get_order(size));\n-}\n-\n-static dma_addr_t dma_direct_map_single(struct device *dev, void *ptr,\n-\t\t\t\t\tsize_t size,\n-\t\t\t\t\tenum dma_data_direction direction,\n-\t\t\t\t\tstruct dma_attrs *attrs)\n-{\n-\treturn virt_to_abs(ptr) + get_dma_direct_offset(dev);\n-}\n-\n-static void dma_direct_unmap_single(struct device *dev, dma_addr_t dma_addr,\n-\t\t\t\t size_t size,\n-\t\t\t\t enum dma_data_direction direction,\n-\t\t\t\t 
struct dma_attrs *attrs)\n-{\n+#endif\n }\n \n static int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl,\n@@ -85,20 +84,44 @@ static void dma_direct_unmap_sg(struct device *dev, struct scatterlist *sg,\n \n static int dma_direct_dma_supported(struct device *dev, u64 mask)\n {\n+#ifdef CONFIG_PPC64\n \t/* Could be improved to check for memory though it better be\n \t * done via some global so platforms can set the limit in case\n \t * they have limited DMA windows\n \t */\n \treturn mask >= DMA_32BIT_MASK;\n+#else\n+\treturn 1;\n+#endif\n+}\n+\n+static inline dma_addr_t dma_direct_map_page(struct device *dev,\n+\t\t\t\t\t struct page *page,\n+\t\t\t\t\t unsigned long offset,\n+\t\t\t\t\t size_t size,\n+\t\t\t\t\t enum dma_data_direction dir,\n+\t\t\t\t\t struct dma_attrs *attrs)\n+{\n+\tBUG_ON(dir == DMA_NONE);\n+\t__dma_sync_page(page, offset, size, dir);\n+\treturn page_to_phys(page) + offset + get_dma_direct_offset(dev);\n+}\n+\n+static inline void dma_direct_unmap_page(struct device *dev,\n+\t\t\t\t\t dma_addr_t dma_address,\n+\t\t\t\t\t size_t size,\n+\t\t\t\t\t enum dma_data_direction direction,\n+\t\t\t\t\t struct dma_attrs *attrs)\n+{\n }\n \n struct dma_mapping_ops dma_direct_ops = {\n \t.alloc_coherent\t= dma_direct_alloc_coherent,\n \t.free_coherent\t= dma_direct_free_coherent,\n-\t.map_single\t= dma_direct_map_single,\n-\t.unmap_single\t= dma_direct_unmap_single,\n \t.map_sg\t\t= dma_direct_map_sg,\n \t.unmap_sg\t= dma_direct_unmap_sg,\n \t.dma_supported\t= dma_direct_dma_supported,\n+\t.map_page\t= dma_direct_map_page,\n+\t.unmap_page\t= dma_direct_unmap_page,\n };\n EXPORT_SYMBOL(dma_direct_ops);\ndiff --git a/arch/powerpc/kernel/pci-common.c b/arch/powerpc/kernel/pci-common.c\nindex ea0c61e..52ccfed 100644\n--- a/arch/powerpc/kernel/pci-common.c\n+++ b/arch/powerpc/kernel/pci-common.c\n@@ -56,6 +56,34 @@ resource_size_t isa_mem_base;\n /* Default PCI flags is 0 */\n unsigned int ppc_pci_flags;\n \n+static struct dma_mapping_ops 
*pci_dma_ops;\n+\n+void set_pci_dma_ops(struct dma_mapping_ops *dma_ops)\n+{\n+\tpci_dma_ops = dma_ops;\n+}\n+\n+struct dma_mapping_ops *get_pci_dma_ops(void)\n+{\n+\treturn pci_dma_ops;\n+}\n+EXPORT_SYMBOL(get_pci_dma_ops);\n+\n+int pci_set_dma_mask(struct pci_dev *dev, u64 mask)\n+{\n+\treturn dma_set_mask(&dev->dev, mask);\n+}\n+\n+int pci_set_consistent_dma_mask(struct pci_dev *dev, u64 mask)\n+{\n+\tint rc;\n+\n+\trc = dma_set_mask(&dev->dev, mask);\n+\tdev->dev.coherent_dma_mask = dev->dma_mask;\n+\n+\treturn rc;\n+}\n+\n struct pci_controller *pcibios_alloc_controller(struct device_node *dev)\n {\n \tstruct pci_controller *phb;\n@@ -180,6 +208,26 @@ char __devinit *pcibios_setup(char *str)\n \treturn str;\n }\n \n+void __devinit pcibios_setup_new_device(struct pci_dev *dev)\n+{\n+\tstruct dev_archdata *sd = &dev->dev.archdata;\n+\n+\tsd->of_node = pci_device_to_OF_node(dev);\n+\n+\tDBG(\"PCI: device %s OF node: %s\\n\", pci_name(dev),\n+\t sd->of_node ? sd->of_node->full_name : \"<none>\");\n+\n+\tsd->dma_ops = pci_dma_ops;\n+#ifdef CONFIG_PPC32\n+\tsd->dma_data = (void *)PCI_DRAM_OFFSET;\n+#endif\n+\tset_dev_node(&dev->dev, pcibus_to_node(dev->bus));\n+\n+\tif (ppc_md.pci_dma_dev_setup)\n+\t\tppc_md.pci_dma_dev_setup(dev);\n+}\n+EXPORT_SYMBOL(pcibios_setup_new_device);\n+\n /*\n * Reads the interrupt pin to determine if interrupt is use by card.\n * If the interrupt is used, then gets the interrupt line from the\ndiff --git a/arch/powerpc/kernel/pci_32.c b/arch/powerpc/kernel/pci_32.c\nindex 88db4ff..174b77e 100644\n--- a/arch/powerpc/kernel/pci_32.c\n+++ b/arch/powerpc/kernel/pci_32.c\n@@ -424,6 +424,7 @@ void __devinit pcibios_do_bus_setup(struct pci_bus *bus)\n \tunsigned long io_offset;\n \tstruct resource *res;\n \tint i;\n+\tstruct pci_dev *dev;\n \n \t/* Hookup PHB resources */\n \tio_offset = (unsigned long)hose->io_base_virt - isa_io_base;\n@@ -457,6 +458,12 @@ void __devinit pcibios_do_bus_setup(struct pci_bus *bus)\n \t\t\tbus->resource[i+1] = 
res;\n \t\t}\n \t}\n+\n+\tif (ppc_md.pci_dma_bus_setup)\n+\t\tppc_md.pci_dma_bus_setup(bus);\n+\n+\tlist_for_each_entry(dev, &bus->devices, bus_list)\n+\t\tpcibios_setup_new_device(dev);\n }\n \n /* the next one is stolen from the alpha port... */\ndiff --git a/arch/powerpc/kernel/pci_64.c b/arch/powerpc/kernel/pci_64.c\nindex 1f75bf0..8247cff 100644\n--- a/arch/powerpc/kernel/pci_64.c\n+++ b/arch/powerpc/kernel/pci_64.c\n@@ -52,35 +52,6 @@ EXPORT_SYMBOL(pci_io_base);\n \n LIST_HEAD(hose_list);\n \n-static struct dma_mapping_ops *pci_dma_ops;\n-\n-void set_pci_dma_ops(struct dma_mapping_ops *dma_ops)\n-{\n-\tpci_dma_ops = dma_ops;\n-}\n-\n-struct dma_mapping_ops *get_pci_dma_ops(void)\n-{\n-\treturn pci_dma_ops;\n-}\n-EXPORT_SYMBOL(get_pci_dma_ops);\n-\n-\n-int pci_set_dma_mask(struct pci_dev *dev, u64 mask)\n-{\n-\treturn dma_set_mask(&dev->dev, mask);\n-}\n-\n-int pci_set_consistent_dma_mask(struct pci_dev *dev, u64 mask)\n-{\n-\tint rc;\n-\n-\trc = dma_set_mask(&dev->dev, mask);\n-\tdev->dev.coherent_dma_mask = dev->dma_mask;\n-\n-\treturn rc;\n-}\n-\n static void fixup_broken_pcnet32(struct pci_dev* dev)\n {\n \tif ((dev->class>>8 == PCI_CLASS_NETWORK_ETHERNET)) {\n@@ -548,23 +519,6 @@ int __devinit pcibios_map_io_space(struct pci_bus *bus)\n }\n EXPORT_SYMBOL_GPL(pcibios_map_io_space);\n \n-void __devinit pcibios_setup_new_device(struct pci_dev *dev)\n-{\n-\tstruct dev_archdata *sd = &dev->dev.archdata;\n-\n-\tsd->of_node = pci_device_to_OF_node(dev);\n-\n-\tDBG(\"PCI: device %s OF node: %s\\n\", pci_name(dev),\n-\t sd->of_node ? sd->of_node->full_name : \"<none>\");\n-\n-\tsd->dma_ops = pci_dma_ops;\n-\tset_dev_node(&dev->dev, pcibus_to_node(dev->bus));\n-\n-\tif (ppc_md.pci_dma_dev_setup)\n-\t\tppc_md.pci_dma_dev_setup(dev);\n-}\n-EXPORT_SYMBOL(pcibios_setup_new_device);\n-\n void __devinit pcibios_do_bus_setup(struct pci_bus *bus)\n {\n \tstruct pci_dev *dev;\n", "prefixes": [] }
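Clients typically read only a few of the fields above and expand the `{}` placeholders in the project's `list_archive_url_format` and `commit_url_format` themselves. A sketch using a trimmed copy of the response body (the fields shown are taken from the response; since `commit_ref` is null for this superseded patch, the SHA passed to `commit_url_format` is purely illustrative):

```python
import json

# A trimmed copy of the JSON response body above.
response_body = """
{
  "id": 214,
  "msgid": "<1220900995-11928-5-git-send-email-becky.bruce@freescale.com>",
  "state": "superseded",
  "archived": true,
  "commit_ref": null,
  "project": {
    "link_name": "linuxppc-dev",
    "list_archive_url_format": "https://lore.kernel.org/linuxppc-dev/{}/",
    "commit_url_format": "https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/commit/?id={}"
  }
}
"""

patch = json.loads(response_body)

# The archive URL is the Message-ID, angle brackets stripped, substituted
# into the project's list_archive_url_format template.
msgid = patch["msgid"].strip("<>")
archive_url = patch["project"]["list_archive_url_format"].format(msgid)

# commit_ref is null here; "deadbeef" stands in as an illustrative SHA.
commit_url = patch["project"]["commit_url_format"].format("deadbeef")
```

For this patch, `archive_url` reproduces the `list_archive_url` field of the response, confirming the template expansion.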