Patch Detail
get:
Show a patch.

patch:
Update a patch (partial update; only the fields supplied in the request are changed).

put:
Update a patch (full update).
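The update methods require an authenticated user with maintainer rights on the patch's project. Below is a minimal sketch of a partial update using the Python `requests` library; it assumes token authentication with an API token generated from the Patchwork user profile, and the token value, target state, and archived flag shown are placeholders, not values taken from this page.

```python
import requests

# Placeholders: substitute a real API token and a patch you maintain.
PATCH_URL = "http://patchwork.ozlabs.org/api/patches/213/"
API_TOKEN = "0123456789abcdef0123456789abcdef"

# PATCH applies a partial update: only the fields in the request body change.
resp = requests.patch(
    PATCH_URL,
    headers={"Authorization": f"Token {API_TOKEN}"},
    json={"state": "accepted", "archived": True},
)
resp.raise_for_status()
print(resp.json()["state"])  # echoes the updated state
```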
Example request:

GET /api/patches/213/?format=api

Example response:
{ "id": 213, "url": "http://patchwork.ozlabs.org/api/patches/213/?format=api", "web_url": "http://patchwork.ozlabs.org/project/linuxppc-dev/patch/1220900995-11928-2-git-send-email-becky.bruce@freescale.com/", "project": { "id": 2, "url": "http://patchwork.ozlabs.org/api/projects/2/?format=api", "name": "Linux PPC development", "link_name": "linuxppc-dev", "list_id": "linuxppc-dev.lists.ozlabs.org", "list_email": "linuxppc-dev@lists.ozlabs.org", "web_url": "https://github.com/linuxppc/wiki/wiki", "scm_url": "https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git", "webscm_url": "https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/", "list_archive_url": "https://lore.kernel.org/linuxppc-dev/", "list_archive_url_format": "https://lore.kernel.org/linuxppc-dev/{}/", "commit_url_format": "https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/commit/?id={}" }, "msgid": "<1220900995-11928-2-git-send-email-becky.bruce@freescale.com>", "list_archive_url": "https://lore.kernel.org/linuxppc-dev/1220900995-11928-2-git-send-email-becky.bruce@freescale.com/", "date": "2008-09-08T19:09:52", "name": "POWERPC: Rename dma_64.c to dma.c", "commit_ref": "7c05d7e08d907d66b8e18515572f42c71fb709fe", "pull_url": null, "state": "accepted", "archived": true, "hash": "fcf7b1d5b6d57c94c2901ffe740213be0d22b02a", "submitter": { "id": 12, "url": "http://patchwork.ozlabs.org/api/people/12/?format=api", "name": "Becky Bruce", "email": "becky.bruce@freescale.com" }, "delegate": null, "mbox": "http://patchwork.ozlabs.org/project/linuxppc-dev/patch/1220900995-11928-2-git-send-email-becky.bruce@freescale.com/mbox/", "series": [], "comments": "http://patchwork.ozlabs.org/api/patches/213/comments/", "check": "pending", "checks": "http://patchwork.ozlabs.org/api/patches/213/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<linuxppc-dev-bounces+patchwork=ozlabs.org@ozlabs.org>", "X-Original-To": [ "patchwork@ozlabs.org", "linuxppc-dev@ozlabs.org" ], "Delivered-To": [ "patchwork@ozlabs.org", "linuxppc-dev@ozlabs.org" ], "Received": [ "from ozlabs.org (localhost [127.0.0.1])\n\tby ozlabs.org (Postfix) with ESMTP id 4ECE847697\n\tfor <patchwork@ozlabs.org>; Tue, 9 Sep 2008 05:13:10 +1000 (EST)", "from az33egw02.freescale.net (az33egw02.freescale.net\n\t[192.88.158.103])\n\t(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))\n\t(Client CN \"az33egw02.freescale.net\",\n\tIssuer \"Thawte Premium Server CA\" (verified OK))\n\tby ozlabs.org (Postfix) with ESMTPS id C8E37DDF79\n\tfor <linuxppc-dev@ozlabs.org>; Tue, 9 Sep 2008 05:10:06 +1000 (EST)", "from az33smr01.freescale.net (az33smr01.freescale.net\n\t[10.64.34.199])\n\tby az33egw02.freescale.net (8.12.11/az33egw02) with ESMTP id\n\tm88J9uSb002765\n\tfor <linuxppc-dev@ozlabs.org>; Mon, 8 Sep 2008 12:09:59 -0700 (MST)", "from blarg.am.freescale.net (blarg.am.freescale.net [10.82.19.176])\n\tby az33smr01.freescale.net (8.13.1/8.13.0) with ESMTP id\n\tm88J9umY020113\n\tfor <linuxppc-dev@ozlabs.org>; Mon, 8 Sep 2008 14:09:56 -0500 (CDT)", "from blarg.am.freescale.net (localhost.localdomain [127.0.0.1])\n\tby blarg.am.freescale.net (8.14.2/8.14.2) with ESMTP id\n\tm88J9umK012231; Mon, 8 Sep 2008 14:09:56 -0500", "(from bgill@localhost)\n\tby blarg.am.freescale.net (8.14.2/8.14.2/Submit) id m88J9ud9012230;\n\tMon, 8 Sep 2008 14:09:56 -0500" ], "From": "Becky Bruce <becky.bruce@freescale.com>", "To": "linuxppc-dev@ozlabs.org", "Subject": "[PATCH 1/4] POWERPC: Rename dma_64.c to dma.c", "Date": "Mon, 8 Sep 2008 14:09:52 -0500", 
"Message-Id": "<1220900995-11928-2-git-send-email-becky.bruce@freescale.com>", "X-Mailer": "git-send-email 1.5.5.1", "In-Reply-To": "<1220900995-11928-1-git-send-email-becky.bruce@freescale.com>", "References": "<1220900995-11928-1-git-send-email-becky.bruce@freescale.com>", "X-BeenThere": "linuxppc-dev@ozlabs.org", "X-Mailman-Version": "2.1.11", "Precedence": "list", "List-Id": "Linux on PowerPC Developers Mail List <linuxppc-dev.ozlabs.org>", "List-Unsubscribe": "<https://ozlabs.org/mailman/options/linuxppc-dev>,\n\t<mailto:linuxppc-dev-request@ozlabs.org?subject=unsubscribe>", "List-Archive": "<http://ozlabs.org/pipermail/linuxppc-dev>", "List-Post": "<mailto:linuxppc-dev@ozlabs.org>", "List-Help": "<mailto:linuxppc-dev-request@ozlabs.org?subject=help>", "List-Subscribe": "<https://ozlabs.org/mailman/listinfo/linuxppc-dev>,\n\t<mailto:linuxppc-dev-request@ozlabs.org?subject=subscribe>", "MIME-Version": "1.0", "Content-Type": "text/plain; charset=\"us-ascii\"", "Content-Transfer-Encoding": "7bit", "Sender": "linuxppc-dev-bounces+patchwork=ozlabs.org@ozlabs.org", "Errors-To": "linuxppc-dev-bounces+patchwork=ozlabs.org@ozlabs.org" }, "content": "This is in preparation for the merge of the 32 and 64-bit\ndma code in arch/powerpc.\n\nSigned-off-by: Becky Bruce <becky.bruce@freescale.com>", "diff": "diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile\nindex f17b579..a8a6724 100644\n--- a/arch/powerpc/kernel/Makefile\n+++ b/arch/powerpc/kernel/Makefile\n@@ -71,7 +71,7 @@ obj-y\t\t\t\t+= time.o prom.o traps.o setup-common.o \\\n \t\t\t\t udbg.o misc.o io.o \\\n \t\t\t\t misc_$(CONFIG_WORD_SIZE).o\n obj-$(CONFIG_PPC32)\t\t+= entry_32.o setup_32.o\n-obj-$(CONFIG_PPC64)\t\t+= dma_64.o iommu.o\n+obj-$(CONFIG_PPC64)\t\t+= dma.o iommu.o\n obj-$(CONFIG_KGDB)\t\t+= kgdb.o\n obj-$(CONFIG_PPC_MULTIPLATFORM)\t+= prom_init.o\n obj-$(CONFIG_MODULES)\t\t+= ppc_ksyms.o\ndiff --git a/arch/powerpc/kernel/dma.c b/arch/powerpc/kernel/dma.c\nnew file mode 100644\nindex 0000000..ae5708e\n--- /dev/null\n+++ b/arch/powerpc/kernel/dma.c\n@@ -0,0 +1,200 @@\n+/*\n+ * Copyright (C) 2006 Benjamin Herrenschmidt, IBM Corporation\n+ *\n+ * Provide default implementations of the DMA mapping callbacks for\n+ * directly mapped busses and busses using the iommu infrastructure\n+ */\n+\n+#include <linux/device.h>\n+#include <linux/dma-mapping.h>\n+#include <asm/bug.h>\n+#include <asm/iommu.h>\n+#include <asm/abs_addr.h>\n+\n+/*\n+ * Generic iommu implementation\n+ */\n+\n+/* Allocates a contiguous real buffer and creates mappings over it.\n+ * Returns the virtual address of the buffer and sets dma_handle\n+ * to the dma address (mapping) of the first page.\n+ */\n+static void *dma_iommu_alloc_coherent(struct device *dev, size_t size,\n+\t\t\t\t dma_addr_t *dma_handle, gfp_t flag)\n+{\n+\treturn iommu_alloc_coherent(dev, dev->archdata.dma_data, size,\n+\t\t\t\t dma_handle, device_to_mask(dev), flag,\n+\t\t\t\t dev->archdata.numa_node);\n+}\n+\n+static void dma_iommu_free_coherent(struct device *dev, size_t size,\n+\t\t\t\t void *vaddr, dma_addr_t dma_handle)\n+{\n+\tiommu_free_coherent(dev->archdata.dma_data, size, vaddr, dma_handle);\n+}\n+\n+/* Creates TCEs for a user provided buffer. The user buffer must be\n+ * contiguous real kernel storage (not vmalloc). The address of the buffer\n+ * passed here is the kernel (virtual) address of the buffer. 
The buffer\n+ * need not be page aligned, the dma_addr_t returned will point to the same\n+ * byte within the page as vaddr.\n+ */\n+static dma_addr_t dma_iommu_map_single(struct device *dev, void *vaddr,\n+\t\t\t\t size_t size,\n+\t\t\t\t enum dma_data_direction direction,\n+\t\t\t\t struct dma_attrs *attrs)\n+{\n+\treturn iommu_map_single(dev, dev->archdata.dma_data, vaddr, size,\n+\t\t\t\tdevice_to_mask(dev), direction, attrs);\n+}\n+\n+\n+static void dma_iommu_unmap_single(struct device *dev, dma_addr_t dma_handle,\n+\t\t\t\t size_t size,\n+\t\t\t\t enum dma_data_direction direction,\n+\t\t\t\t struct dma_attrs *attrs)\n+{\n+\tiommu_unmap_single(dev->archdata.dma_data, dma_handle, size, direction,\n+\t\t\t attrs);\n+}\n+\n+\n+static int dma_iommu_map_sg(struct device *dev, struct scatterlist *sglist,\n+\t\t\t int nelems, enum dma_data_direction direction,\n+\t\t\t struct dma_attrs *attrs)\n+{\n+\treturn iommu_map_sg(dev, dev->archdata.dma_data, sglist, nelems,\n+\t\t\t device_to_mask(dev), direction, attrs);\n+}\n+\n+static void dma_iommu_unmap_sg(struct device *dev, struct scatterlist *sglist,\n+\t\tint nelems, enum dma_data_direction direction,\n+\t\tstruct dma_attrs *attrs)\n+{\n+\tiommu_unmap_sg(dev->archdata.dma_data, sglist, nelems, direction,\n+\t\t attrs);\n+}\n+\n+/* We support DMA to/from any memory page via the iommu */\n+static int dma_iommu_dma_supported(struct device *dev, u64 mask)\n+{\n+\tstruct iommu_table *tbl = dev->archdata.dma_data;\n+\n+\tif (!tbl || tbl->it_offset > mask) {\n+\t\tprintk(KERN_INFO\n+\t\t \"Warning: IOMMU offset too big for device mask\\n\");\n+\t\tif (tbl)\n+\t\t\tprintk(KERN_INFO\n+\t\t\t \"mask: 0x%08lx, table offset: 0x%08lx\\n\",\n+\t\t\t\tmask, tbl->it_offset);\n+\t\telse\n+\t\t\tprintk(KERN_INFO \"mask: 0x%08lx, table unavailable\\n\",\n+\t\t\t\tmask);\n+\t\treturn 0;\n+\t} else\n+\t\treturn 1;\n+}\n+\n+struct dma_mapping_ops dma_iommu_ops = {\n+\t.alloc_coherent\t= dma_iommu_alloc_coherent,\n+\t.free_coherent\t= dma_iommu_free_coherent,\n+\t.map_single\t= dma_iommu_map_single,\n+\t.unmap_single\t= dma_iommu_unmap_single,\n+\t.map_sg\t\t= dma_iommu_map_sg,\n+\t.unmap_sg\t= dma_iommu_unmap_sg,\n+\t.dma_supported\t= dma_iommu_dma_supported,\n+};\n+EXPORT_SYMBOL(dma_iommu_ops);\n+\n+/*\n+ * Generic direct DMA implementation\n+ *\n+ * This implementation supports a per-device offset that can be applied if\n+ * the address at which memory is visible to devices is not 0. Platform code\n+ * can set archdata.dma_data to an unsigned long holding the offset. 
By\n+ * default the offset is zero.\n+ */\n+\n+static unsigned long get_dma_direct_offset(struct device *dev)\n+{\n+\treturn (unsigned long)dev->archdata.dma_data;\n+}\n+\n+static void *dma_direct_alloc_coherent(struct device *dev, size_t size,\n+\t\t\t\t dma_addr_t *dma_handle, gfp_t flag)\n+{\n+\tstruct page *page;\n+\tvoid *ret;\n+\tint node = dev->archdata.numa_node;\n+\n+\tpage = alloc_pages_node(node, flag, get_order(size));\n+\tif (page == NULL)\n+\t\treturn NULL;\n+\tret = page_address(page);\n+\tmemset(ret, 0, size);\n+\t*dma_handle = virt_to_abs(ret) + get_dma_direct_offset(dev);\n+\n+\treturn ret;\n+}\n+\n+static void dma_direct_free_coherent(struct device *dev, size_t size,\n+\t\t\t\t void *vaddr, dma_addr_t dma_handle)\n+{\n+\tfree_pages((unsigned long)vaddr, get_order(size));\n+}\n+\n+static dma_addr_t dma_direct_map_single(struct device *dev, void *ptr,\n+\t\t\t\t\tsize_t size,\n+\t\t\t\t\tenum dma_data_direction direction,\n+\t\t\t\t\tstruct dma_attrs *attrs)\n+{\n+\treturn virt_to_abs(ptr) + get_dma_direct_offset(dev);\n+}\n+\n+static void dma_direct_unmap_single(struct device *dev, dma_addr_t dma_addr,\n+\t\t\t\t size_t size,\n+\t\t\t\t enum dma_data_direction direction,\n+\t\t\t\t struct dma_attrs *attrs)\n+{\n+}\n+\n+static int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl,\n+\t\t\t int nents, enum dma_data_direction direction,\n+\t\t\t struct dma_attrs *attrs)\n+{\n+\tstruct scatterlist *sg;\n+\tint i;\n+\n+\tfor_each_sg(sgl, sg, nents, i) {\n+\t\tsg->dma_address = sg_phys(sg) + get_dma_direct_offset(dev);\n+\t\tsg->dma_length = sg->length;\n+\t}\n+\n+\treturn nents;\n+}\n+\n+static void dma_direct_unmap_sg(struct device *dev, struct scatterlist *sg,\n+\t\t\t\tint nents, enum dma_data_direction direction,\n+\t\t\t\tstruct dma_attrs *attrs)\n+{\n+}\n+\n+static int dma_direct_dma_supported(struct device *dev, u64 mask)\n+{\n+\t/* Could be improved to check for memory though it better be\n+\t * done via some global so platforms can set the limit in case\n+\t * they have limited DMA windows\n+\t */\n+\treturn mask >= DMA_32BIT_MASK;\n+}\n+\n+struct dma_mapping_ops dma_direct_ops = {\n+\t.alloc_coherent\t= dma_direct_alloc_coherent,\n+\t.free_coherent\t= dma_direct_free_coherent,\n+\t.map_single\t= dma_direct_map_single,\n+\t.unmap_single\t= dma_direct_unmap_single,\n+\t.map_sg\t\t= dma_direct_map_sg,\n+\t.unmap_sg\t= dma_direct_unmap_sg,\n+\t.dma_supported\t= dma_direct_dma_supported,\n+};\n+EXPORT_SYMBOL(dma_direct_ops);\ndiff --git a/arch/powerpc/kernel/dma_64.c b/arch/powerpc/kernel/dma_64.c\ndeleted file mode 100644\nindex ae5708e..0000000\n--- a/arch/powerpc/kernel/dma_64.c\n+++ /dev/null\n@@ -1,200 +0,0 @@\n-/*\n- * Copyright (C) 2006 Benjamin Herrenschmidt, IBM Corporation\n- *\n- * Provide default implementations of the DMA mapping callbacks for\n- * directly mapped busses and busses using the iommu infrastructure\n- */\n-\n-#include <linux/device.h>\n-#include <linux/dma-mapping.h>\n-#include <asm/bug.h>\n-#include <asm/iommu.h>\n-#include <asm/abs_addr.h>\n-\n-/*\n- * Generic iommu implementation\n- */\n-\n-/* Allocates a contiguous real buffer and creates mappings over it.\n- * Returns the virtual address of the buffer and sets dma_handle\n- * to the dma address (mapping) of the first page.\n- */\n-static void *dma_iommu_alloc_coherent(struct device *dev, size_t size,\n-\t\t\t\t dma_addr_t *dma_handle, gfp_t flag)\n-{\n-\treturn iommu_alloc_coherent(dev, dev->archdata.dma_data, size,\n-\t\t\t\t dma_handle, device_to_mask(dev), flag,\n-\t\t\t\t 
dev->archdata.numa_node);\n-}\n-\n-static void dma_iommu_free_coherent(struct device *dev, size_t size,\n-\t\t\t\t void *vaddr, dma_addr_t dma_handle)\n-{\n-\tiommu_free_coherent(dev->archdata.dma_data, size, vaddr, dma_handle);\n-}\n-\n-/* Creates TCEs for a user provided buffer. The user buffer must be\n- * contiguous real kernel storage (not vmalloc). The address of the buffer\n- * passed here is the kernel (virtual) address of the buffer. The buffer\n- * need not be page aligned, the dma_addr_t returned will point to the same\n- * byte within the page as vaddr.\n- */\n-static dma_addr_t dma_iommu_map_single(struct device *dev, void *vaddr,\n-\t\t\t\t size_t size,\n-\t\t\t\t enum dma_data_direction direction,\n-\t\t\t\t struct dma_attrs *attrs)\n-{\n-\treturn iommu_map_single(dev, dev->archdata.dma_data, vaddr, size,\n-\t\t\t\tdevice_to_mask(dev), direction, attrs);\n-}\n-\n-\n-static void dma_iommu_unmap_single(struct device *dev, dma_addr_t dma_handle,\n-\t\t\t\t size_t size,\n-\t\t\t\t enum dma_data_direction direction,\n-\t\t\t\t struct dma_attrs *attrs)\n-{\n-\tiommu_unmap_single(dev->archdata.dma_data, dma_handle, size, direction,\n-\t\t\t attrs);\n-}\n-\n-\n-static int dma_iommu_map_sg(struct device *dev, struct scatterlist *sglist,\n-\t\t\t int nelems, enum dma_data_direction direction,\n-\t\t\t struct dma_attrs *attrs)\n-{\n-\treturn iommu_map_sg(dev, dev->archdata.dma_data, sglist, nelems,\n-\t\t\t device_to_mask(dev), direction, attrs);\n-}\n-\n-static void dma_iommu_unmap_sg(struct device *dev, struct scatterlist *sglist,\n-\t\tint nelems, enum dma_data_direction direction,\n-\t\tstruct dma_attrs *attrs)\n-{\n-\tiommu_unmap_sg(dev->archdata.dma_data, sglist, nelems, direction,\n-\t\t attrs);\n-}\n-\n-/* We support DMA to/from any memory page via the iommu */\n-static int dma_iommu_dma_supported(struct device *dev, u64 mask)\n-{\n-\tstruct iommu_table *tbl = dev->archdata.dma_data;\n-\n-\tif (!tbl || tbl->it_offset > mask) {\n-\t\tprintk(KERN_INFO\n-\t\t \"Warning: IOMMU offset too big for device mask\\n\");\n-\t\tif (tbl)\n-\t\t\tprintk(KERN_INFO\n-\t\t\t \"mask: 0x%08lx, table offset: 0x%08lx\\n\",\n-\t\t\t\tmask, tbl->it_offset);\n-\t\telse\n-\t\t\tprintk(KERN_INFO \"mask: 0x%08lx, table unavailable\\n\",\n-\t\t\t\tmask);\n-\t\treturn 0;\n-\t} else\n-\t\treturn 1;\n-}\n-\n-struct dma_mapping_ops dma_iommu_ops = {\n-\t.alloc_coherent\t= dma_iommu_alloc_coherent,\n-\t.free_coherent\t= dma_iommu_free_coherent,\n-\t.map_single\t= dma_iommu_map_single,\n-\t.unmap_single\t= dma_iommu_unmap_single,\n-\t.map_sg\t\t= dma_iommu_map_sg,\n-\t.unmap_sg\t= dma_iommu_unmap_sg,\n-\t.dma_supported\t= dma_iommu_dma_supported,\n-};\n-EXPORT_SYMBOL(dma_iommu_ops);\n-\n-/*\n- * Generic direct DMA implementation\n- *\n- * This implementation supports a per-device offset that can be applied if\n- * the address at which memory is visible to devices is not 0. Platform code\n- * can set archdata.dma_data to an unsigned long holding the offset. 
By\n- * default the offset is zero.\n- */\n-\n-static unsigned long get_dma_direct_offset(struct device *dev)\n-{\n-\treturn (unsigned long)dev->archdata.dma_data;\n-}\n-\n-static void *dma_direct_alloc_coherent(struct device *dev, size_t size,\n-\t\t\t\t dma_addr_t *dma_handle, gfp_t flag)\n-{\n-\tstruct page *page;\n-\tvoid *ret;\n-\tint node = dev->archdata.numa_node;\n-\n-\tpage = alloc_pages_node(node, flag, get_order(size));\n-\tif (page == NULL)\n-\t\treturn NULL;\n-\tret = page_address(page);\n-\tmemset(ret, 0, size);\n-\t*dma_handle = virt_to_abs(ret) + get_dma_direct_offset(dev);\n-\n-\treturn ret;\n-}\n-\n-static void dma_direct_free_coherent(struct device *dev, size_t size,\n-\t\t\t\t void *vaddr, dma_addr_t dma_handle)\n-{\n-\tfree_pages((unsigned long)vaddr, get_order(size));\n-}\n-\n-static dma_addr_t dma_direct_map_single(struct device *dev, void *ptr,\n-\t\t\t\t\tsize_t size,\n-\t\t\t\t\tenum dma_data_direction direction,\n-\t\t\t\t\tstruct dma_attrs *attrs)\n-{\n-\treturn virt_to_abs(ptr) + get_dma_direct_offset(dev);\n-}\n-\n-static void dma_direct_unmap_single(struct device *dev, dma_addr_t dma_addr,\n-\t\t\t\t size_t size,\n-\t\t\t\t enum dma_data_direction direction,\n-\t\t\t\t struct dma_attrs *attrs)\n-{\n-}\n-\n-static int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl,\n-\t\t\t int nents, enum dma_data_direction direction,\n-\t\t\t struct dma_attrs *attrs)\n-{\n-\tstruct scatterlist *sg;\n-\tint i;\n-\n-\tfor_each_sg(sgl, sg, nents, i) {\n-\t\tsg->dma_address = sg_phys(sg) + get_dma_direct_offset(dev);\n-\t\tsg->dma_length = sg->length;\n-\t}\n-\n-\treturn nents;\n-}\n-\n-static void dma_direct_unmap_sg(struct device *dev, struct scatterlist *sg,\n-\t\t\t\tint nents, enum dma_data_direction direction,\n-\t\t\t\tstruct dma_attrs *attrs)\n-{\n-}\n-\n-static int dma_direct_dma_supported(struct device *dev, u64 mask)\n-{\n-\t/* Could be improved to check for memory though it better be\n-\t * done via some global so platforms can set the limit in case\n-\t * they have limited DMA windows\n-\t */\n-\treturn mask >= DMA_32BIT_MASK;\n-}\n-\n-struct dma_mapping_ops dma_direct_ops = {\n-\t.alloc_coherent\t= dma_direct_alloc_coherent,\n-\t.free_coherent\t= dma_direct_free_coherent,\n-\t.map_single\t= dma_direct_map_single,\n-\t.unmap_single\t= dma_direct_unmap_single,\n-\t.map_sg\t\t= dma_direct_map_sg,\n-\t.unmap_sg\t= dma_direct_unmap_sg,\n-\t.dma_supported\t= dma_direct_dma_supported,\n-};\n-EXPORT_SYMBOL(dma_direct_ops);\n", "prefixes": [] }
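For the read side, the following sketch (again Python `requests`; no authentication is needed for GET) fetches the patch shown above and follows its `mbox` link, which serves the patch email in a form suitable for `git am`. The field names come from the example response; the local output file name is arbitrary.

```python
import requests

resp = requests.get("http://patchwork.ozlabs.org/api/patches/213/")
resp.raise_for_status()
patch = resp.json()

# Field names as they appear in the example response above.
print(patch["name"])    # "POWERPC: Rename dma_64.c to dma.c"
print(patch["state"])   # "accepted"
print(patch["check"])   # aggregate check status, e.g. "pending"

# The mbox URL returns the full patch email, ready for `git am`.
mbox = requests.get(patch["mbox"])
mbox.raise_for_status()
with open("patch-213.mbox", "wb") as f:  # arbitrary local file name
    f.write(mbox.content)
```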