From patchwork Fri Jan 9 16:19:29 2009
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Joerg Roedel <joerg.roedel@amd.com>
X-Patchwork-Id: 17533
X-Patchwork-Delegate: davem@davemloft.net
From: Joerg Roedel <joerg.roedel@amd.com>
To: linux-kernel@vger.kernel.org
CC: mingo@redhat.com, dwmw2@infradead.org, fujita.tomonori@lab.ntt.co.jp,
    netdev@vger.kernel.org, iommu@lists.linux-foundation.org,
    Joerg Roedel <joerg.roedel@amd.com>
Subject: [PATCH 15/16] dma-debug: x86 architecture bindings
Date: Fri, 9 Jan 2009 17:19:29 +0100
Message-ID: <1231517970-20288-16-git-send-email-joerg.roedel@amd.com>
X-Mailer: git-send-email 1.5.6.4
In-Reply-To: <1231517970-20288-1-git-send-email-joerg.roedel@amd.com>
References: <1231517970-20288-1-git-send-email-joerg.roedel@amd.com>
X-Mailing-List: netdev@vger.kernel.org

Impact: make use of DMA-API debugging code in x86

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 arch/x86/Kconfig                   |    1 +
 arch/x86/include/asm/dma-mapping.h |   30 ++++++++++++++++++++++++++----
 arch/x86/kernel/pci-dma.c          |    5 +++++
 3 files changed, 32 insertions(+), 4 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 862adb9..68a806c 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -39,6 +39,7 @@ config X86
 	select HAVE_GENERIC_DMA_COHERENT if X86_32
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS
 	select USER_STACKTRACE_SUPPORT
+	select HAVE_DMA_API_DEBUG
 
 config ARCH_DEFCONFIG
 	string
diff --git a/arch/x86/include/asm/dma-mapping.h b/arch/x86/include/asm/dma-mapping.h
index 4035357..939d5b3 100644
--- a/arch/x86/include/asm/dma-mapping.h
+++ b/arch/x86/include/asm/dma-mapping.h
@@ -7,6 +7,7 @@
  */
 
 #include
+#include <linux/dma-debug.h>
 #include
 #include
 #include
@@ -93,9 +94,12 @@ dma_map_single(struct device *hwdev, void *ptr, size_t size,
 	       int direction)
 {
 	struct dma_mapping_ops *ops = get_dma_ops(hwdev);
+	dma_addr_t addr;
 
 	BUG_ON(!valid_dma_direction(direction));
-	return ops->map_single(hwdev, virt_to_phys(ptr), size, direction);
+	addr = ops->map_single(hwdev, virt_to_phys(ptr), size, direction);
+	debug_map_single(hwdev, ptr, size, direction, addr);
+	return addr;
 }
 
 static inline void
@@ -105,6 +109,7 @@ dma_unmap_single(struct device *dev, dma_addr_t addr, size_t size,
 	struct dma_mapping_ops *ops = get_dma_ops(dev);
 
 	BUG_ON(!valid_dma_direction(direction));
+	debug_unmap_single(dev, addr, size, direction);
 	if (ops->unmap_single)
 		ops->unmap_single(dev, addr, size, direction);
 }
@@ -114,9 +119,13 @@ dma_map_sg(struct device *hwdev, struct scatterlist *sg,
 	   int nents, int direction)
 {
 	struct dma_mapping_ops *ops = get_dma_ops(hwdev);
+	int ret;
 
 	BUG_ON(!valid_dma_direction(direction));
-	return ops->map_sg(hwdev, sg, nents, direction);
+	ret = ops->map_sg(hwdev, sg, nents, direction);
+	debug_map_sg(hwdev, sg, ret, direction);
+
+	return ret;
 }
 
 static inline void
@@ -126,6 +135,7 @@ dma_unmap_sg(struct device *hwdev, struct scatterlist *sg, int nents,
 	struct dma_mapping_ops *ops = get_dma_ops(hwdev);
 
 	BUG_ON(!valid_dma_direction(direction));
+	debug_unmap_sg(hwdev, sg, nents, direction);
 	if (ops->unmap_sg)
 		ops->unmap_sg(hwdev, sg, nents, direction);
 }
@@ -137,6 +147,7 @@ dma_sync_single_for_cpu(struct device *hwdev, dma_addr_t dma_handle,
 	struct dma_mapping_ops *ops = get_dma_ops(hwdev);
 
 	BUG_ON(!valid_dma_direction(direction));
+	debug_sync_single_for_cpu(hwdev, dma_handle, size, direction);
 	if (ops->sync_single_for_cpu)
 		ops->sync_single_for_cpu(hwdev, dma_handle, size, direction);
 	flush_write_buffers();
@@ -149,6 +160,7 @@ dma_sync_single_for_device(struct device *hwdev, dma_addr_t dma_handle,
 	struct dma_mapping_ops *ops = get_dma_ops(hwdev);
 
 	BUG_ON(!valid_dma_direction(direction));
+	debug_sync_single_for_device(hwdev, dma_handle, size, direction);
 	if (ops->sync_single_for_device)
 		ops->sync_single_for_device(hwdev, dma_handle, size, direction);
 	flush_write_buffers();
@@ -161,6 +173,8 @@ dma_sync_single_range_for_cpu(struct device *hwdev, dma_addr_t dma_handle,
 	struct dma_mapping_ops *ops = get_dma_ops(hwdev);
 
 	BUG_ON(!valid_dma_direction(direction));
+	debug_sync_single_range_for_cpu(hwdev, dma_handle, offset, size,
+					direction);
 	if (ops->sync_single_range_for_cpu)
 		ops->sync_single_range_for_cpu(hwdev, dma_handle, offset,
 					       size, direction);
@@ -175,6 +189,8 @@ dma_sync_single_range_for_device(struct device *hwdev, dma_addr_t dma_handle,
 	struct dma_mapping_ops *ops = get_dma_ops(hwdev);
 
 	BUG_ON(!valid_dma_direction(direction));
+	debug_sync_single_range_for_device(hwdev, dma_handle, offset,
+					   size, direction);
 	if (ops->sync_single_range_for_device)
 		ops->sync_single_range_for_device(hwdev,
 						  dma_handle, offset, size, direction);
@@ -188,6 +204,7 @@ dma_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
 	struct dma_mapping_ops *ops = get_dma_ops(hwdev);
 
 	BUG_ON(!valid_dma_direction(direction));
+	debug_sync_sg_for_cpu(hwdev, sg, nelems, direction);
 	if (ops->sync_sg_for_cpu)
 		ops->sync_sg_for_cpu(hwdev, sg, nelems, direction);
 	flush_write_buffers();
@@ -200,6 +217,7 @@ dma_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
 	struct dma_mapping_ops *ops = get_dma_ops(hwdev);
 
 	BUG_ON(!valid_dma_direction(direction));
+	debug_sync_sg_for_device(hwdev, sg, nelems, direction);
 	if (ops->sync_sg_for_device)
 		ops->sync_sg_for_device(hwdev, sg, nelems, direction);
 
@@ -267,7 +285,7 @@ dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_handle,
 		gfp_t gfp)
 {
 	struct dma_mapping_ops *ops = get_dma_ops(dev);
-	void *memory;
+	void *memory, *addr;
 
 	gfp &= ~(__GFP_DMA | __GFP_HIGHMEM | __GFP_DMA32);
 
@@ -285,8 +303,11 @@ dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_handle,
 	if (!ops->alloc_coherent)
 		return NULL;
 
-	return ops->alloc_coherent(dev, size, dma_handle,
+	addr = ops->alloc_coherent(dev, size, dma_handle,
 				   dma_alloc_coherent_gfp_flags(dev, gfp));
+	debug_alloc_coherent(dev, size, *dma_handle, addr);
+
+	return addr;
 }
 
 static inline void dma_free_coherent(struct device *dev, size_t size,
@@ -299,6 +320,7 @@ static inline void dma_free_coherent(struct device *dev, size_t size,
 	if (dma_release_from_coherent(dev, get_order(size), vaddr))
 		return;
 
+	debug_free_coherent(dev, size, vaddr, bus);
 	if (ops->free_coherent)
 		ops->free_coherent(dev, size, vaddr, bus);
 }
diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c
index b254285..c8efbcc 100644
--- a/arch/x86/kernel/pci-dma.c
+++ b/arch/x86/kernel/pci-dma.c
@@ -44,6 +44,9 @@ struct device x86_dma_fallback_dev = {
 };
 EXPORT_SYMBOL(x86_dma_fallback_dev);
 
+/* Number of entries preallocated for DMA-API debugging */
+#define PREALLOC_ENTRIES 8192 /* needs 512kb */
+
 int dma_set_mask(struct device *dev, u64 mask)
 {
 	if (!dev->dma_mask || !dma_supported(dev, mask))
@@ -265,6 +268,8 @@ EXPORT_SYMBOL(dma_supported);
 
 static int __init pci_iommu_init(void)
 {
+	dma_debug_init(PREALLOC_ENTRIES);
+
 	calgary_iommu_init();
 
 	intel_iommu_init();
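
For readers who want to see what these hooks buy, below is a minimal, hypothetical userspace sketch of the bookkeeping idea behind DMA-API debugging: every map records an entry, every unmap has to match a previously recorded entry (address, size and direction), and anything still recorded at shutdown is reported as a leak. This is not the kernel's dma-debug implementation and none of it is taken from the patch above; the debug_* names are reused purely for illustration, and the fixed-size table stands in for the preallocated entry pool.

/* Hypothetical userspace sketch of the dma-debug bookkeeping idea. */
#include <stdio.h>
#include <stddef.h>

struct dbg_entry {
	void   *cpu_addr;	/* address the caller mapped */
	size_t  size;
	int     direction;
	int     in_use;
};

#define MAX_ENTRIES 32		/* stand-in for the preallocated pool */
static struct dbg_entry entries[MAX_ENTRIES];

/* record a mapping, as a debug_map_single()-style hook would */
static void debug_map_single(void *ptr, size_t size, int direction)
{
	for (int i = 0; i < MAX_ENTRIES; i++) {
		if (!entries[i].in_use) {
			entries[i] = (struct dbg_entry){ ptr, size, direction, 1 };
			return;
		}
	}
	fprintf(stderr, "sketch: out of debug entries\n");
}

/* an unmap must match a recorded mapping, as a debug_unmap_single()-style hook would check */
static void debug_unmap_single(void *ptr, size_t size, int direction)
{
	for (int i = 0; i < MAX_ENTRIES; i++) {
		if (entries[i].in_use && entries[i].cpu_addr == ptr) {
			if (entries[i].size != size || entries[i].direction != direction)
				fprintf(stderr, "sketch: unmap of %p with wrong size/direction\n", ptr);
			entries[i].in_use = 0;
			return;
		}
	}
	fprintf(stderr, "sketch: unmap of never-mapped address %p\n", ptr);
}

/* shutdown-time check: whatever is still recorded was leaked */
static void debug_check_leaks(void)
{
	for (int i = 0; i < MAX_ENTRIES; i++)
		if (entries[i].in_use)
			fprintf(stderr, "sketch: leaked mapping %p (%zu bytes)\n",
				entries[i].cpu_addr, entries[i].size);
}

int main(void)
{
	char buf1[64], buf2[64];

	debug_map_single(buf1, sizeof(buf1), 1);	/* say, DMA_TO_DEVICE */
	debug_map_single(buf2, sizeof(buf2), 2);	/* say, DMA_FROM_DEVICE */

	debug_unmap_single(buf1, sizeof(buf1), 1);	/* balanced, nothing reported */
	/* buf2 is never unmapped -> flagged by the leak check */

	debug_check_leaks();
	return 0;
}

In the patch itself the same idea is driven by the debug_* calls added to the dma_* wrappers and by dma_debug_init(PREALLOC_ENTRIES) in pci_iommu_init(); the tracking entries are set up front, presumably because the mapping paths being checked can run in contexts where allocating tracking memory on demand would not be safe.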