From patchwork Fri Nov 21 16:26:06 2008
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Joerg Roedel <joerg.roedel@amd.com>
X-Patchwork-Id: 10040
X-Patchwork-Delegate: davem@davemloft.net
From: Joerg Roedel <joerg.roedel@amd.com>
To: Ingo Molnar, Thomas Gleixner
CC: linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
	iommu@lists.linux-foundation.org, Joerg Roedel <joerg.roedel@amd.com>
Subject: [PATCH 06/10] x86: add check code for map/unmap_sg code
Date: Fri, 21 Nov 2008 17:26:06 +0100
Message-ID: <1227284770-19215-7-git-send-email-joerg.roedel@amd.com>
X-Mailer: git-send-email 1.5.6.4
In-Reply-To: <1227284770-19215-1-git-send-email-joerg.roedel@amd.com>
References: <1227284770-19215-1-git-send-email-joerg.roedel@amd.com>
X-Mailing-List: netdev@vger.kernel.org

Impact: detect bugs in map/unmap_sg usage

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 arch/x86/include/asm/dma-mapping.h |    9 ++++-
 arch/x86/include/asm/dma_debug.h   |   20 +++++++++++
 arch/x86/kernel/pci-dma-debug.c    |   63 ++++++++++++++++++++++++++++++++++++
 3 files changed, 91 insertions(+), 1 deletions(-)

diff --git a/arch/x86/include/asm/dma-mapping.h b/arch/x86/include/asm/dma-mapping.h
index c9bead2..c7bdb75 100644
--- a/arch/x86/include/asm/dma-mapping.h
+++ b/arch/x86/include/asm/dma-mapping.h
@@ -126,9 +126,14 @@ dma_map_sg(struct device *hwdev, struct scatterlist *sg,
 	   int nents, int direction)
 {
 	struct dma_mapping_ops *ops = get_dma_ops(hwdev);
+	int ret;
 
 	BUG_ON(!valid_dma_direction(direction));
-	return ops->map_sg(hwdev, sg, nents, direction);
+	ret = ops->map_sg(hwdev, sg, nents, direction);
+
+	debug_map_sg(hwdev, sg, ret, direction);
+
+	return ret;
 }
 
 static inline void
@@ -140,6 +145,8 @@ dma_unmap_sg(struct device *hwdev, struct scatterlist *sg, int nents,
 	BUG_ON(!valid_dma_direction(direction));
 	if (ops->unmap_sg)
 		ops->unmap_sg(hwdev, sg, nents, direction);
+
+	debug_unmap_sg(hwdev, sg, nents, direction);
 }
 
 static inline void
diff --git a/arch/x86/include/asm/dma_debug.h b/arch/x86/include/asm/dma_debug.h
index ba4d9b7..ff06d1c 100644
--- a/arch/x86/include/asm/dma_debug.h
+++ b/arch/x86/include/asm/dma_debug.h
@@ -51,6 +51,14 @@ extern
 void debug_unmap_single(struct device *dev, dma_addr_t addr,
 			size_t size, int direction);
 
+extern
+void debug_map_sg(struct device *dev, struct scatterlist *sg,
+		  int nents, int direction);
+
+extern
+void debug_unmap_sg(struct device *dev, struct scatterlist *sglist,
+		    int nelems, int dir);
+
 #else /* CONFIG_DMA_API_DEBUG */
 
 static inline
@@ -70,6 +78,18 @@ void debug_unmap_single(struct device *dev, dma_addr_t addr,
 {
 }
 
+static inline
+void debug_map_sg(struct device *dev, struct scatterlist *sg,
+		  int nents, int direction)
+{
+}
+
+static inline
+void debug_unmap_sg(struct device *dev, struct scatterlist *sglist,
+		    int nelems, int dir)
+{
+}
+
 #endif /* CONFIG_DMA_API_DEBUG */
 
 #endif /* __ASM_X86_DMA_DEBUG */
diff --git a/arch/x86/kernel/pci-dma-debug.c b/arch/x86/kernel/pci-dma-debug.c
index 9afb6c8..55ef69a 100644
--- a/arch/x86/kernel/pci-dma-debug.c
+++ b/arch/x86/kernel/pci-dma-debug.c
@@ -289,3 +289,66 @@ void debug_unmap_single(struct device *dev, dma_addr_t addr,
 }
 EXPORT_SYMBOL(debug_unmap_single);
 
+void debug_map_sg(struct device *dev, struct scatterlist *sg,
+		  int nents, int direction)
+{
+	unsigned long flags;
+	struct dma_debug_entry *entry;
+	struct scatterlist *s;
+	int i;
+
+	for_each_sg(sg, s, nents, i) {
+		entry = dma_entry_alloc();
+		if (!entry)
+			return;
+
+		entry->type      = DMA_DEBUG_SG;
+		entry->dev       = dev;
+		entry->cpu_addr  = sg_virt(s);
+		entry->size      = s->length;
+		entry->dev_addr  = s->dma_address;
+		entry->direction = direction;
+
+		spin_lock_irqsave(&dma_lock, flags);
+		add_dma_entry(entry);
+		spin_unlock_irqrestore(&dma_lock, flags);
+	}
+}
+EXPORT_SYMBOL(debug_map_sg);
+
+void debug_unmap_sg(struct device *dev, struct scatterlist *sglist,
+		    int nelems, int dir)
+{
+	unsigned long flags;
+	struct dma_debug_entry *entry;
+	struct scatterlist *s;
+	int i;
+
+	spin_lock_irqsave(&dma_lock, flags);
+
+	for_each_sg(sglist, s, nelems, i) {
+
+		struct dma_debug_entry ref = {
+			.type      = DMA_DEBUG_SG,
+			.dev       = dev,
+			.cpu_addr  = sg_virt(s),
+			.dev_addr  = s->dma_address,
+			.size      = s->length,
+			.direction = dir,
+		};
+
+		if (ref.dev_addr == bad_dma_address)
+			continue;
+
+		entry = find_dma_entry(&ref);
+
+		if (check_unmap(&ref, entry)) {
+			remove_dma_entry(entry);
+			dma_entry_free(entry);
+		}
+	}
+
+	spin_unlock_irqrestore(&dma_lock, flags);
+}
+EXPORT_SYMBOL(debug_unmap_sg);
+
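
For reference, a minimal driver-side sequence that the new hooks would observe.
This is only an illustrative sketch; "dev", "buf0" and "buf1" are placeholders
and not part of this patch:

	struct scatterlist sgl[2];
	int mapped;

	sg_init_table(sgl, 2);
	sg_set_buf(&sgl[0], buf0, PAGE_SIZE);
	sg_set_buf(&sgl[1], buf1, PAGE_SIZE);

	/* debug_map_sg() records one dma_debug_entry per mapped element */
	mapped = dma_map_sg(dev, sgl, 2, DMA_TO_DEVICE);
	if (!mapped)
		return -ENOMEM;

	/* ... program sg_dma_address()/sg_dma_len() of each element ... */

	/*
	 * debug_unmap_sg() looks every element up again; a wrong device,
	 * direction or scatterlist here is flagged by check_unmap().
	 * Note that the element count passed here is the original nents,
	 * not the value returned by dma_map_sg().
	 */
	dma_unmap_sg(dev, sgl, 2, DMA_TO_DEVICE);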