From patchwork Tue Aug 18 19:04:09 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: wdavis@nvidia.com
X-Patchwork-Id: 508440
From: Will Davis <wdavis@nvidia.com>
To: Bjorn Helgaas
Cc: Alex Williamson, Joerg Roedel, Konrad Wilk, Mark Hounschell,
 "David S. Miller", Jonathan Corbet, Terence Ripperda, John Hubbard,
 Jerome Glisse, Will Davis
Subject: [PATCH v5 13/13] x86: declare support for DMA P2P
Date: Tue, 18 Aug 2015 14:04:09 -0500
Message-ID: <1439924649-29698-14-git-send-email-wdavis@nvidia.com>
X-Mailer: git-send-email 2.5.0
In-Reply-To: <1439924649-29698-1-git-send-email-wdavis@nvidia.com>
References: <1439924649-29698-1-git-send-email-wdavis@nvidia.com>
X-NVConfidentiality: public
Sender: linux-pci-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-pci@vger.kernel.org

Use CONFIG_HAS_DMA_P2P to declare that we want the DMA peer resource
APIs enabled on x86, and provide the requisite dma_peer_mapping_error()
implementation.

Signed-off-by: Will Davis <wdavis@nvidia.com>
---
 arch/x86/Kconfig                   |  1 +
 arch/x86/include/asm/dma-mapping.h | 18 ++++++++++++++++--
 2 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 3dbb7e7..581d1ad 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -149,6 +149,7 @@ config X86
 	select VIRT_TO_BUS
 	select X86_DEV_DMA_OPS if X86_64
 	select X86_FEATURE_NAMES if PROC_FS
+	select HAS_DMA_P2P
 
 config INSTRUCTION_DECODER
 	def_bool y

diff --git a/arch/x86/include/asm/dma-mapping.h b/arch/x86/include/asm/dma-mapping.h
index 1f5b728..64472d8 100644
--- a/arch/x86/include/asm/dma-mapping.h
+++ b/arch/x86/include/asm/dma-mapping.h
@@ -44,16 +44,30 @@ static inline struct dma_map_ops *get_dma_ops(struct device *dev)
 #include <asm-generic/dma-mapping-common.h>
 
 /* Make sure we keep the same behaviour */
-static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
+static inline int dma_mapping_error_common(struct device *dev, u64 dma_addr)
 {
 	struct dma_map_ops *ops = get_dma_ops(dev);
-	debug_dma_mapping_error(dev, dma_addr);
 	if (ops->mapping_error)
 		return ops->mapping_error(dev, dma_addr);
 
 	return (dma_addr == DMA_ERROR_CODE);
 }
 
+static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
+{
+	debug_dma_mapping_error(dev, dma_addr);
+	return dma_mapping_error_common(dev, (u64)dma_addr);
+}
+
+#ifdef CONFIG_HAS_DMA_P2P
+static inline int dma_peer_mapping_error(struct device *dev,
+					 dma_peer_addr_t dma_addr)
+{
+	debug_dma_peer_mapping_error(dev, dma_addr);
+	return dma_mapping_error_common(dev, (u64)dma_addr);
+}
+#endif
+
 #define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
 #define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)