From patchwork Wed Jul 22 21:39:53 2015
X-Patchwork-Submitter: wdavis@nvidia.com
X-Patchwork-Id: 498843
From: Will Davis
To: Bjorn Helgaas, Alex Williamson, Joerg Roedel
Cc: Konrad Wilk, Mark Hounschell, "David S. Miller", Jonathan Corbet,
    Terence Ripperda, John Hubbard, Jerome Glisse
Subject: [PATCH v4 08/12] iommu/amd: Implement (un)map_peer_resource
Date: Wed, 22 Jul 2015 16:39:53 -0500
Message-ID: <1437601197-6481-9-git-send-email-wdavis@nvidia.com>
X-Mailer: git-send-email 2.4.6
In-Reply-To: <1437601197-6481-1-git-send-email-wdavis@nvidia.com>
References: <1437601197-6481-1-git-send-email-wdavis@nvidia.com>
X-Mailing-List: linux-pci@vger.kernel.org

Implement 'map_peer_resource' for the AMD IOMMU driver. Generalize the
existing map_page implementation to operate on a physical address, and make
both map_page and map_peer_resource thin wrappers around that helper (and
similarly, make unmap_page and unmap_peer_resource wrappers around a common
unmap helper).

This allows a device to map another device's resource, enabling peer-to-peer
transactions between the two.

This is guarded behind CONFIG_HAS_DMA_P2P, since the new struct dma_map_ops
members are as well.
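For context, the sketch below shows how a caller might use the new DMA op
from a driver. The dma_map_peer_resource() wrapper name and the
dma_peer_addr_t type come from earlier patches in this series; the exact
wrapper signature and the DMA_ERROR_CODE check are assumptions made for
illustration, mirroring the ops signature and error return used in the
driver code below, not something this patch defines:

	/*
	 * Hypothetical caller-side sketch: map BAR 'bar' of 'peer' into
	 * 'dev's DMA address space so that 'dev' can target it with DMA.
	 */
	static int example_map_peer_bar(struct pci_dev *dev,
					struct pci_dev *peer, int bar,
					dma_peer_addr_t *dma_out)
	{
		struct resource *res = &peer->resource[bar];
		dma_peer_addr_t dma;

		/* Assumed wrapper; mirrors the map_peer_resource op below. */
		dma = dma_map_peer_resource(&dev->dev, &peer->dev, res, 0,
					    resource_size(res),
					    DMA_BIDIRECTIONAL, NULL);
		/* Error return assumed to match the driver's DMA_ERROR_CODE. */
		if (dma == DMA_ERROR_CODE)
			return -EIO;

		*dma_out = dma;
		return 0;
	}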
Signed-off-by: Will Davis
Reviewed-by: Terence Ripperda
Reviewed-by: John Hubbard
---
 drivers/iommu/amd_iommu.c | 99 ++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 86 insertions(+), 13 deletions(-)

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index a57e9b7..adf9496 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -471,6 +471,10 @@ DECLARE_STATS_COUNTER(cnt_map_single);
 DECLARE_STATS_COUNTER(cnt_unmap_single);
 DECLARE_STATS_COUNTER(cnt_map_sg);
 DECLARE_STATS_COUNTER(cnt_unmap_sg);
+#ifdef CONFIG_HAS_DMA_P2P
+DECLARE_STATS_COUNTER(cnt_map_peer_resource);
+DECLARE_STATS_COUNTER(cnt_unmap_peer_resource);
+#endif
 DECLARE_STATS_COUNTER(cnt_alloc_coherent);
 DECLARE_STATS_COUNTER(cnt_free_coherent);
 DECLARE_STATS_COUNTER(cross_page);
@@ -509,6 +513,10 @@ static void amd_iommu_stats_init(void)
 	amd_iommu_stats_add(&cnt_unmap_single);
 	amd_iommu_stats_add(&cnt_map_sg);
 	amd_iommu_stats_add(&cnt_unmap_sg);
+#ifdef CONFIG_HAS_DMA_P2P
+	amd_iommu_stats_add(&cnt_map_peer_resource);
+	amd_iommu_stats_add(&cnt_unmap_peer_resource);
+#endif
 	amd_iommu_stats_add(&cnt_alloc_coherent);
 	amd_iommu_stats_add(&cnt_free_coherent);
 	amd_iommu_stats_add(&cross_page);
@@ -2585,20 +2593,16 @@ static void __unmap_single(struct dma_ops_domain *dma_dom,
 }
 
 /*
- * The exported map_single function for dma_ops.
+ * Wrapper function that contains code common to mapping a physical address
+ * range from a page or a resource.
  */
-static dma_addr_t map_page(struct device *dev, struct page *page,
-			   unsigned long offset, size_t size,
-			   enum dma_data_direction dir,
-			   struct dma_attrs *attrs)
+static dma_addr_t __map_phys(struct device *dev, phys_addr_t paddr,
+			     size_t size, enum dma_data_direction dir)
 {
 	unsigned long flags;
 	struct protection_domain *domain;
 	dma_addr_t addr;
 	u64 dma_mask;
-	phys_addr_t paddr = page_to_phys(page) + offset;
-
-	INC_STATS_COUNTER(cnt_map_single);
 
 	domain = get_domain(dev);
 	if (PTR_ERR(domain) == -EINVAL)
@@ -2624,16 +2628,15 @@ out:
 }
 
 /*
- * The exported unmap_single function for dma_ops.
+ * Wrapper function that contains code common to unmapping a physical address
+ * range from a page or a resource.
  */
-static void unmap_page(struct device *dev, dma_addr_t dma_addr, size_t size,
-		       enum dma_data_direction dir, struct dma_attrs *attrs)
+static void __unmap_phys(struct device *dev, dma_addr_t dma_addr, size_t size,
+			 enum dma_data_direction dir)
 {
 	unsigned long flags;
 	struct protection_domain *domain;
 
-	INC_STATS_COUNTER(cnt_unmap_single);
-
 	domain = get_domain(dev);
 	if (IS_ERR(domain))
 		return;
@@ -2707,6 +2710,72 @@ unmap:
 }
 
 /*
+ * The exported map_single function for dma_ops.
+ */
+static dma_addr_t map_page(struct device *dev, struct page *page,
+			   unsigned long offset, size_t size,
+			   enum dma_data_direction dir,
+			   struct dma_attrs *attrs)
+{
+	INC_STATS_COUNTER(cnt_map_single);
+
+	return __map_phys(dev, page_to_phys(page) + offset, size, dir);
+}
+
+/*
+ * The exported unmap_single function for dma_ops.
+ */
+static void unmap_page(struct device *dev, dma_addr_t dma_addr, size_t size,
+		       enum dma_data_direction dir, struct dma_attrs *attrs)
+{
+	INC_STATS_COUNTER(cnt_unmap_single);
+
+	__unmap_phys(dev, dma_addr, size, dir);
+}
+
+#ifdef CONFIG_HAS_DMA_P2P
+/*
+ * The exported map_peer_resource function for dma_ops.
+ */
+static dma_peer_addr_t map_peer_resource(struct device *dev,
+					 struct device *peer,
+					 struct resource *res,
+					 unsigned long offset,
+					 size_t size,
+					 enum dma_data_direction dir,
+					 struct dma_attrs *attrs)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+	struct pci_dev *ppeer = to_pci_dev(peer);
+	struct pci_host_bridge *hbdev = pci_find_host_bridge(pdev->bus);
+	struct pci_host_bridge *hbpeer = pci_find_host_bridge(ppeer->bus);
+
+	INC_STATS_COUNTER(cnt_map_peer_resource);
+
+	/*
+	 * Disallow the peer-to-peer mapping if the devices do not share a host
+	 * bridge.
+	 */
+	if (hbdev != hbpeer)
+		return DMA_ERROR_CODE;
+
+	return __map_phys(dev, res->start + offset, size, dir);
+}
+
+/*
+ * The exported unmap_peer_resource function for dma_ops.
+ */
+static void unmap_peer_resource(struct device *dev, dma_peer_addr_t dma_addr,
+				size_t size, enum dma_data_direction dir,
+				struct dma_attrs *attrs)
+{
+	INC_STATS_COUNTER(cnt_unmap_peer_resource);
+
+	__unmap_phys(dev, dma_addr, size, dir);
+}
+#endif
+
+/*
  * The exported map_sg function for dma_ops (handles scatter-gather
  * lists).
  */
@@ -2852,6 +2921,10 @@ static struct dma_map_ops amd_iommu_dma_ops = {
 	.unmap_page = unmap_page,
 	.map_sg = map_sg,
 	.unmap_sg = unmap_sg,
+#ifdef CONFIG_HAS_DMA_P2P
+	.map_peer_resource = map_peer_resource,
+	.unmap_peer_resource = unmap_peer_resource,
+#endif
 	.dma_supported = amd_iommu_dma_supported,
 };
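
For reference, the host-bridge restriction enforced by map_peer_resource()
above can also be expressed as a small stand-alone helper. This is only an
illustrative sketch, not part of the patch; it assumes pci_find_host_bridge()
is visible to the caller, as it is to this driver:

	/*
	 * Peer-to-peer mappings are only allowed when both devices sit
	 * under the same host bridge, so the traffic never has to be
	 * routed across host bridges.
	 */
	static bool example_same_host_bridge(struct pci_dev *a,
					     struct pci_dev *b)
	{
		return pci_find_host_bridge(a->bus) ==
		       pci_find_host_bridge(b->bus);
	}

With such a helper, a caller could check eligibility up front instead of
relying solely on the DMA_ERROR_CODE return from map_peer_resource().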