From patchwork Tue Aug 18 19:04:07 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: wdavis@nvidia.com
X-Patchwork-Id: 508442
From: Will Davis
To: Bjorn Helgaas
CC: Alex Williamson, Joerg Roedel, Konrad Wilk, "Mark Hounschell",
	"David S. Miller", Jonathan Corbet, Terence Ripperda, John Hubbard,
	Jerome Glisse, "Will Davis"
Subject: [PATCH v5 11/13] iommu/vt-d: implement (un)map_peer_resource
Date: Tue, 18 Aug 2015 14:04:07 -0500
Message-ID: <1439924649-29698-12-git-send-email-wdavis@nvidia.com>
X-Mailer: git-send-email 2.5.0
In-Reply-To: <1439924649-29698-1-git-send-email-wdavis@nvidia.com>
References: <1439924649-29698-1-git-send-email-wdavis@nvidia.com>
X-NVConfidentiality: public
Sender: linux-pci-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-pci@vger.kernel.org

Implement 'map_peer_resource' for the Intel IOMMU driver. Simply
translate the resource to a physical address and route it to the same
handlers used by the 'map_page' API. This allows a device to map
another's resource, to enable peer-to-peer transactions.

Add behind CONFIG_HAS_DMA_P2P guards, since the dma_map_ops members are
behind them as well.

Signed-off-by: Will Davis
Reviewed-by: Terence Ripperda
Reviewed-by: John Hubbard
---
 drivers/iommu/intel-iommu.c | 38 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index a98a7b2..0d3746f 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -3544,6 +3544,40 @@ static void intel_unmap_page(struct device *dev, dma_addr_t dev_addr,
 	intel_unmap(dev, dev_addr);
 }
 
+#ifdef CONFIG_HAS_DMA_P2P
+static dma_peer_addr_t intel_map_peer_resource(struct device *dev,
+					       struct device *peer,
+					       struct resource *res,
+					       unsigned long offset,
+					       size_t size,
+					       enum dma_data_direction dir,
+					       struct dma_attrs *attrs)
+{
+	struct pci_dev *pdev;
+	struct pci_dev *ppeer;
+
+	if (!dev_is_pci(dev) || !dev_is_pci(peer))
+		return 0;
+
+	pdev = to_pci_dev(dev);
+	ppeer = to_pci_dev(peer);
+
+	if (!pci_peer_traffic_supported(pdev, ppeer))
+		return 0;
+
+	return __intel_map_single(dev, res->start + offset, size,
+				  dir, *dev->dma_mask);
+}
+
+static void intel_unmap_peer_resource(struct device *dev,
+				      dma_peer_addr_t dev_addr,
+				      size_t size, enum dma_data_direction dir,
+				      struct dma_attrs *attrs)
+{
+	intel_unmap(dev, dev_addr);
+}
+#endif
+
 static void *intel_alloc_coherent(struct device *dev, size_t size,
 				  dma_addr_t *dma_handle, gfp_t flags,
 				  struct dma_attrs *attrs)
@@ -3700,6 +3734,10 @@ struct dma_map_ops intel_dma_ops = {
 	.unmap_sg = intel_unmap_sg,
 	.map_page = intel_map_page,
 	.unmap_page = intel_unmap_page,
+#ifdef CONFIG_HAS_DMA_P2P
+	.map_peer_resource = intel_map_peer_resource,
+	.unmap_peer_resource = intel_unmap_peer_resource,
+#endif
 	.mapping_error = intel_mapping_error,
 };