From patchwork Fri May 29 17:14:45 2015
X-Patchwork-Submitter: wdavis@nvidia.com
X-Patchwork-Id: 477997
X-Mailing-List: linux-pci@vger.kernel.org
From: <wdavis@nvidia.com>
To: Joerg Roedel, Bjorn Helgaas
Cc: Terence Ripperda, John Hubbard, Jerome Glisse, Mark Hounschell,
 Konrad Rzeszutek Wilk, Jonathan Corbet, "David S. Miller", Yijing Wang,
 Alex Williamson, Dave Jiang, , , Will Davis
Subject: [PATCH v3 6/7] iommu/vt-d: implement (un)map_resource
Date: Fri, 29 May 2015 12:14:45 -0500
Message-ID: <1432919686-32306-7-git-send-email-wdavis@nvidia.com>
In-Reply-To: <1432919686-32306-1-git-send-email-wdavis@nvidia.com>
References: <1432919686-32306-1-git-send-email-wdavis@nvidia.com>
X-Mailer: git-send-email 2.4.0

From: Will Davis <wdavis@nvidia.com>

Implement 'map_resource' for the Intel IOMMU driver. Simply translate the
resource to a physical address and route it to the same handlers used by
the 'map_page' API. This allows a device to map another device's resource,
enabling peer-to-peer transactions.
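For illustration only (not part of the patch), a peer-to-peer capable caller
could exercise the new op roughly as sketched below. The dma_map_resource()
and dma_unmap_resource() wrappers, the map_peer_bar() helper, and the choice
of BAR are assumptions made for this example, based on the op signatures in
the diff rather than on code from this series:

#include <linux/dma-mapping.h>
#include <linux/pci.h>

/*
 * Hypothetical example: map BAR 1 of a peer PCI device through the IOMMU
 * so that "dev" can reach it with peer-to-peer DMA.  dma_map_resource()
 * is assumed here to take a struct resource plus offset, mirroring the
 * map_resource op below.
 */
static dma_addr_t map_peer_bar(struct device *dev, struct pci_dev *peer)
{
	struct resource *res = &peer->resource[1];	/* example BAR */
	dma_addr_t bus_addr;

	bus_addr = dma_map_resource(dev, res, 0, resource_size(res),
				    DMA_BIDIRECTIONAL);
	if (dma_mapping_error(dev, bus_addr))
		return 0;	/* 0 treated as "no mapping" in this sketch */

	/*
	 * "dev" may now issue DMA to bus_addr; the mapping would later be
	 * torn down with dma_unmap_resource(dev, bus_addr,
	 * resource_size(res), DMA_BIDIRECTIONAL).
	 */
	return bus_addr;
}

On this driver the map side resolves to __intel_map_single() on
res->start + offset, and the unmap side simply releases the IOVA via
intel_unmap(), i.e. the same paths already used by map_page/unmap_page.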
Signed-off-by: Will Davis
Reviewed-by: Terence Ripperda
Reviewed-by: John Hubbard
---
 drivers/iommu/intel-iommu.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 68d43be..0f49eff 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -3095,6 +3095,15 @@ static dma_addr_t intel_map_page(struct device *dev, struct page *page,
 				  dir, *dev->dma_mask);
 }
 
+static dma_addr_t intel_map_resource(struct device *dev, struct resource *res,
+				     unsigned long offset, size_t size,
+				     enum dma_data_direction dir,
+				     struct dma_attrs *attrs)
+{
+	return __intel_map_single(dev, res->start + offset, size,
+				  dir, *dev->dma_mask);
+}
+
 static void flush_unmaps(void)
 {
 	int i, j;
@@ -3226,6 +3235,13 @@ static void intel_unmap_page(struct device *dev, dma_addr_t dev_addr,
 	intel_unmap(dev, dev_addr);
 }
 
+static void intel_unmap_resource(struct device *dev, dma_addr_t dev_addr,
+				 size_t size, enum dma_data_direction dir,
+				 struct dma_attrs *attrs)
+{
+	intel_unmap(dev, dev_addr);
+}
+
 static void *intel_alloc_coherent(struct device *dev, size_t size,
 				  dma_addr_t *dma_handle, gfp_t flags,
 				  struct dma_attrs *attrs)
@@ -3382,6 +3398,8 @@ struct dma_map_ops intel_dma_ops = {
 	.unmap_sg = intel_unmap_sg,
 	.map_page = intel_map_page,
 	.unmap_page = intel_unmap_page,
+	.map_resource = intel_map_resource,
+	.unmap_resource = intel_unmap_resource,
 	.mapping_error = intel_mapping_error,
 };