From patchwork Fri May 30 11:20:16 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Hiroshi Doyu
X-Patchwork-Id: 354075
From: Hiroshi Doyu
To: 
Subject: [PATCHv8 03/21] iommu/of: check if dependee iommu is ready or not
Date: Fri, 30 May 2014 14:20:16 +0300
Message-ID: <1401448834-32659-4-git-send-email-hdoyu@nvidia.com>
X-Mailer: git-send-email 2.0.0.rc1.15.g7e76a2f
In-Reply-To: <1401448834-32659-1-git-send-email-hdoyu@nvidia.com>
References: <1401448834-32659-1-git-send-email-hdoyu@nvidia.com>
X-NVConfidentiality: public
Sender: linux-tegra-owner@vger.kernel.org
Precedence: bulk
List-ID: 
X-Mailing-List: linux-tegra@vger.kernel.org

IOMMU devices on the bus need to be populated first; iommu master
devices are populated later. With CONFIG_OF_IOMMU, the "iommus=" DT
binding is used to identify whether a device can be an iommu master
or not. If it can, we defer populating that device until the iommu
device it depends on has been populated.
Signed-off-by: Hiroshi Doyu
---
 drivers/iommu/of_iommu.c | 13 +++++++++++++
 include/linux/of_iommu.h |  6 ++++++
 2 files changed, 19 insertions(+)

diff --git a/drivers/iommu/of_iommu.c b/drivers/iommu/of_iommu.c
index 5d1aeb90eae3..b9f5081515ae 100644
--- a/drivers/iommu/of_iommu.c
+++ b/drivers/iommu/of_iommu.c
@@ -125,3 +125,16 @@ int of_get_dma_window(struct device_node *dn, const char *prefix, int index,
 	return 0;
 }
 EXPORT_SYMBOL_GPL(of_get_dma_window);
+
+int of_iommu_attach(struct device *dev)
+{
+	struct of_phandle_iter iter;
+
+	of_property_for_each_phandle_with_args(iter, dev->of_node, "iommus",
+					       "iommu-cells", 0) {
+		if (!of_find_iommu_by_node(iter.out_args.np))
+			return -EPROBE_DEFER;
+	}
+
+	return 0;
+}
diff --git a/include/linux/of_iommu.h b/include/linux/of_iommu.h
index 108306898c38..0e2f5681b45a 100644
--- a/include/linux/of_iommu.h
+++ b/include/linux/of_iommu.h
@@ -14,6 +14,7 @@ extern int of_get_dma_window(struct device_node *dn, const char *prefix,
 
 void iommu_add(struct iommu *iommu);
 void iommu_del(struct iommu *iommu);
+int of_iommu_attach(struct device *dev);
 
 #else
 
@@ -32,6 +33,11 @@ static inline void iommu_del(struct iommu *iommu)
 {
 }
 
+static inline int of_iommu_attach(struct device *dev)
+{
+	return 0;
+}
+
 #endif /* CONFIG_OF_IOMMU */
 
 #endif /* __OF_IOMMU_H */
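
For context, a minimal sketch of how a consumer could gate its probe on
of_iommu_attach(); this is not part of the patch. The driver name,
compatible string and DT values below are hypothetical, and only
of_iommu_attach() itself (plus the "iommus=" binding the series relies
on) comes from this series:

#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_iommu.h>
#include <linux/platform_device.h>

/*
 * Assumed DT shape for an iommu master (values illustrative only):
 *
 *	master@0 {
 *		compatible = "vendor,example-master";
 *		iommus = <&smmu 1>;
 *	};
 */

static int example_master_probe(struct platform_device *pdev)
{
	int err;

	/*
	 * If any IOMMU listed in this device's "iommus=" property has not
	 * been populated yet, of_iommu_attach() returns -EPROBE_DEFER and
	 * the driver core retries the probe once more devices have bound.
	 */
	err = of_iommu_attach(&pdev->dev);
	if (err)
		return err;

	/* All referenced IOMMUs are ready; continue normal setup. */
	return 0;
}

static const struct of_device_id example_master_of_match[] = {
	{ .compatible = "vendor,example-master" },
	{ /* sentinel */ }
};

static struct platform_driver example_master_driver = {
	.probe	= example_master_probe,
	.driver	= {
		.name		= "example-master",
		.of_match_table	= example_master_of_match,
	},
};
module_platform_driver(example_master_driver);

Since the !CONFIG_OF_IOMMU stub simply returns 0, such a caller needs no
ifdefs and always proceeds when the kernel has no OF IOMMU support.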