From patchwork Mon Nov 27 09:50:54 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Thierry Reding
X-Patchwork-Id: 841539
From: Thierry Reding
To: Joerg Roedel
Cc: Jonathan Hunter, Mikko Perttunen, iommu@lists.linux-foundation.org,
	linux-tegra@vger.kernel.org
Subject: [PATCH 1/2] iommu/tegra: Allow devices to be grouped
Date: Mon, 27 Nov 2017 10:50:54 +0100
Message-Id: <20171127095055.21486-2-thierry.reding@gmail.com>
X-Mailer: git-send-email 2.15.0
In-Reply-To: <20171127095055.21486-1-thierry.reding@gmail.com>
References: <20171127095055.21486-1-thierry.reding@gmail.com>
Sender: linux-tegra-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-tegra@vger.kernel.org

From: Thierry Reding

Implement the ->device_group() and ->of_xlate() callbacks, which are used
to group devices. Each group can then share a single domain.

This is done primarily to achieve the same semantics on Tegra210 and
earlier as on Tegra186, where the Tegra SMMU was replaced by an ARM SMMU.
Users of the IOMMU API can now use the same code to share domains between
devices, whereas previously they had to attach each device individually.

Signed-off-by: Thierry Reding
Acked-by: Alex Williamson
---
 drivers/iommu/tegra-smmu.c | 124 +++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 120 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
index 3b6449e2cbf1..8885635d0a3b 100644
--- a/drivers/iommu/tegra-smmu.c
+++ b/drivers/iommu/tegra-smmu.c
@@ -20,6 +20,12 @@
 #include <soc/tegra/ahb.h>
 #include <soc/tegra/mc.h>
 
+struct tegra_smmu_group {
+	struct list_head list;
+	const struct tegra_smmu_group_soc *soc;
+	struct iommu_group *group;
+};
+
 struct tegra_smmu {
 	void __iomem *regs;
 	struct device *dev;
@@ -27,6 +33,8 @@ struct tegra_smmu {
 	struct tegra_mc *mc;
 	const struct tegra_smmu_soc *soc;
 
+	struct list_head groups;
+
 	unsigned long pfn_mask;
 	unsigned long tlb_mask;
 
@@ -703,19 +711,47 @@ static struct tegra_smmu *tegra_smmu_find(struct device_node *np)
 	return mc->smmu;
 }
 
+static int tegra_smmu_configure(struct tegra_smmu *smmu, struct device *dev,
+				struct of_phandle_args *args)
+{
+	const struct iommu_ops *ops = smmu->iommu.ops;
+	int err;
+
+	err = iommu_fwspec_init(dev, &dev->of_node->fwnode, ops);
+	if (err < 0) {
+		dev_err(dev, "failed to initialize fwspec: %d\n", err);
+		return err;
+	}
+
+	err = ops->of_xlate(dev, args);
+	if (err < 0) {
+		dev_err(dev, "failed to parse SW group ID: %d\n", err);
+		iommu_fwspec_free(dev);
+		return err;
+	}
+
+	return 0;
+}
+
 static int tegra_smmu_add_device(struct device *dev)
 {
 	struct device_node *np = dev->of_node;
+	struct tegra_smmu *smmu = NULL;
 	struct iommu_group *group;
 	struct of_phandle_args args;
 	unsigned int index = 0;
+	int err;
 
 	while (of_parse_phandle_with_args(np, "iommus", "#iommu-cells", index,
 					  &args) == 0) {
-		struct tegra_smmu *smmu;
-
 		smmu = tegra_smmu_find(args.np);
 		if (smmu) {
+			err = tegra_smmu_configure(smmu, dev, &args);
+			of_node_put(args.np);
+
+			if (err < 0)
+				return err;
+
 			/*
 			 * Only a single IOMMU master interface is currently
 			 * supported by the Linux kernel, so abort after the
@@ -728,9 +764,13 @@ static int tegra_smmu_add_device(struct device *dev)
 			break;
 		}
 
+		of_node_put(args.np);
 		index++;
 	}
 
+	if (!smmu)
+		return -ENODEV;
+
 	group = iommu_group_get_for_dev(dev);
 	if (IS_ERR(group))
 		return PTR_ERR(group);
@@ -751,6 +791,80 @@ static void tegra_smmu_remove_device(struct device *dev)
 	iommu_group_remove_device(dev);
 }
 
+static const struct tegra_smmu_group_soc *
+tegra_smmu_find_group(struct tegra_smmu *smmu, unsigned int swgroup)
+{
+	unsigned int i, j;
+
+	for (i = 0; i < smmu->soc->num_groups; i++)
+		for (j = 0; j < smmu->soc->groups[i].num_swgroups; j++)
+			if (smmu->soc->groups[i].swgroups[j] == swgroup)
+				return &smmu->soc->groups[i];
+
+	return NULL;
+}
+
+static struct iommu_group *tegra_smmu_group_get(struct tegra_smmu *smmu,
+						unsigned int swgroup)
+{
+	const struct tegra_smmu_group_soc *soc;
+	struct tegra_smmu_group *group;
+
+	soc = tegra_smmu_find_group(smmu, swgroup);
+	if (!soc)
+		return NULL;
+
+	mutex_lock(&smmu->lock);
+
+	list_for_each_entry(group, &smmu->groups, list)
+		if (group->soc == soc) {
+			mutex_unlock(&smmu->lock);
+			return group->group;
+		}
+
+	group = devm_kzalloc(smmu->dev, sizeof(*group), GFP_KERNEL);
+	if (!group) {
+		mutex_unlock(&smmu->lock);
+		return NULL;
+	}
+
+	INIT_LIST_HEAD(&group->list);
+	group->soc = soc;
+
+	group->group = iommu_group_alloc();
+	if (!group->group) {
+		devm_kfree(smmu->dev, group);
+		mutex_unlock(&smmu->lock);
+		return NULL;
+	}
+
+	list_add_tail(&group->list, &smmu->groups);
+	mutex_unlock(&smmu->lock);
+
+	return group->group;
+}
+
+static struct iommu_group *tegra_smmu_device_group(struct device *dev)
+{
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+	struct tegra_smmu *smmu = dev->archdata.iommu;
+	struct iommu_group *group;
+
+	group = tegra_smmu_group_get(smmu, fwspec->ids[0]);
+	if (!group)
+		group = generic_device_group(dev);
+
+	return group;
+}
+
+static int tegra_smmu_of_xlate(struct device *dev,
+			       struct of_phandle_args *args)
+{
+	u32 id = args->args[0];
+
+	return iommu_fwspec_add_ids(dev, &id, 1);
+}
+
 static const struct iommu_ops tegra_smmu_ops = {
 	.capable = tegra_smmu_capable,
 	.domain_alloc = tegra_smmu_domain_alloc,
@@ -759,12 +873,12 @@ static const struct iommu_ops tegra_smmu_ops = {
 	.detach_dev = tegra_smmu_detach_dev,
 	.add_device = tegra_smmu_add_device,
 	.remove_device = tegra_smmu_remove_device,
-	.device_group = generic_device_group,
+	.device_group = tegra_smmu_device_group,
 	.map = tegra_smmu_map,
 	.unmap = tegra_smmu_unmap,
 	.map_sg = default_iommu_map_sg,
 	.iova_to_phys = tegra_smmu_iova_to_phys,
-
+	.of_xlate = tegra_smmu_of_xlate,
 	.pgsize_bitmap = SZ_4K,
 };
 
@@ -913,6 +1027,7 @@ struct tegra_smmu *tegra_smmu_probe(struct device *dev,
 	if (!smmu->asids)
 		return ERR_PTR(-ENOMEM);
 
+	INIT_LIST_HEAD(&smmu->groups);
 	mutex_init(&smmu->lock);
 
 	smmu->regs = mc->regs;
@@ -954,6 +1069,7 @@ struct tegra_smmu *tegra_smmu_probe(struct device *dev,
 		return ERR_PTR(err);
 
 	iommu_device_set_ops(&smmu->iommu, &tegra_smmu_ops);
+	iommu_device_set_fwnode(&smmu->iommu, dev->fwnode);
 
 	err = iommu_device_register(&smmu->iommu);
 	if (err) {
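
Usage note (not part of the patch): with tegra_smmu_device_group() in place,
all devices behind the same swgroup resolve to the same iommu_group, so a
client driver can attach the whole group to one domain instead of attaching
each device individually. The sketch below illustrates this with the generic
IOMMU group/domain API; the helper name, bus choice and error handling are
illustrative assumptions, not code from this series.

#include <linux/err.h>
#include <linux/iommu.h>
#include <linux/platform_device.h>

/*
 * Hypothetical client-side helper: allocate a domain and attach the whole
 * IOMMU group of @dev to it, so every device sharing the swgroup gets the
 * same mappings.
 */
static struct iommu_domain *example_attach_domain(struct device *dev)
{
	struct iommu_group *group;
	struct iommu_domain *domain;
	int err;

	/* Devices behind the same swgroup resolve to one group. */
	group = iommu_group_get(dev);
	if (!group)
		return ERR_PTR(-ENODEV);

	domain = iommu_domain_alloc(&platform_bus_type);
	if (!domain) {
		iommu_group_put(group);
		return ERR_PTR(-ENOMEM);
	}

	/* Attach the group once; this covers all of its member devices. */
	err = iommu_attach_group(domain, group);
	iommu_group_put(group);
	if (err < 0) {
		iommu_domain_free(domain);
		return ERR_PTR(err);
	}

	return domain;
}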