From patchwork Tue Aug 16 09:49:32 2016
X-Patchwork-Submitter: Jon Hunter
X-Patchwork-Id: 659597
From: Jon Hunter
To: "Rafael J. Wysocki", Kevin Hilman, Ulf Hansson
CC: Thierry Reding, Kukjin Kim, Krzysztof Kozlowski, Alexander Aring, Eric Anholt, Jon Hunter
Subject: [PATCH 06/10] PM / Domains: Verify the PM domain is present when adding a provider
Date: Tue, 16 Aug 2016 10:49:32 +0100
Message-ID: <1471340976-5379-7-git-send-email-jonathanh@nvidia.com>
In-Reply-To: <1471340976-5379-1-git-send-email-jonathanh@nvidia.com>
References: <1471340976-5379-1-git-send-email-jonathanh@nvidia.com>
X-Mailing-List: linux-tegra@vger.kernel.org

When a PM domain provider is added, there is currently no way to tell
whether any of the PM domains associated with the provider are present.
Naturally, the provider should not be registered if its PM domains have
not been added. Nonetheless, verify that the PM domain(s) associated
with a provider are present when registering the provider.

This change adds a dependency on pm_genpd_present() when
CONFIG_PM_GENERIC_DOMAINS_OF is enabled, so ensure this function is
also available when CONFIG_PM_GENERIC_DOMAINS_OF is selected.

Signed-off-by: Jon Hunter
Acked-by: Ulf Hansson
---
 drivers/base/power/domain.c | 45 ++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 42 insertions(+), 3 deletions(-)

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index d09e45145a3d..50223ae0c9a7 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -586,7 +586,7 @@ static int __init genpd_poweroff_unused(void)
 }
 late_initcall(genpd_poweroff_unused);
 
-#ifdef CONFIG_PM_SLEEP
+#if defined(CONFIG_PM_SLEEP) || defined(CONFIG_PM_GENERIC_DOMAINS_OF)
 
 /**
  * pm_genpd_present - Check if the given PM domain has been initialized.
@@ -606,6 +606,10 @@ static bool pm_genpd_present(const struct generic_pm_domain *genpd)
 	return false;
 }
 
+#endif
+
+#ifdef CONFIG_PM_SLEEP
+
 static bool genpd_dev_active_wakeup(struct generic_pm_domain *genpd,
 				    struct device *dev)
 {
@@ -1453,7 +1457,23 @@ static int genpd_add_provider(struct device_node *np, genpd_xlate_t xlate,
 int of_genpd_add_provider_simple(struct device_node *np,
 				 struct generic_pm_domain *genpd)
 {
-	return genpd_add_provider(np, genpd_xlate_simple, genpd);
+	int ret;
+
+	if (!np || !genpd)
+		return -EINVAL;
+
+	mutex_lock(&gpd_list_lock);
+
+	if (!pm_genpd_present(genpd)) {
+		mutex_unlock(&gpd_list_lock);
+		return -EINVAL;
+	}
+
+	ret = genpd_add_provider(np, genpd_xlate_simple, genpd);
+
+	mutex_unlock(&gpd_list_lock);
+
+	return ret;
 }
 EXPORT_SYMBOL_GPL(of_genpd_add_provider_simple);
 
@@ -1465,7 +1485,26 @@ EXPORT_SYMBOL_GPL(of_genpd_add_provider_simple);
 int of_genpd_add_provider_onecell(struct device_node *np,
 				  struct genpd_onecell_data *data)
 {
-	return genpd_add_provider(np, genpd_xlate_onecell, data);
+	unsigned int i;
+	int ret;
+
+	if (!np || !data)
+		return -EINVAL;
+
+	mutex_lock(&gpd_list_lock);
+
+	for (i = 0; i < data->num_domains; i++) {
+		if (!pm_genpd_present(data->domains[i])) {
+			mutex_unlock(&gpd_list_lock);
+			return -EINVAL;
+		}
+	}
+
+	ret = genpd_add_provider(np, genpd_xlate_onecell, data);
+
+	mutex_unlock(&gpd_list_lock);
+
+	return ret;
 }
 EXPORT_SYMBOL_GPL(of_genpd_add_provider_onecell);