From patchwork Wed Jan 24 12:45:33 2018
X-Patchwork-Submitter: Peter De Schrijver
X-Patchwork-Id: 865322
From: Peter De Schrijver <pdeschrijver@nvidia.com>
Subject: [PATCH v2 1/6] clk: tegra: dfll registration for multiple SoCs
Date: Wed, 24 Jan 2018 14:45:33 +0200
Message-ID: <1516797938-32044-3-git-send-email-pdeschrijver@nvidia.com>
In-Reply-To: <1516797938-32044-1-git-send-email-pdeschrijver@nvidia.com>
References: <1516797938-32044-1-git-send-email-pdeschrijver@nvidia.com>
X-NVConfidentiality: public
X-Mailing-List: linux-tegra@vger.kernel.org

In a future patch, support for the DFLL in Tegra210 will be introduced.
This requires support for more than one set of CVB tables and CPU
maximum-frequency tables.
Signed-off-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Reviewed-by: Jon Hunter
---
 drivers/clk/tegra/clk-tegra124-dfll-fcpu.c | 43 +++++++++++++++++++++++-------
 1 file changed, 33 insertions(+), 10 deletions(-)

diff --git a/drivers/clk/tegra/clk-tegra124-dfll-fcpu.c b/drivers/clk/tegra/clk-tegra124-dfll-fcpu.c
index 269d359..440eb8d 100644
--- a/drivers/clk/tegra/clk-tegra124-dfll-fcpu.c
+++ b/drivers/clk/tegra/clk-tegra124-dfll-fcpu.c
@@ -21,6 +21,7 @@
 #include <linux/cpu.h>
 #include <linux/err.h>
 #include <linux/kernel.h>
+#include <linux/of_device.h>
 #include <linux/of.h>
 #include <linux/platform_device.h>
 #include <soc/tegra/fuse.h>
@@ -28,8 +29,15 @@
 #include "clk-dfll.h"
 #include "cvb.h"
 
+struct dfll_fcpu_data {
+	const unsigned long *cpu_max_freq_table;
+	unsigned int cpu_max_freq_table_size;
+	const struct cvb_table *cpu_cvb_tables;
+	unsigned int cpu_cvb_tables_size;
+};
+
 /* Maximum CPU frequency, indexed by CPU speedo id */
-static const unsigned long cpu_max_freq_table[] = {
+static const unsigned long tegra124_cpu_max_freq_table[] = {
 	[0] = 2014500000UL,
 	[1] = 2320500000UL,
 	[2] = 2116500000UL,
@@ -82,16 +90,36 @@
 	},
 };
 
+static const struct dfll_fcpu_data tegra124_dfll_fcpu_data = {
+	.cpu_max_freq_table = tegra124_cpu_max_freq_table,
+	.cpu_max_freq_table_size = ARRAY_SIZE(tegra124_cpu_max_freq_table),
+	.cpu_cvb_tables = tegra124_cpu_cvb_tables,
+	.cpu_cvb_tables_size = ARRAY_SIZE(tegra124_cpu_cvb_tables)
+};
+
+static const struct of_device_id tegra124_dfll_fcpu_of_match[] = {
+	{
+		.compatible = "nvidia,tegra124-dfll",
+		.data = &tegra124_dfll_fcpu_data,
+	},
+	{ },
+};
+
 static int tegra124_dfll_fcpu_probe(struct platform_device *pdev)
 {
 	int process_id, speedo_id, speedo_value, err;
 	struct tegra_dfll_soc_data *soc;
+	const struct of_device_id *of_id;
+	const struct dfll_fcpu_data *fcpu_data;
+
+	of_id = of_match_device(tegra124_dfll_fcpu_of_match, &pdev->dev);
+	fcpu_data = of_id->data;
 
 	process_id = tegra_sku_info.cpu_process_id;
 	speedo_id = tegra_sku_info.cpu_speedo_id;
 	speedo_value = tegra_sku_info.cpu_speedo_value;
 
-	if (speedo_id >= ARRAY_SIZE(cpu_max_freq_table)) {
+	if (speedo_id >= fcpu_data->cpu_max_freq_table_size) {
 		dev_err(&pdev->dev, "unknown max CPU freq for speedo_id=%d\n",
 			speedo_id);
 		return -ENODEV;
@@ -107,10 +135,10 @@ static int tegra124_dfll_fcpu_probe(struct platform_device *pdev)
 		return -ENODEV;
 	}
 
-	soc->max_freq = cpu_max_freq_table[speedo_id];
+	soc->max_freq = fcpu_data->cpu_max_freq_table[speedo_id];
 
-	soc->cvb = tegra_cvb_add_opp_table(soc->dev, tegra124_cpu_cvb_tables,
-					   ARRAY_SIZE(tegra124_cpu_cvb_tables),
+	soc->cvb = tegra_cvb_add_opp_table(soc->dev, fcpu_data->cpu_cvb_tables,
+					   fcpu_data->cpu_cvb_tables_size,
 					   process_id, speedo_id, speedo_value,
 					   soc->max_freq);
 	if (IS_ERR(soc->cvb)) {
@@ -142,11 +170,6 @@ static int tegra124_dfll_fcpu_remove(struct platform_device *pdev)
 	return 0;
 }
 
-static const struct of_device_id tegra124_dfll_fcpu_of_match[] = {
-	{ .compatible = "nvidia,tegra124-dfll", },
-	{ },
-};
-
 static const struct dev_pm_ops tegra124_dfll_pm_ops = {
 	SET_RUNTIME_PM_OPS(tegra_dfll_runtime_suspend,
 			   tegra_dfll_runtime_resume, NULL)
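
A quick sketch of what this refactoring buys (not part of the patch itself):
once probe fetches its per-SoC data through of_match_device(), supporting
another SoC reduces to defining that SoC's tables and adding one more match
entry. The Tegra210 names and table values below are illustrative
placeholders, not the real data, which arrives later in this series; it
assumes the includes already present in this file.

static const unsigned long tegra210_cpu_max_freq_table[] = {
	[0] = 1912500000UL,	/* placeholder frequency, indexed by speedo id */
};

static const struct cvb_table tegra210_cpu_cvb_tables[] = {
	/* placeholder entry; -1 matches any speedo/process id */
	{ .speedo_id = -1, .process_id = -1 },
};

static const struct dfll_fcpu_data tegra210_dfll_fcpu_data = {
	.cpu_max_freq_table = tegra210_cpu_max_freq_table,
	.cpu_max_freq_table_size = ARRAY_SIZE(tegra210_cpu_max_freq_table),
	.cpu_cvb_tables = tegra210_cpu_cvb_tables,
	.cpu_cvb_tables_size = ARRAY_SIZE(tegra210_cpu_cvb_tables),
};

static const struct of_device_id tegra124_dfll_fcpu_of_match[] = {
	{
		.compatible = "nvidia,tegra124-dfll",
		.data = &tegra124_dfll_fcpu_data,
	},
	{
		/* assumed compatible string, for illustration only */
		.compatible = "nvidia,tegra210-dfll",
		.data = &tegra210_dfll_fcpu_data,
	},
	{ },
};

Note that probe dereferences of_id->data without a NULL check; that is safe
here because the driver can only bind through this match table, so
of_match_device() cannot return NULL in this path.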