From patchwork Wed May 29 08:21:32 2019
X-Patchwork-Submitter: Joseph Lo
X-Patchwork-Id: 1106945
From: Joseph Lo
To: Thierry Reding, Peter De Schrijver, Jonathan Hunter, Rob Herring, Stephen Boyd
Subject: [PATCH V4 1/8] dt-bindings: memory: tegra: Add external memory controller binding for Tegra210
Date: Wed, 29 May 2019 16:21:32 +0800
Message-ID: <20190529082139.5581-2-josephl@nvidia.com>
In-Reply-To: <20190529082139.5581-1-josephl@nvidia.com>
References: <20190529082139.5581-1-josephl@nvidia.com>
Add the binding document for the external memory controller (EMC), which
communicates with external LPDDR4 devices. It includes the binding for the
EMC node and for an EMC table sub-node placed under the reserved-memory
node. The EMC table contains the data for the rates that the EMC supports.

Signed-off-by: Joseph Lo
---
v4:
- no change
v3:
- drop the bindings of EMC table
- add memory-region and reserved-memory node for EMC table
---
 .../nvidia,tegra210-emc.txt | 55 +++++++++++++++++++
 1 file changed, 55 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/memory-controllers/nvidia,tegra210-emc.txt

diff --git a/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra210-emc.txt b/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra210-emc.txt
new file mode 100644
index 000000000000..d65aeef2329c
--- /dev/null
+++ b/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra210-emc.txt
@@ -0,0 +1,55 @@
+NVIDIA Tegra210 SoC EMC (external memory controller)
+====================================================
+
+Device node
+===========
+Required properties :
+- compatible : should be "nvidia,tegra210-emc".
+- reg : physical base address and length of the controller's registers.
+- clocks : phandles of the possible source clocks.
+- clock-names : names of the possible source clocks.
+- interrupts : Should contain the EMC general interrupt.
+- memory-region : phandle to the reserved memory (see
+  Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt) which
+  contains a sub-node of EMC table.
+- nvidia,memory-controller : phandle of the memory controller.
+
+Reserved memory node
+====================
+Should contain a sub-node of EMC table with required properties:
+- compatible : should be "nvidia,tegra210-emc-table".
+- reg : physical address and length of the location of EMC table.
+
+Example:
+	reserved-memory {
+		#address-cells = <2>;
+		#size-cells = <2>;
+		ranges;
+
+		emc_table: emc-table@8be00000 {
+			compatible = "nvidia,tegra210-emc-table";
+			reg = <0x0 0x8be00000 0x0 0x10000>;
+			status = "okay";
+		};
+	};
+
+	external-memory-controller@7001b000 {
+		compatible = "nvidia,tegra210-emc";
+		reg = <0x0 0x7001b000 0x0 0x1000>,
+		      <0x0 0x7001e000 0x0 0x1000>,
+		      <0x0 0x7001f000 0x0 0x1000>;
+		clocks = <&tegra_car TEGRA210_CLK_EMC>,
+			 <&tegra_car TEGRA210_CLK_PLL_M>,
+			 <&tegra_car TEGRA210_CLK_PLL_C>,
+			 <&tegra_car TEGRA210_CLK_PLL_P>,
+			 <&tegra_car TEGRA210_CLK_CLK_M>,
+			 <&tegra_car TEGRA210_CLK_PLL_M_UD>,
+			 <&tegra_car TEGRA210_CLK_PLL_MB_UD>,
+			 <&tegra_car TEGRA210_CLK_PLL_MB>,
+			 <&tegra_car TEGRA210_CLK_PLL_P_UD>;
+		clock-names = "emc", "pll_m", "pll_c", "pll_p", "clk_m",
+			      "pll_m_ud", "pll_mb_ud", "pll_mb", "pll_p_ud";
+		interrupts = ;
+		memory-region = <&emc_table>;
+		nvidia,memory-controller = <&mc>;
+	};

From patchwork Wed May 29 08:21:33 2019
X-Patchwork-Submitter: Joseph Lo
X-Patchwork-Id: 1106943
From: Joseph Lo
To: Thierry Reding, Peter De Schrijver, Jonathan Hunter, Rob Herring, Stephen Boyd
Subject: [PATCH V4 2/8] clk: tegra: Add PLLP_UD and PLLMB_UD for Tegra210
Date: Wed, 29 May 2019 16:21:33 +0800
Message-ID: <20190529082139.5581-3-josephl@nvidia.com>
In-Reply-To: <20190529082139.5581-1-josephl@nvidia.com>
References: <20190529082139.5581-1-josephl@nvidia.com>
Introduce the low-jitter paths of PLLP and PLLMB, which can be used as EMC
clock sources.

Signed-off-by: Joseph Lo
---
v4:
- no change
v3:
- split to 3 patches from the previous version
---
 drivers/clk/tegra/clk-tegra210.c         | 11 +++++++++++
 include/dt-bindings/clock/tegra210-car.h |  4 ++--
 2 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/drivers/clk/tegra/clk-tegra210.c b/drivers/clk/tegra/clk-tegra210.c
index ed3c7df75d1e..a985faa4a3c1 100644
--- a/drivers/clk/tegra/clk-tegra210.c
+++ b/drivers/clk/tegra/clk-tegra210.c
@@ -3116,6 +3116,17 @@ static void __init tegra210_pll_init(void __iomem *clk_base,
 	clk_register_clkdev(clk, "pll_m_ud", NULL);
 	clks[TEGRA210_CLK_PLL_M_UD] = clk;
 
+	/* PLLMB_UD */
+	clk = clk_register_fixed_factor(NULL, "pll_mb_ud", "pll_mb",
+					CLK_SET_RATE_PARENT, 1, 1);
+	clk_register_clkdev(clk, "pll_mb_ud", NULL);
+	clks[TEGRA210_CLK_PLL_MB_UD] = clk;
+
+	/* PLLP_UD */
+	clk = clk_register_fixed_factor(NULL, "pll_p_ud", "pll_p",
+					0, 1, 1);
+	clks[TEGRA210_CLK_PLL_P_UD] = clk;
+
 	/* PLLU_VCO */
 	if (!tegra210_init_pllu()) {
 		clk = clk_register_fixed_rate(NULL, "pll_u_vco", "pll_ref", 0,
diff --git a/include/dt-bindings/clock/tegra210-car.h b/include/dt-bindings/clock/tegra210-car.h
index 6b77e721f6b1..832a89788525 100644
--- a/include/dt-bindings/clock/tegra210-car.h
+++ b/include/dt-bindings/clock/tegra210-car.h
@@ -349,8 +349,8 @@
 #define TEGRA210_CLK_PLL_P_OUT_XUSB 317
 #define TEGRA210_CLK_XUSB_SSP_SRC 318
 #define TEGRA210_CLK_PLL_RE_OUT1 319
-/* 320 */
-/* 321 */
+#define TEGRA210_CLK_PLL_MB_UD 320
+#define TEGRA210_CLK_PLL_P_UD 321
 #define TEGRA210_CLK_ISP 322
 #define TEGRA210_CLK_PLL_A_OUT_ADSP 323
 #define TEGRA210_CLK_PLL_A_OUT0_OUT_ADSP 324

From patchwork Wed May 29 08:21:34 2019
X-Patchwork-Submitter: Joseph Lo
X-Patchwork-Id: 1106948
From: Joseph Lo
To: Thierry Reding, Peter De Schrijver, Jonathan Hunter, Rob Herring, Stephen Boyd
Subject: [PATCH V4 3/8] clk: tegra: Export functions for EMC clock scaling
Date: Wed, 29 May 2019 16:21:34 +0800
Message-ID: <20190529082139.5581-4-josephl@nvidia.com>
In-Reply-To: <20190529082139.5581-1-josephl@nvidia.com>
References: <20190529082139.5581-1-josephl@nvidia.com>

Export functions that allow the EMC clock-scaling code to access the CAR
registers it needs. These functions will be called as part of the scaling
sequence.
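To illustrate how these exports are intended to be consumed, here is a
minimal, hypothetical sketch of a single EMC clock-change step built only
on the four helpers added below. The function name and the ordering of the
DLL programming are illustrative assumptions, not the actual Tegra210
sequence (which lands in patch 5/8).

/*
 * Illustrative sketch only: shows the intended use of the exported CAR
 * helpers. example_emc_switch() is a made-up name and the ordering is a
 * simplified assumption; the real switch sequence is added in patch 5/8.
 */
static void example_emc_switch(u32 next_clk_src, u32 next_dll_src, bool dll_on)
{
	/* read back the current CLK_SOURCE_EMC setting */
	u32 cur = tegra210_clk_emc_get_setting();

	/* program the EMC DLL source/divider, then gate or ungate its clock */
	tegra210_clk_emc_dll_update_setting(next_dll_src);
	tegra210_clk_emc_dll_enable(dll_on);

	/* finally switch CLK_SOURCE_EMC to the new parent/divider setting */
	if (cur != next_clk_src)
		tegra210_clk_emc_update_setting(next_clk_src);
}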
Signed-off-by: Joseph Lo
---
v4:
- no change
v3:
- split to 3 patches from the previous version
---
 drivers/clk/tegra/clk-tegra210.c | 36 ++++++++++++++++++++++++++++++++
 include/linux/clk/tegra.h        |  5 +++++
 2 files changed, 41 insertions(+)

diff --git a/drivers/clk/tegra/clk-tegra210.c b/drivers/clk/tegra/clk-tegra210.c
index a985faa4a3c1..1d52354820ca 100644
--- a/drivers/clk/tegra/clk-tegra210.c
+++ b/drivers/clk/tegra/clk-tegra210.c
@@ -47,6 +47,7 @@
 #define CLK_SOURCE_LA 0x1f8
 #define CLK_SOURCE_SDMMC2 0x154
 #define CLK_SOURCE_SDMMC4 0x164
+#define CLK_SOURCE_EMC_DLL 0x664
 
 #define PLLC_BASE 0x80
 #define PLLC_OUT 0x84
@@ -234,6 +235,10 @@
 #define RST_DFLL_DVCO 0x2f4
 #define DVFS_DFLL_RESET_SHIFT 0
 
+#define CLK_RST_CONTROLLER_CLK_OUT_ENB_X_SET 0x284
+#define CLK_RST_CONTROLLER_CLK_OUT_ENB_X_CLR 0x288
+#define CLK_OUT_ENB_X_CLK_ENB_EMC_DLL BIT(14)
+
 #define CLK_RST_CONTROLLER_RST_DEV_Y_SET 0x2a8
 #define CLK_RST_CONTROLLER_RST_DEV_Y_CLR 0x2ac
@@ -560,6 +565,37 @@ void tegra210_set_sata_pll_seq_sw(bool state)
 }
 EXPORT_SYMBOL_GPL(tegra210_set_sata_pll_seq_sw);
 
+void tegra210_clk_emc_dll_enable(bool flag)
+{
+	u32 offset = flag ? CLK_RST_CONTROLLER_CLK_OUT_ENB_X_SET :
+			    CLK_RST_CONTROLLER_CLK_OUT_ENB_X_CLR;
+
+	writel_relaxed(CLK_OUT_ENB_X_CLK_ENB_EMC_DLL, clk_base + offset);
+}
+EXPORT_SYMBOL_GPL(tegra210_clk_emc_dll_enable);
+
+void tegra210_clk_emc_dll_update_setting(u32 emc_dll_src_value)
+{
+	writel_relaxed(emc_dll_src_value, clk_base + CLK_SOURCE_EMC_DLL);
+}
+EXPORT_SYMBOL_GPL(tegra210_clk_emc_dll_update_setting);
+
+void tegra210_clk_emc_update_setting(u32 emc_src_value)
+{
+	writel_relaxed(emc_src_value, clk_base + CLK_SOURCE_EMC);
+}
+EXPORT_SYMBOL_GPL(tegra210_clk_emc_update_setting);
+
+u32 tegra210_clk_emc_get_setting(void)
+{
+	u32 val;
+
+	val = readl_relaxed(clk_base + CLK_SOURCE_EMC);
+
+	return val;
+}
+EXPORT_SYMBOL_GPL(tegra210_clk_emc_get_setting);
+
 static void tegra210_generic_mbist_war(struct tegra210_domain_mbist_war *mbist)
 {
 	u32 val;
diff --git a/include/linux/clk/tegra.h b/include/linux/clk/tegra.h
index afb9edfa5d58..b212c332b6e0 100644
--- a/include/linux/clk/tegra.h
+++ b/include/linux/clk/tegra.h
@@ -129,5 +129,10 @@ extern void tegra210_set_sata_pll_seq_sw(bool state);
 extern void tegra210_put_utmipll_in_iddq(void);
 extern void tegra210_put_utmipll_out_iddq(void);
 extern int tegra210_clk_handle_mbist_war(unsigned int id);
+extern void tegra210_clk_emc_dll_enable(bool flag);
+extern void tegra210_clk_emc_dll_update_setting(u32 emc_dll_src_value);
+extern void tegra210_clk_emc_update_setting(u32 emc_src_value);
+extern u32 tegra210_clk_emc_get_setting(void);
+
 #endif /* __LINUX_CLK_TEGRA_H_ */

From patchwork Wed May 29 08:21:35 2019
X-Patchwork-Submitter: Joseph Lo
X-Patchwork-Id: 1106949
From: Joseph Lo
To: Thierry Reding, Peter De Schrijver, Jonathan Hunter, Rob Herring, Stephen Boyd
Subject: [PATCH V4 4/8] memory: tegra: Add Tegra210 EMC clock driver
Date: Wed, 29 May 2019 16:21:35 +0800
Message-ID: <20190529082139.5581-5-josephl@nvidia.com>
In-Reply-To: <20190529082139.5581-1-josephl@nvidia.com>
References: <20190529082139.5581-1-josephl@nvidia.com>

This is the initial patch for the Tegra210 EMC clock driver; it does not
include the support code and the detailed sequence for clock scaling yet.
The driver is designed to support LPDDR4 SDRAM. Because LPDDR4 devices
need initial timing training before they can be used, the firmware
performs that training at an early boot stage. The trained table for the
supported rates is then passed to the kernel via DT, so the driver can
use it for clock-scaling support.

For higher rates (above 800 MHz), periodic training is needed for timing
compensation. So basically two clock-scaling methodologies are supported:
one follows the clock-change sequence to program the EMC table settings
into the EMC registers, and the other, when the rate needs periodic
training, starts a timer that re-trains periodically until the EMC scales
back down to a lower rate.

Based on the work of Peter De Schrijver.
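As a side note on how the driver recovers the current rate from the
CLK_SOURCE_EMC register (see emc_get_src_clk_rate() in the patch below):
the EMC 2x clock divisor field is 7.1 fixed point, so the effective divide
ratio is divisor/2 + 1. A minimal standalone sketch of that arithmetic,
with a hypothetical helper name, looks like this:

/*
 * Standalone sketch (hypothetical helper name): the EMC 2x clock divisor
 * field is 7.1 fixed point, so the effective ratio is (divisor / 2) + 1,
 * i.e. rate = parent * 2 / (divisor + 2), rounded up as the driver does.
 */
static unsigned long example_emc_rate(unsigned long parent_rate, u32 divisor)
{
	u64 rate = (u64)parent_rate * 2;

	return div64_u64(rate + divisor + 1, divisor + 2);
}

For example, with pll_m_ud running at 1600 MHz, a divisor field of 0 gives
an EMC rate of 1600 MHz and a divisor field of 2 gives 800 MHz.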
Signed-off-by: Joseph Lo
---
v4:
- remove the statistic data in debugfs
- add the tegra210_clk_register_emc API to stay compatible with the case
  where the kernel still uses an older DTB that doesn't have the EMC node,
  so the MC and EMC clocks can still be registered successfully
v3:
- address almost all the comments from the previous version
- remove the DT parser of the EMC table
- the EMC table is now passed as a binary blob
---
 drivers/memory/tegra/Kconfig        |  10 +
 drivers/memory/tegra/Makefile       |   1 +
 drivers/memory/tegra/tegra210-emc.c | 671 ++++++++++++++++++++++++++++
 drivers/memory/tegra/tegra210-emc.h | 156 +++++++
 include/soc/tegra/emc.h             |   2 +
 5 files changed, 840 insertions(+)
 create mode 100644 drivers/memory/tegra/tegra210-emc.c
 create mode 100644 drivers/memory/tegra/tegra210-emc.h

diff --git a/drivers/memory/tegra/Kconfig b/drivers/memory/tegra/Kconfig
index 4680124ddcab..9d051bcdbee3 100644
--- a/drivers/memory/tegra/Kconfig
+++ b/drivers/memory/tegra/Kconfig
@@ -26,3 +26,13 @@ config TEGRA124_EMC
 	  Tegra124 chips. The EMC controls the external DRAM on the board.
 	  This driver is required to change memory timings / clock rate for
 	  external memory.
+
+config TEGRA210_EMC
+	bool "NVIDIA Tegra210 External Memory Controller driver"
+	default y
+	depends on TEGRA_MC && ARCH_TEGRA_210_SOC
+	help
+	  This driver is for the External Memory Controller (EMC) found on
+	  Tegra210 chips. The EMC controls the external DRAM on the board.
+	  This driver is required to change memory timings / clock rate for
+	  external memory.
diff --git a/drivers/memory/tegra/Makefile b/drivers/memory/tegra/Makefile
index 3971a6b7c487..f78bbb7cd16f 100644
--- a/drivers/memory/tegra/Makefile
+++ b/drivers/memory/tegra/Makefile
@@ -12,4 +12,5 @@ obj-$(CONFIG_TEGRA_MC) += tegra-mc.o
 
 obj-$(CONFIG_TEGRA20_EMC) += tegra20-emc.o
 obj-$(CONFIG_TEGRA124_EMC) += tegra124-emc.o
+obj-$(CONFIG_TEGRA210_EMC) += tegra210-emc.o
 obj-$(CONFIG_ARCH_TEGRA_186_SOC) += tegra186.o
diff --git a/drivers/memory/tegra/tegra210-emc.c b/drivers/memory/tegra/tegra210-emc.c
new file mode 100644
index 000000000000..80ea14d1e6ce
--- /dev/null
+++ b/drivers/memory/tegra/tegra210-emc.c
@@ -0,0 +1,671 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2015-2019, NVIDIA CORPORATION. All rights reserved.
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "mc.h" +#include "tegra210-emc.h" + +#define CLK_RST_CONTROLLER_CLK_SOURCE_EMC 0x19c +#define EMC_CLK_EMC_2X_CLK_SRC_SHIFT 29 +#define EMC_CLK_EMC_2X_CLK_SRC_MASK \ + (0x7 << EMC_CLK_EMC_2X_CLK_SRC_SHIFT) +#define EMC_CLK_MC_EMC_SAME_FREQ BIT(16) +#define EMC_CLK_EMC_2X_CLK_DIVISOR_SHIFT 0 +#define EMC_CLK_EMC_2X_CLK_DIVISOR_MASK \ + (0xff << EMC_CLK_EMC_2X_CLK_DIVISOR_SHIFT) + +#define MC_EMEM_ARB_MISC0_EMC_SAME_FREQ BIT(27) + +#define TEGRA_EMC_MAX_FREQS 16 +#define TEGRA210_EMC_SUSPEND_RATE 204000000 + +#define CLK_CHANGE_DELAY 100 +#define TRAINING_TIME 100 + +enum { + TEGRA_EMC_SRC_PLLM, + TEGRA_EMC_SRC_PLLC, + TEGRA_EMC_SRC_PLLP, + TEGRA_EMC_SRC_CLKM, + TEGRA_EMC_SRC_PLLM_UD, + TEGRA_EMC_SRC_PLLMB_UD, + TEGRA_EMC_SRC_PLLMB, + TEGRA_EMC_SRC_PLLP_UD, + TEGRA_EMC_SRC_COUNT, +}; + +struct emc_sel { + struct clk *input; + u32 value; + unsigned long input_rate; + + struct clk *input_b; // second source of PLLM: PLLMB + u32 value_b; + unsigned long input_rate_b; +}; + +static struct emc_sel *emc_clk_sel; +static struct clk *emc_src[TEGRA_EMC_SRC_COUNT]; +static const char *emc_src_names[TEGRA_EMC_SRC_COUNT] = { + [TEGRA_EMC_SRC_PLLM] = "pll_m", + [TEGRA_EMC_SRC_PLLC] = "pll_c", + [TEGRA_EMC_SRC_PLLP] = "pll_p", + [TEGRA_EMC_SRC_CLKM] = "clk_m", + [TEGRA_EMC_SRC_PLLM_UD] = "pll_m_ud", + [TEGRA_EMC_SRC_PLLMB_UD] = "pll_mb_ud", + [TEGRA_EMC_SRC_PLLMB] = "pll_mb", + [TEGRA_EMC_SRC_PLLP_UD] = "pll_p_ud", +}; + +static const struct supported_sequence supported_seqs[] = { + { + 0, + NULL, + NULL, + NULL + } +}; +static const struct supported_sequence *seq = supported_seqs; +static DEFINE_SPINLOCK(emc_access_lock); + +static inline struct tegra_emc *clk_hw_to_emc(struct clk_hw *hw) +{ + return container_of(hw, struct tegra_emc, hw); +} + +u32 emc_readl(struct tegra_emc *emc, unsigned long offset) +{ + return readl_relaxed(emc->emc_base[REG_EMC] + offset); +} + +u32 emc_readl_per_ch(struct tegra_emc *emc, int type, + unsigned long offset) +{ + u32 val = 0; + + switch (type) { + case REG_EMC: + case REG_EMC0: + val = readl_relaxed(emc->emc_base[REG_EMC] + offset); + break; + case REG_EMC1: + val = readl_relaxed(emc->emc_base[REG_EMC1] + offset); + break; + } + + return val; +} + +static inline u32 emc_src_val(u32 val) +{ + return (val & EMC_CLK_EMC_2X_CLK_SRC_MASK) >> + EMC_CLK_EMC_2X_CLK_SRC_SHIFT; +} + +static inline u32 emc_div_val(u32 val) +{ + return (val & EMC_CLK_EMC_2X_CLK_DIVISOR_MASK) >> + EMC_CLK_EMC_2X_CLK_DIVISOR_SHIFT; +} + +static void emc_train_func(struct timer_list *tmr) +{ + unsigned long flags; + struct tegra_emc *emc = from_timer(emc, tmr, training_timer); + + if (!emc->current_timing) + return; + + spin_lock_irqsave(&emc_access_lock, flags); + if (seq->periodic_compensation) + seq->periodic_compensation(emc); + spin_unlock_irqrestore(&emc_access_lock, flags); + + mod_timer(&emc->training_timer, + jiffies + msecs_to_jiffies(emc->timer_period_training)); +} + +static void emc_training_timer_start(struct tegra_emc *emc) +{ + mod_timer(&emc->training_timer, + jiffies + msecs_to_jiffies(emc->timer_period_training)); +} + +static void emc_training_timer_stop(struct tegra_emc *emc) +{ + del_timer(&emc->training_timer); +} + +static void emc_set_clock(struct tegra_emc *emc, u32 clksrc) +{ + seq->set_clock(emc, clksrc); + + if (emc->next_timing->periodic_training) + emc_training_timer_start(emc); + else + emc_training_timer_stop(emc); +} + +static inline 
unsigned long emc_get_src_clk_rate(void) +{ + int div; + u32 val; + unsigned long rate; + + val = tegra210_clk_emc_get_setting(); + rate = clk_get_rate(emc_src[emc_src_val(val)]); + div = emc_div_val(val); + div += 2; + rate *= 2; + rate += div - 1; + do_div(rate, div); + + return rate; +} + +static int emc_table_lookup(struct tegra_emc *emc, unsigned long rate) +{ + int i; + + for (i = 0; i < emc->emc_table_size; i++) { + if (emc_clk_sel[i].input == NULL) + continue; + + if (emc->emc_table[i].rate == rate) + return i; + } + + return -EINVAL; +} + +static struct clk *emc_predict_parent(struct tegra_emc *emc, + unsigned long rate) +{ + struct clk *old_parent, *new_parent; + unsigned long parent_rate; + int idx; + + idx = emc_table_lookup(emc, rate / 1000); + if (idx < 0) + return ERR_PTR(-EINVAL); + + parent_rate = emc_clk_sel[idx].input_rate * 1000; + new_parent = emc_clk_sel[idx].input; + old_parent = clk_get_parent(emc->emc_clk); + + if (parent_rate == clk_get_rate(old_parent)) + return old_parent; + + if (clk_is_match(new_parent, old_parent)) + new_parent = emc_clk_sel[idx].input_b; + + if (parent_rate != clk_get_rate(new_parent)) + clk_set_rate(new_parent, parent_rate); + + return new_parent; +} + +static int emc_set_rate(struct tegra_emc *emc, unsigned long rate) +{ + int i; + unsigned long flags; + s64 last_change_delay; + struct clk *parent; + + if (emc->emc_suspend) + rate = TEGRA210_EMC_SUSPEND_RATE; + + if (rate == emc->current_timing->rate) + return 0; + + i = emc_table_lookup(emc, rate / 1000); + + if (i < 0) + return i; + + if (rate > 204000000 && !emc->emc_table[i].trained) + return -EINVAL; + + parent = emc_predict_parent(emc, rate); + if (clk_is_match(parent, emc_clk_sel[i].input)) + emc->clk_setting = emc_clk_sel[i].value; + else + emc->clk_setting = emc_clk_sel[i].value_b; + + emc->next_timing = &emc->emc_table[i]; + last_change_delay = ktime_us_delta(ktime_get(), emc->clkchange_time); + if ((last_change_delay >= 0) && + (last_change_delay < emc->clkchange_delay)) + udelay(emc->clkchange_delay - (int)last_change_delay); + + spin_lock_irqsave(&emc_access_lock, flags); + emc_set_clock(emc, emc->clk_setting); + emc->clkchange_time = ktime_get(); + emc->current_timing = &emc->emc_table[i]; + spin_unlock_irqrestore(&emc_access_lock, flags); + + return 0; +} + +#ifdef CONFIG_DEBUG_FS +static int debug_emc_get_rate(void *data, u64 *val) +{ + struct clk *c = data; + + *val = clk_get_rate(c); + + return 0; +} + +static int debug_emc_set_rate(void *data, u64 val) +{ + struct clk *c = data; + + return clk_set_rate(c, val); +} +DEFINE_SIMPLE_ATTRIBUTE(emc_rate_fops, debug_emc_get_rate, + debug_emc_set_rate, "%llu\n"); + +static int tegra_emc_debug_init(struct tegra_emc *emc) +{ + struct dentry *emc_debugfs_root; + + emc_debugfs_root = debugfs_create_dir("tegra_emc", NULL); + if (!emc_debugfs_root) + return -ENOMEM; + + if (!debugfs_create_file("rate", 0644, emc_debugfs_root, emc->emc_clk, + &emc_rate_fops)) + goto err_out; + + return 0; + +err_out: + debugfs_remove_recursive(emc_debugfs_root); + return -ENOMEM; +} +#endif /* CONFIG_DEBUG_FS */ + +static u8 clk_emc_get_parent(struct clk_hw *hw) +{ + struct tegra_emc *emc = clk_hw_to_emc(hw); + + if (!emc->clk_setting) + emc->clk_setting = tegra210_clk_emc_get_setting(); + + return emc_src_val(emc->clk_setting); +} + +static unsigned long clk_emc_recalc_rate(struct clk_hw *hw, + unsigned long parent_rate) +{ + struct tegra_emc *emc = clk_hw_to_emc(hw); + + if (!emc->emc_table_size || !seq) { + u32 emc_setting = 
tegra210_clk_emc_get_setting(); + + return clk_get_rate(emc_src[emc_src_val(emc_setting)]); + } + + return emc->current_timing->rate * 1000; +} + +static long clk_emc_round_rate(struct clk_hw *hw, unsigned long rate, + unsigned long *prate) +{ + struct tegra_emc *emc = clk_hw_to_emc(hw); + int i; + + if (!emc->emc_table_size || !seq) { + u32 emc_setting = tegra210_clk_emc_get_setting(); + return clk_get_rate(emc_src[emc_src_val(emc_setting)]); + } + + if (emc->emc_suspend) + return TEGRA210_EMC_SUSPEND_RATE; + + rate /= 1000; + + for (i = 0; i < emc->emc_table_size; i++) { + if (emc->emc_table[i].rate >= rate) + return emc->emc_table[i].rate * 1000; + } + + return emc->emc_table[i - 1].rate * 1000; +} + +static int clk_emc_set_rate(struct clk_hw *hw, unsigned long rate, + unsigned long parent_rate) +{ + struct tegra_emc *emc = clk_hw_to_emc(hw); + struct clk *old_parent, *new_parent; + int ret = -EINVAL; + + if (!emc->emc_table_size || !seq) + return ret; + + if (emc->emc_suspend) + rate = TEGRA210_EMC_SUSPEND_RATE; + + old_parent = clk_get_parent(hw->clk); + new_parent = emc_predict_parent(emc, rate); + if (IS_ERR(new_parent)) + goto out; + + if (!clk_is_match(new_parent, old_parent)) + clk_prepare_enable(new_parent); + + ret = emc_set_rate(emc, rate); + if (ret) { + if (new_parent != old_parent) + clk_disable_unprepare(new_parent); + goto out; + } + + if (!clk_is_match(new_parent, old_parent)) { + clk_hw_reparent(hw, __clk_get_hw(new_parent)); + clk_disable_unprepare(old_parent); + } + +out: + return ret; +} + +static const struct clk_ops tegra_clk_emc_ops = { + .get_parent = clk_emc_get_parent, + .recalc_rate = clk_emc_recalc_rate, + .round_rate = clk_emc_round_rate, + .set_rate = clk_emc_set_rate, +}; + +static int find_matching_input(struct emc_table *table, struct emc_sel *sel) +{ + u32 div_value; + u32 src_value; + unsigned long input_rate = 0; + struct clk *input_clk; + + div_value = emc_div_val(table->clk_src_emc); + src_value = emc_src_val(table->clk_src_emc); + + if (div_value & 0x1) { + pr_warn("Tegra EMC: invalid odd divider for EMC rate %u\n", + table->rate); + return -EINVAL; + } + + if (!(table->clk_src_emc & EMC_CLK_MC_EMC_SAME_FREQ) != + !(MC_EMEM_ARB_MISC0_EMC_SAME_FREQ & + table->burst_mc_regs[MC_EMEM_ARB_MISC0_INDEX])) { + pr_warn("Tegra EMC: ambiguous EMC to MC ratio for rate %u\n", + table->rate); + return -EINVAL; + } + + input_clk = emc_src[src_value]; + if (input_clk == emc_src[TEGRA_EMC_SRC_PLLM] + || input_clk == emc_src[TEGRA_EMC_SRC_PLLM_UD]) { + input_rate = table->rate * (1 + div_value / 2); + } else { + input_rate = clk_get_rate(input_clk) / 1000; + if (input_rate != (table->rate * (1 + div_value / 2))) { + pr_warn("Tegra EMC: rate %u doesn't match input\n", + table->rate); + return -EINVAL; + } + } + + sel->input = input_clk; + sel->input_rate = input_rate; + sel->value = table->clk_src_emc; + sel->input_b = input_clk; + sel->input_rate_b = input_rate; + sel->value_b = table->clk_src_emc; + + if (input_clk == emc_src[TEGRA_EMC_SRC_PLLM]) { + sel->input_b = emc_src[TEGRA_EMC_SRC_PLLMB]; + sel->value_b = table->clk_src_emc & + ~EMC_CLK_EMC_2X_CLK_SRC_MASK; + sel->value_b |= TEGRA_EMC_SRC_PLLMB << + EMC_CLK_EMC_2X_CLK_SRC_SHIFT; + } + + if (input_clk == emc_src[TEGRA_EMC_SRC_PLLM_UD]) { + sel->input_b = emc_src[TEGRA_EMC_SRC_PLLMB_UD]; + sel->value_b = table->clk_src_emc & + ~EMC_CLK_EMC_2X_CLK_SRC_MASK; + sel->value_b |= TEGRA_EMC_SRC_PLLMB_UD << + EMC_CLK_EMC_2X_CLK_SRC_SHIFT; + } + + return 0; +} + +static int tegra210_emc_probe(struct platform_device 
*pdev) +{ + int i; + unsigned long table_rate; + unsigned long current_rate; + struct clk *emc_clk; + struct device_node *np; + struct platform_device *mc; + struct resource res; + struct tegra_emc *emc; + void *table_addr; + + emc_clk = devm_clk_get(&pdev->dev, "emc"); + if (IS_ERR(emc_clk)) + return PTR_ERR(emc_clk); + emc = clk_hw_to_emc(__clk_get_hw(emc_clk)); + + np = of_parse_phandle(pdev->dev.of_node, "nvidia,memory-controller", 0); + if (!np) { + dev_err(&pdev->dev, "could not get memory controller\n"); + return -ENOENT; + } + + mc = of_find_device_by_node(np); + of_node_put(np); + if (!mc) + return -ENOENT; + + emc->mc = platform_get_drvdata(mc); + if (!emc->mc) + return -EPROBE_DEFER; + + emc->emc_base[REG_EMC] = devm_platform_ioremap_resource(pdev, 0); + emc->emc_base[REG_EMC0] = devm_platform_ioremap_resource(pdev, 1); + emc->emc_base[REG_EMC1] = devm_platform_ioremap_resource(pdev, 2); + + for (i = 0; i < TEGRA_EMC_SRC_COUNT; i++) { + if (!IS_ERR(emc_src[i])) + clk_put(emc_src[i]); + + emc_src[i] = devm_clk_get(&pdev->dev, emc_src_names[i]); + if (IS_ERR(emc_src[i])) { + dev_err(&pdev->dev, "Can not find EMC source clock\n"); + return -ENODATA; + } + } + + np = of_parse_phandle(pdev->dev.of_node, "memory-region", 0); + if (!np) { + dev_err(&pdev->dev, "could not find EMC table\n"); + return -ENODATA; + } + + if (!of_device_is_compatible(np, "nvidia,tegra210-emc-table") || + !of_device_is_available(np)) { + dev_err(&pdev->dev, "EMC table is invalid\n"); + return -ENODATA; + } + + of_address_to_resource(np, 0, &res); + table_addr = memremap(res.start, resource_size(&res), MEMREMAP_WB); + of_node_put(np); + if (!table_addr) { + dev_err(&pdev->dev, "could not map EMC table\n"); + return -ENOMEM; + } + emc->emc_table = (struct emc_table *)table_addr; + + for (i = 0; i < TEGRA_EMC_MAX_FREQS; i++) + if (emc->emc_table[i].rev != 0) + emc->emc_table_size++; + else + break; + + /* check the supported sequence */ + while (seq->table_rev) { + if (seq->table_rev == emc->emc_table[0].rev) + break; + seq++; + } + if (!seq->set_clock) { + seq = NULL; + dev_err(&pdev->dev, "Invalid EMC sequence for table Rev. 
%d\n", + emc->emc_table[0].rev); + return -ENODATA; + } + + emc_clk_sel = devm_kcalloc(&pdev->dev, + emc->emc_table_size, + sizeof(struct emc_sel), + GFP_KERNEL); + + /* calculate the rate from source clock */ + current_rate = emc_get_src_clk_rate() / 1000; + + /* validate the table */ + for (i = 0; i < emc->emc_table_size; i++) { + table_rate = emc->emc_table[i].rate; + if (!table_rate) + continue; + + if (i && ((table_rate <= emc->emc_table[i-1].rate) || + (emc->emc_table[i].min_volt < + emc->emc_table[i-1].min_volt))) + continue; + + if (emc->emc_table[i].rev != emc->emc_table[0].rev) + continue; + + if (find_matching_input(&emc->emc_table[i], &emc_clk_sel[i])) + continue; + + if (table_rate == current_rate) + emc->current_timing = &emc->emc_table[i]; + } + + emc->clk_setting = tegra210_clk_emc_get_setting(); + emc->clkchange_delay = CLK_CHANGE_DELAY; + emc->timer_period_training = TRAINING_TIME; + emc->dev = &pdev->dev; + dev_set_drvdata(emc->dev, emc); + + /* EMC training timer */ + timer_setup(&emc->training_timer, emc_train_func, 0); + +#ifdef CONFIG_DEBUG_FS + tegra_emc_debug_init(emc); +#endif + + return 0; +} + +struct clk *tegra210_clk_register_emc(void) +{ + struct clk_init_data init; + struct clk *clk; + struct tegra_emc *emc; + int i; + + emc = kzalloc(sizeof(*emc), GFP_KERNEL); + if (!emc) + return ERR_PTR(-ENOMEM); + + for (i = 0; i < TEGRA_EMC_SRC_COUNT; i++) + emc_src[i] = clk_get_sys(NULL, emc_src_names[i]); + + init.name = "emc"; + init.ops = &tegra_clk_emc_ops; + init.flags = CLK_IS_CRITICAL | CLK_GET_RATE_NOCACHE; + init.parent_names = emc_src_names; + init.num_parents = ARRAY_SIZE(emc_src_names); + emc->hw.init = &init; + + clk = clk_register(NULL, &emc->hw); + if (IS_ERR(clk)) { + kfree(emc); + return clk; + } + emc->emc_clk = clk; + + return clk; +} +EXPORT_SYMBOL_GPL(tegra210_clk_register_emc); + +#ifdef CONFIG_PM_SLEEP +static int tegra210_emc_suspend(struct device *dev) +{ + struct tegra_emc *emc = dev_get_drvdata(dev); + + emc->emc_suspend = true; + emc->emc_resume_rate = clk_get_rate(emc->emc_clk); + clk_set_rate(emc->emc_clk, TEGRA210_EMC_SUSPEND_RATE); + + pr_debug("%s at rate %lu\n", __func__, clk_get_rate(emc->emc_clk)); + + return 0; +} + +static int tegra210_emc_resume(struct device *dev) +{ + struct tegra_emc *emc = dev_get_drvdata(dev); + + emc->emc_suspend = false; + clk_set_rate(emc->emc_clk, emc->emc_resume_rate); + + pr_debug("%s at rate %lu\n", __func__, clk_get_rate(emc->emc_clk)); + + return 0; +} +#endif + +static const struct dev_pm_ops tegra210_emc_pm_ops = { + SET_SYSTEM_SLEEP_PM_OPS(tegra210_emc_suspend, tegra210_emc_resume) +}; + +static const struct of_device_id tegra210_emc_of_match[] = { + { .compatible = "nvidia,tegra210-emc", }, + { }, +}; + +static struct platform_driver tegra210_emc_driver = { + .driver = { + .name = "tegra210-emc", + .of_match_table = tegra210_emc_of_match, + .pm = &tegra210_emc_pm_ops, + .suppress_bind_attrs = true, + }, + .probe = tegra210_emc_probe, +}; + +static int __init tegra210_emc_init(void) +{ + return platform_driver_register(&tegra210_emc_driver); +} +subsys_initcall(tegra210_emc_init); diff --git a/drivers/memory/tegra/tegra210-emc.h b/drivers/memory/tegra/tegra210-emc.h new file mode 100644 index 000000000000..029f8afb2d66 --- /dev/null +++ b/drivers/memory/tegra/tegra210-emc.h @@ -0,0 +1,156 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (c) 2015-2019, NVIDIA CORPORATION. All rights reserved. 
+ */ + +#ifndef _TEGRA210_EMC_REG_H +#define _TEGRA210_EMC_REG_H + +#include +#include +#include + +#include "mc.h" + +enum burst_mc_regs_list { + MC_EMEM_ARB_MISC0_INDEX = 20, +}; + +enum { + REG_EMC, + REG_EMC0, + REG_EMC1, + REG_TYPE_NUM, +}; + +enum { + C0D0U0, + C0D0U1, + C0D1U0, + C0D1U1, + C1D0U0, + C1D0U1, + C1D1U0, + C1D1U1, + DRAM_CLKTREE_NUM, +}; + +enum { + VREF_REGS_PER_CH_SIZE = 4, + DRAM_TIMINGS_NUM = 5, + BURST_REGS_PER_CH_SIZE = 8, + TRIM_REGS_PER_CH_SIZE = 10, + PTFV_ARRAY_SIZE = 12, + SAVE_RESTORE_MOD_REGS_SIZE = 12, + TRAINING_MOD_REGS_SIZE = 20, + BURST_UP_DOWN_REGS_SIZE = 24, + BURST_MC_REGS_SIZE = 33, + TRIM_REGS_SIZE = 138, + BURST_REGS_SIZE = 221, +}; + +struct emc_table { + u32 rev; + const char dvfs_ver[60]; + u32 rate; + u32 min_volt; + u32 gpu_min_volt; + const char clock_src[32]; + u32 clk_src_emc; + u32 needs_training; + u32 training_pattern; + u32 trained; + + u32 periodic_training; + u32 trained_dram_clktree[DRAM_CLKTREE_NUM]; + u32 current_dram_clktree[DRAM_CLKTREE_NUM]; + u32 run_clocks; + u32 tree_margin; + + u32 num_burst; + u32 num_burst_per_ch; + u32 num_trim; + u32 num_trim_per_ch; + u32 num_mc_regs; + u32 num_up_down; + u32 vref_num; + u32 training_mod_num; + u32 dram_timing_num; + + u32 ptfv_list[PTFV_ARRAY_SIZE]; + + u32 burst_regs[BURST_REGS_SIZE]; + u32 burst_reg_per_ch[BURST_REGS_PER_CH_SIZE]; + u32 shadow_regs_ca_train[BURST_REGS_SIZE]; + u32 shadow_regs_quse_train[BURST_REGS_SIZE]; + u32 shadow_regs_rdwr_train[BURST_REGS_SIZE]; + + u32 trim_regs[TRIM_REGS_SIZE]; + u32 trim_perch_regs[TRIM_REGS_PER_CH_SIZE]; + + u32 vref_perch_regs[VREF_REGS_PER_CH_SIZE]; + + u32 dram_timings[DRAM_TIMINGS_NUM]; + u32 training_mod_regs[TRAINING_MOD_REGS_SIZE]; + u32 save_restore_mod_regs[SAVE_RESTORE_MOD_REGS_SIZE]; + u32 burst_mc_regs[BURST_MC_REGS_SIZE]; + u32 la_scale_regs[BURST_UP_DOWN_REGS_SIZE]; + + u32 min_mrs_wait; + u32 emc_mrw; + u32 emc_mrw2; + u32 emc_mrw3; + u32 emc_mrw4; + u32 emc_mrw9; + u32 emc_mrs; + u32 emc_emrs; + u32 emc_emrs2; + u32 emc_auto_cal_config; + u32 emc_auto_cal_config2; + u32 emc_auto_cal_config3; + u32 emc_auto_cal_config4; + u32 emc_auto_cal_config5; + u32 emc_auto_cal_config6; + u32 emc_auto_cal_config7; + u32 emc_auto_cal_config8; + u32 emc_cfg_2; + u32 emc_sel_dpd_ctrl; + u32 emc_fdpd_ctrl_cmd_no_ramp; + u32 dll_clk_src; + u32 clk_out_enb_x_0_clk_enb_emc_dll; + u32 latency; +}; + +struct tegra_emc { + struct clk_hw hw; + struct clk *emc_clk; + struct device *dev; + struct timer_list training_timer; + + struct tegra_mc *mc; + + void __iomem *emc_base[REG_TYPE_NUM]; + + struct emc_table *current_timing; + struct emc_table *next_timing; + + struct emc_table *emc_table; + unsigned int emc_table_size; + + u32 clk_setting; + ktime_t clkchange_time; + int clkchange_delay; + u32 timer_period_training; + + bool emc_suspend; + unsigned long emc_resume_rate; +}; + +struct supported_sequence { + u8 table_rev; + void (*set_clock)(struct tegra_emc *emc, u32 clksrc); + u32 (*periodic_compensation)(struct tegra_emc *emc); + char *seq_rev; +}; + +#endif diff --git a/include/soc/tegra/emc.h b/include/soc/tegra/emc.h index f6db33b579ec..9f16861a715b 100644 --- a/include/soc/tegra/emc.h +++ b/include/soc/tegra/emc.h @@ -16,4 +16,6 @@ int tegra_emc_prepare_timing_change(struct tegra_emc *emc, void tegra_emc_complete_timing_change(struct tegra_emc *emc, unsigned long rate); +struct clk *tegra210_clk_register_emc(void); + #endif /* __SOC_TEGRA_EMC_H__ */ From patchwork Wed May 29 08:21:36 2019 Content-Type: text/plain; charset="utf-8" 
X-Patchwork-Submitter: Joseph Lo
X-Patchwork-Id: 1106950
From: Joseph Lo
To: Thierry Reding, Peter De Schrijver, Jonathan Hunter, Rob Herring, Stephen Boyd
Subject: [PATCH V4 5/8] memory: tegra: Add EMC scaling support code for Tegra210
Date: Wed, 29 May 2019 16:21:36 +0800
Message-ID: <20190529082139.5581-6-josephl@nvidia.com>
In-Reply-To: <20190529082139.5581-1-josephl@nvidia.com>
References: <20190529082139.5581-1-josephl@nvidia.com>

This patch adds the required APIs and variables for the EMC scaling
sequence code on Tegra210.

Based on the work of Peter De Schrijver.
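For orientation, this is roughly how a sequence implementation plugs into
the supported_seqs[] table introduced in patch 4/8. The function names and
the table revision below are placeholders for illustration only, not the
actual entries added by this series.

/*
 * Placeholder sketch: how a clock-change sequence hooks into the
 * supported_seqs[] table from patch 4/8. Names and the table revision
 * are illustrative assumptions only.
 */
static void example_set_clock(struct tegra_emc *emc, u32 clksrc)
{
	/* program the shadow/burst registers from emc->next_timing,
	 * then switch CLK_SOURCE_EMC to clksrc */
}

static u32 example_periodic_compensation(struct tegra_emc *emc)
{
	/* re-run timing compensation while running at a trained high rate */
	return 0;
}

static const struct supported_sequence example_seq = {
	.table_rev = 7,			/* placeholder table revision */
	.set_clock = example_set_clock,
	.periodic_compensation = example_periodic_compensation,
	.seq_rev = "example",
};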
Signed-off-by: Joseph Lo --- v4: - fix the API with generic naming - use 'u16' in 'struct emc_table_register_offset' --- drivers/memory/tegra/tegra210-emc.c | 1365 +++++++++++++++++++++++++++ drivers/memory/tegra/tegra210-emc.h | 718 ++++++++++++++ 2 files changed, 2083 insertions(+) diff --git a/drivers/memory/tegra/tegra210-emc.c b/drivers/memory/tegra/tegra210-emc.c index 80ea14d1e6ce..6eb26cdd9679 100644 --- a/drivers/memory/tegra/tegra210-emc.c +++ b/drivers/memory/tegra/tegra210-emc.c @@ -22,11 +22,27 @@ #define EMC_CLK_EMC_2X_CLK_SRC_SHIFT 29 #define EMC_CLK_EMC_2X_CLK_SRC_MASK \ (0x7 << EMC_CLK_EMC_2X_CLK_SRC_SHIFT) +#define EMC_CLK_SOURCE_PLLM_LJ 0x4 +#define EMC_CLK_SOURCE_PLLMB_LJ 0x5 #define EMC_CLK_MC_EMC_SAME_FREQ BIT(16) #define EMC_CLK_EMC_2X_CLK_DIVISOR_SHIFT 0 #define EMC_CLK_EMC_2X_CLK_DIVISOR_MASK \ (0xff << EMC_CLK_EMC_2X_CLK_DIVISOR_SHIFT) +#define CLK_RST_CONTROLLER_CLK_SOURCE_EMC_DLL 0x664 +#define DLL_CLK_EMC_DLL_CLK_SRC_SHIFT 29 +#define DLL_CLK_EMC_DLL_CLK_SRC_MASK \ + (0x7 << DLL_CLK_EMC_DLL_CLK_SRC_SHIFT) +#define DLL_CLK_EMC_DLL_DDLL_CLK_SEL_SHIFT 10 +#define DLL_CLK_EMC_DLL_DDLL_CLK_SEL_MASK \ + (0x3 << DLL_CLK_EMC_DLL_DDLL_CLK_SEL_SHIFT) +#define PLLM_VCOA 0 +#define PLLM_VCOB 1 +#define EMC_DLL_SWITCH_OUT 2 +#define DLL_CLK_EMC_DLL_CLK_DIVISOR_SHIFT 0 +#define DLL_CLK_EMC_DLL_CLK_DIVISOR_MASK \ + (0xff << DLL_CLK_EMC_DLL_CLK_DIVISOR_SHIFT) + #define MC_EMEM_ARB_MISC0_EMC_SAME_FREQ BIT(27) #define TEGRA_EMC_MAX_FREQS 16 @@ -35,7 +51,46 @@ #define CLK_CHANGE_DELAY 100 #define TRAINING_TIME 100 +#define EMC0_EMC_DATA_BRLSHFT_0_INDEX 2 +#define EMC1_EMC_DATA_BRLSHFT_0_INDEX 3 +#define EMC0_EMC_DATA_BRLSHFT_1_INDEX 4 +#define EMC1_EMC_DATA_BRLSHFT_1_INDEX 5 + +#define TRIM_REG(chan, rank, reg, byte) \ + (((EMC_PMACRO_OB_DDLL_LONG_DQ_RANK ## rank ## _ ## reg ## \ + _OB_DDLL_LONG_DQ_RANK ## rank ## _BYTE ## byte ## _MASK & \ + next_timing->trim_regs[EMC_PMACRO_OB_DDLL_LONG_DQ_RANK ## \ + rank ## _ ## reg ## _INDEX]) >> \ + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK ## rank ## _ ## reg ## \ + _OB_DDLL_LONG_DQ_RANK ## rank ## _BYTE ## byte ## _SHIFT) \ + + \ + (((EMC_DATA_BRLSHFT_ ## rank ## _RANK ## rank ## _BYTE ## \ + byte ## _DATA_BRLSHFT_MASK & \ + next_timing->trim_perch_regs[EMC ## chan ## \ + _EMC_DATA_BRLSHFT_ ## rank ## _INDEX]) >> \ + EMC_DATA_BRLSHFT_ ## rank ## _RANK ## rank ## _BYTE ## \ + byte ## _DATA_BRLSHFT_SHIFT) * 64)) + +#define CALC_TEMP(rank, reg, byte1, byte2, n) \ + (((new[n] << EMC_PMACRO_OB_DDLL_LONG_DQ_RANK ## rank ## _ ## \ + reg ## _OB_DDLL_LONG_DQ_RANK ## rank ## _BYTE ## byte1 ## _SHIFT) & \ + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK ## rank ## _ ## reg ## \ + _OB_DDLL_LONG_DQ_RANK ## rank ## _BYTE ## byte1 ## _MASK) \ + | \ + ((new[n + 1] << EMC_PMACRO_OB_DDLL_LONG_DQ_RANK ## rank ## _ ##\ + reg ## _OB_DDLL_LONG_DQ_RANK ## rank ## _BYTE ## byte2 ## _SHIFT) & \ + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK ## rank ## _ ## reg ## \ + _OB_DDLL_LONG_DQ_RANK ## rank ## _BYTE ## byte2 ## _MASK)) + enum { + TEGRA_DRAM_OVER_TEMP_NONE = 0, + TEGRA_DRAM_OVER_TEMP_REFRESH_X2, + TEGRA_DRAM_OVER_TEMP_REFRESH_X4, + TEGRA_DRAM_OVER_TEMP_THROTTLE, /* 4x Refresh + derating. 
*/ + TEGRA_DRAM_OVER_TEMP_MAX, +}; + +enum TEGRA_EMC_SOURCE { TEGRA_EMC_SRC_PLLM, TEGRA_EMC_SRC_PLLC, TEGRA_EMC_SRC_PLLP, @@ -80,17 +135,499 @@ static const struct supported_sequence supported_seqs[] = { }; static const struct supported_sequence *seq = supported_seqs; static DEFINE_SPINLOCK(emc_access_lock); +unsigned long dram_over_temp_state = TEGRA_DRAM_OVER_TEMP_NONE; + +const struct emc_table_register_offset reg_off = { + .burst_regs_off = { + EMC_RC, + EMC_RFC, + EMC_RFCPB, + EMC_REFCTRL2, + EMC_RFC_SLR, + EMC_RAS, + EMC_RP, + EMC_R2W, + EMC_W2R, + EMC_R2P, + EMC_W2P, + EMC_R2R, + EMC_TPPD, + EMC_CCDMW, + EMC_RD_RCD, + EMC_WR_RCD, + EMC_RRD, + EMC_REXT, + EMC_WEXT, + EMC_WDV_CHK, + EMC_WDV, + EMC_WSV, + EMC_WEV, + EMC_WDV_MASK, + EMC_WS_DURATION, + EMC_WE_DURATION, + EMC_QUSE, + EMC_QUSE_WIDTH, + EMC_IBDLY, + EMC_OBDLY, + EMC_EINPUT, + EMC_MRW6, + EMC_EINPUT_DURATION, + EMC_PUTERM_EXTRA, + EMC_PUTERM_WIDTH, + EMC_QRST, + EMC_QSAFE, + EMC_RDV, + EMC_RDV_MASK, + EMC_RDV_EARLY, + EMC_RDV_EARLY_MASK, + EMC_REFRESH, + EMC_BURST_REFRESH_NUM, + EMC_PRE_REFRESH_REQ_CNT, + EMC_PDEX2WR, + EMC_PDEX2RD, + EMC_PCHG2PDEN, + EMC_ACT2PDEN, + EMC_AR2PDEN, + EMC_RW2PDEN, + EMC_CKE2PDEN, + EMC_PDEX2CKE, + EMC_PDEX2MRR, + EMC_TXSR, + EMC_TXSRDLL, + EMC_TCKE, + EMC_TCKESR, + EMC_TPD, + EMC_TFAW, + EMC_TRPAB, + EMC_TCLKSTABLE, + EMC_TCLKSTOP, + EMC_MRW7, + EMC_TREFBW, + EMC_ODT_WRITE, + EMC_FBIO_CFG5, + EMC_FBIO_CFG7, + EMC_CFG_DIG_DLL, + EMC_CFG_DIG_DLL_PERIOD, + EMC_PMACRO_IB_RXRT, + EMC_CFG_PIPE_1, + EMC_CFG_PIPE_2, + EMC_PMACRO_QUSE_DDLL_RANK0_4, + EMC_PMACRO_QUSE_DDLL_RANK0_5, + EMC_PMACRO_QUSE_DDLL_RANK1_4, + EMC_PMACRO_QUSE_DDLL_RANK1_5, + EMC_MRW8, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_4, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_5, + EMC_PMACRO_OB_DDLL_LONG_DQS_RANK0_0, + EMC_PMACRO_OB_DDLL_LONG_DQS_RANK0_1, + EMC_PMACRO_OB_DDLL_LONG_DQS_RANK0_2, + EMC_PMACRO_OB_DDLL_LONG_DQS_RANK0_3, + EMC_PMACRO_OB_DDLL_LONG_DQS_RANK0_4, + EMC_PMACRO_OB_DDLL_LONG_DQS_RANK0_5, + EMC_PMACRO_OB_DDLL_LONG_DQS_RANK1_0, + EMC_PMACRO_OB_DDLL_LONG_DQS_RANK1_1, + EMC_PMACRO_OB_DDLL_LONG_DQS_RANK1_2, + EMC_PMACRO_OB_DDLL_LONG_DQS_RANK1_3, + EMC_PMACRO_OB_DDLL_LONG_DQS_RANK1_4, + EMC_PMACRO_OB_DDLL_LONG_DQS_RANK1_5, + EMC_PMACRO_DDLL_LONG_CMD_0, + EMC_PMACRO_DDLL_LONG_CMD_1, + EMC_PMACRO_DDLL_LONG_CMD_2, + EMC_PMACRO_DDLL_LONG_CMD_3, + EMC_PMACRO_DDLL_LONG_CMD_4, + EMC_PMACRO_DDLL_SHORT_CMD_0, + EMC_PMACRO_DDLL_SHORT_CMD_1, + EMC_PMACRO_DDLL_SHORT_CMD_2, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE0_3, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE1_3, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE2_3, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE3_3, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE4_3, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE5_3, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE6_3, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE7_3, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD0_3, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD1_3, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD2_3, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD3_3, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE0_3, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE1_3, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE2_3, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE3_3, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE4_3, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE5_3, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE6_3, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE7_3, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD0_0, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD0_1, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD0_2, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD0_3, + 
EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD1_0, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD1_1, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD1_2, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD1_3, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD2_0, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD2_1, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD2_2, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD2_3, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD3_0, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD3_1, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD3_2, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD3_3, + EMC_TXDSRVTTGEN, + EMC_FDPD_CTRL_DQ, + EMC_FDPD_CTRL_CMD, + EMC_FBIO_SPARE, + EMC_ZCAL_INTERVAL, + EMC_ZCAL_WAIT_CNT, + EMC_MRS_WAIT_CNT, + EMC_MRS_WAIT_CNT2, + EMC_AUTO_CAL_CHANNEL, + EMC_DLL_CFG_0, + EMC_DLL_CFG_1, + EMC_PMACRO_AUTOCAL_CFG_COMMON, + EMC_PMACRO_ZCTRL, + EMC_CFG, + EMC_CFG_PIPE, + EMC_DYN_SELF_REF_CONTROL, + EMC_QPOP, + EMC_DQS_BRLSHFT_0, + EMC_DQS_BRLSHFT_1, + EMC_CMD_BRLSHFT_2, + EMC_CMD_BRLSHFT_3, + EMC_PMACRO_PAD_CFG_CTRL, + EMC_PMACRO_DATA_PAD_RX_CTRL, + EMC_PMACRO_CMD_PAD_RX_CTRL, + EMC_PMACRO_DATA_RX_TERM_MODE, + EMC_PMACRO_CMD_RX_TERM_MODE, + EMC_PMACRO_CMD_PAD_TX_CTRL, + EMC_PMACRO_DATA_PAD_TX_CTRL, + EMC_PMACRO_COMMON_PAD_TX_CTRL, + EMC_PMACRO_VTTGEN_CTRL_0, + EMC_PMACRO_VTTGEN_CTRL_1, + EMC_PMACRO_VTTGEN_CTRL_2, + EMC_PMACRO_BRICK_CTRL_RFU1, + EMC_PMACRO_CMD_BRICK_CTRL_FDPD, + EMC_PMACRO_BRICK_CTRL_RFU2, + EMC_PMACRO_DATA_BRICK_CTRL_FDPD, + EMC_PMACRO_BG_BIAS_CTRL_0, + EMC_CFG_3, + EMC_PMACRO_TX_PWRD_0, + EMC_PMACRO_TX_PWRD_1, + EMC_PMACRO_TX_PWRD_2, + EMC_PMACRO_TX_PWRD_3, + EMC_PMACRO_TX_PWRD_4, + EMC_PMACRO_TX_PWRD_5, + EMC_CONFIG_SAMPLE_DELAY, + EMC_PMACRO_TX_SEL_CLK_SRC_0, + EMC_PMACRO_TX_SEL_CLK_SRC_1, + EMC_PMACRO_TX_SEL_CLK_SRC_2, + EMC_PMACRO_TX_SEL_CLK_SRC_3, + EMC_PMACRO_TX_SEL_CLK_SRC_4, + EMC_PMACRO_TX_SEL_CLK_SRC_5, + EMC_PMACRO_DDLL_BYPASS, + EMC_PMACRO_DDLL_PWRD_0, + EMC_PMACRO_DDLL_PWRD_1, + EMC_PMACRO_DDLL_PWRD_2, + EMC_PMACRO_CMD_CTRL_0, + EMC_PMACRO_CMD_CTRL_1, + EMC_PMACRO_CMD_CTRL_2, + EMC_TR_TIMING_0, + EMC_TR_DVFS, + EMC_TR_CTRL_1, + EMC_TR_RDV, + EMC_TR_QPOP, + EMC_TR_RDV_MASK, + EMC_MRW14, + EMC_TR_QSAFE, + EMC_TR_QRST, + EMC_TRAINING_CTRL, + EMC_TRAINING_SETTLE, + EMC_TRAINING_VREF_SETTLE, + EMC_TRAINING_CA_FINE_CTRL, + EMC_TRAINING_CA_CTRL_MISC, + EMC_TRAINING_CA_CTRL_MISC1, + EMC_TRAINING_CA_VREF_CTRL, + EMC_TRAINING_QUSE_CORS_CTRL, + EMC_TRAINING_QUSE_FINE_CTRL, + EMC_TRAINING_QUSE_CTRL_MISC, + EMC_TRAINING_QUSE_VREF_CTRL, + EMC_TRAINING_READ_FINE_CTRL, + EMC_TRAINING_READ_CTRL_MISC, + EMC_TRAINING_READ_VREF_CTRL, + EMC_TRAINING_WRITE_FINE_CTRL, + EMC_TRAINING_WRITE_CTRL_MISC, + EMC_TRAINING_WRITE_VREF_CTRL, + EMC_TRAINING_MPC, + EMC_MRW15, + }, + .trim_regs_off = { + EMC_PMACRO_IB_DDLL_LONG_DQS_RANK0_0, + EMC_PMACRO_IB_DDLL_LONG_DQS_RANK0_1, + EMC_PMACRO_IB_DDLL_LONG_DQS_RANK0_2, + EMC_PMACRO_IB_DDLL_LONG_DQS_RANK0_3, + EMC_PMACRO_IB_DDLL_LONG_DQS_RANK1_0, + EMC_PMACRO_IB_DDLL_LONG_DQS_RANK1_1, + EMC_PMACRO_IB_DDLL_LONG_DQS_RANK1_2, + EMC_PMACRO_IB_DDLL_LONG_DQS_RANK1_3, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE0_0, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE0_1, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE0_2, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE1_0, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE1_1, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE1_2, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE2_0, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE2_1, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE2_2, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE3_0, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE3_1, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE3_2, + 
EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE4_0, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE4_1, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE4_2, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE5_0, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE5_1, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE5_2, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE6_0, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE6_1, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE6_2, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE7_0, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE7_1, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE7_2, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE0_0, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE0_1, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE0_2, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE1_0, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE1_1, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE1_2, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE2_0, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE2_1, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE2_2, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE3_0, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE3_1, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE3_2, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE4_0, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE4_1, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE4_2, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE5_0, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE5_1, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE5_2, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE6_0, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE6_1, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE6_2, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE7_0, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE7_1, + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE7_2, + EMC_PMACRO_IB_VREF_DQS_0, + EMC_PMACRO_IB_VREF_DQS_1, + EMC_PMACRO_IB_VREF_DQ_0, + EMC_PMACRO_IB_VREF_DQ_1, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_4, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_5, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE0_0, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE0_1, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE0_2, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE1_0, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE1_1, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE1_2, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE2_0, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE2_1, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE2_2, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE3_0, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE3_1, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE3_2, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE4_0, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE4_1, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE4_2, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE5_0, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE5_1, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE5_2, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE6_0, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE6_1, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE6_2, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE7_0, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE7_1, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE7_2, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD0_0, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD0_1, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD0_2, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD1_0, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD1_1, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD1_2, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD2_0, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD2_1, + 
EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD2_2, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD3_0, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD3_1, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD3_2, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE0_0, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE0_1, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE0_2, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE1_0, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE1_1, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE1_2, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE2_0, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE2_1, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE2_2, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE3_0, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE3_1, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE3_2, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE4_0, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE4_1, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE4_2, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE5_0, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE5_1, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE5_2, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE6_0, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE6_1, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE6_2, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE7_0, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE7_1, + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE7_2, + EMC_PMACRO_QUSE_DDLL_RANK0_0, + EMC_PMACRO_QUSE_DDLL_RANK0_1, + EMC_PMACRO_QUSE_DDLL_RANK0_2, + EMC_PMACRO_QUSE_DDLL_RANK0_3, + EMC_PMACRO_QUSE_DDLL_RANK1_0, + EMC_PMACRO_QUSE_DDLL_RANK1_1, + EMC_PMACRO_QUSE_DDLL_RANK1_2, + EMC_PMACRO_QUSE_DDLL_RANK1_3 + }, + .burst_mc_regs_off = { + MC_EMEM_ARB_CFG, + MC_EMEM_ARB_OUTSTANDING_REQ, + MC_EMEM_ARB_REFPB_HP_CTRL, + MC_EMEM_ARB_REFPB_BANK_CTRL, + MC_EMEM_ARB_TIMING_RCD, + MC_EMEM_ARB_TIMING_RP, + MC_EMEM_ARB_TIMING_RC, + MC_EMEM_ARB_TIMING_RAS, + MC_EMEM_ARB_TIMING_FAW, + MC_EMEM_ARB_TIMING_RRD, + MC_EMEM_ARB_TIMING_RAP2PRE, + MC_EMEM_ARB_TIMING_WAP2PRE, + MC_EMEM_ARB_TIMING_R2R, + MC_EMEM_ARB_TIMING_W2W, + MC_EMEM_ARB_TIMING_R2W, + MC_EMEM_ARB_TIMING_CCDMW, + MC_EMEM_ARB_TIMING_W2R, + MC_EMEM_ARB_TIMING_RFCPB, + MC_EMEM_ARB_DA_TURNS, + MC_EMEM_ARB_DA_COVERS, + MC_EMEM_ARB_MISC0, + MC_EMEM_ARB_MISC1, + MC_EMEM_ARB_MISC2, + MC_EMEM_ARB_RING1_THROTTLE, + MC_EMEM_ARB_DHYST_CTRL, + MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_0, + MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_1, + MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_2, + MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_3, + MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_4, + MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_5, + MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_6, + MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_7, + }, + .la_scale_regs_off = { + MC_MLL_MPCORER_PTSA_RATE, + MC_FTOP_PTSA_RATE, + MC_PTSA_GRANT_DECREMENT, + MC_LATENCY_ALLOWANCE_XUSB_0, + MC_LATENCY_ALLOWANCE_XUSB_1, + MC_LATENCY_ALLOWANCE_TSEC_0, + MC_LATENCY_ALLOWANCE_SDMMCA_0, + MC_LATENCY_ALLOWANCE_SDMMCAA_0, + MC_LATENCY_ALLOWANCE_SDMMC_0, + MC_LATENCY_ALLOWANCE_SDMMCAB_0, + MC_LATENCY_ALLOWANCE_PPCS_0, + MC_LATENCY_ALLOWANCE_PPCS_1, + MC_LATENCY_ALLOWANCE_MPCORE_0, + MC_LATENCY_ALLOWANCE_HC_0, + MC_LATENCY_ALLOWANCE_HC_1, + MC_LATENCY_ALLOWANCE_AVPC_0, + MC_LATENCY_ALLOWANCE_GPU_0, + MC_LATENCY_ALLOWANCE_GPU2_0, + MC_LATENCY_ALLOWANCE_NVENC_0, + MC_LATENCY_ALLOWANCE_NVDEC_0, + MC_LATENCY_ALLOWANCE_VIC_0, + MC_LATENCY_ALLOWANCE_VI2_0, + MC_LATENCY_ALLOWANCE_ISP2_0, + MC_LATENCY_ALLOWANCE_ISP2_1, + }, + .burst_regs_per_ch_off = { + { .bank = REG_EMC0, .offset = EMC_MRW10, }, + { .bank = REG_EMC1, .offset = EMC_MRW10, }, + { .bank = REG_EMC0, .offset = EMC_MRW11, }, + { .bank = REG_EMC1, .offset = EMC_MRW11, }, + { .bank = REG_EMC0, .offset = EMC_MRW12, }, + { .bank = REG_EMC1, .offset = EMC_MRW12, }, + 
{ .bank = REG_EMC0, .offset = EMC_MRW13, }, + { .bank = REG_EMC1, .offset = EMC_MRW13, }, + }, + .trim_regs_per_ch_off = { + { .bank = REG_EMC0, .offset = EMC_CMD_BRLSHFT_0, }, + { .bank = REG_EMC1, .offset = EMC_CMD_BRLSHFT_1, }, + { .bank = REG_EMC0, .offset = EMC_DATA_BRLSHFT_0, }, + { .bank = REG_EMC1, .offset = EMC_DATA_BRLSHFT_0, }, + { .bank = REG_EMC0, .offset = EMC_DATA_BRLSHFT_1, }, + { .bank = REG_EMC1, .offset = EMC_DATA_BRLSHFT_1, }, + { .bank = REG_EMC0, .offset = EMC_QUSE_BRLSHFT_0, }, + { .bank = REG_EMC1, .offset = EMC_QUSE_BRLSHFT_1, }, + { .bank = REG_EMC0, .offset = EMC_QUSE_BRLSHFT_2, }, + { .bank = REG_EMC1, .offset = EMC_QUSE_BRLSHFT_3, }, + }, + .vref_regs_per_ch_off = { + { .bank = REG_EMC0, + .offset = EMC_TRAINING_OPT_DQS_IB_VREF_RANK0, }, + { .bank = REG_EMC1, + .offset = EMC_TRAINING_OPT_DQS_IB_VREF_RANK0, }, + { .bank = REG_EMC0, + .offset = EMC_TRAINING_OPT_DQS_IB_VREF_RANK1, }, + { .bank = REG_EMC1, + .offset = EMC_TRAINING_OPT_DQS_IB_VREF_RANK1, }, + }, +}; static inline struct tegra_emc *clk_hw_to_emc(struct clk_hw *hw) { return container_of(hw, struct tegra_emc, hw); } +void emc_writel(struct tegra_emc *emc, u32 val, unsigned long offset) +{ + writel_relaxed(val, emc->emc_base[REG_EMC] + offset); +} + u32 emc_readl(struct tegra_emc *emc, unsigned long offset) { return readl_relaxed(emc->emc_base[REG_EMC] + offset); } +u32 emc1_readl(struct tegra_emc *emc, unsigned long offset) +{ + return readl_relaxed(emc->emc_base[REG_EMC1] + offset); +} + +void emc_writel_per_ch(struct tegra_emc *emc, u32 val, int type, + unsigned long offset) +{ + switch (type) { + case REG_EMC: + case REG_EMC0: + return writel_relaxed(val, emc->emc_base[REG_EMC] + offset); + case REG_EMC1: + return writel_relaxed(val, emc->emc_base[REG_EMC1] + offset); + } +} + u32 emc_readl_per_ch(struct tegra_emc *emc, int type, unsigned long offset) { @@ -109,6 +646,14 @@ u32 emc_readl_per_ch(struct tegra_emc *emc, int type, return val; } +void ccfifo_writel(struct tegra_emc *emc, u32 val, unsigned long addr, + u32 delay) +{ + writel_relaxed(val, emc->emc_base[REG_EMC] + EMC_CCFIFO_DATA); + writel_relaxed((addr & 0xffff) | ((delay & 0x7fff) << 16) | (1 << 31), + emc->emc_base[REG_EMC] + EMC_CCFIFO_ADDR); +} + static inline u32 emc_src_val(u32 val) { return (val & EMC_CLK_EMC_2X_CLK_SRC_MASK) >> @@ -176,6 +721,826 @@ static inline unsigned long emc_get_src_clk_rate(void) return rate; } +static void tegra210_change_dll_src(struct tegra_emc *emc, + u32 clksrc) +{ + u32 out_enb_x; + u32 dll_setting = emc->next_timing->dll_clk_src; + u32 emc_clk_src; + u32 emc_clk_div; + + out_enb_x = 0; + emc_clk_src = (clksrc & EMC_CLK_EMC_2X_CLK_SRC_MASK) >> + EMC_CLK_EMC_2X_CLK_SRC_SHIFT; + emc_clk_div = (clksrc & EMC_CLK_EMC_2X_CLK_DIVISOR_MASK) >> + EMC_CLK_EMC_2X_CLK_DIVISOR_SHIFT; + + dll_setting &= ~(DLL_CLK_EMC_DLL_CLK_SRC_MASK | + DLL_CLK_EMC_DLL_CLK_DIVISOR_MASK); + dll_setting |= emc_clk_src << DLL_CLK_EMC_DLL_CLK_SRC_SHIFT; + dll_setting |= emc_clk_div << DLL_CLK_EMC_DLL_CLK_DIVISOR_SHIFT; + + dll_setting &= ~DLL_CLK_EMC_DLL_DDLL_CLK_SEL_MASK; + if (emc_clk_src == EMC_CLK_SOURCE_PLLMB_LJ) + dll_setting |= (PLLM_VCOB << + DLL_CLK_EMC_DLL_DDLL_CLK_SEL_SHIFT); + else if (emc_clk_src == EMC_CLK_SOURCE_PLLM_LJ) + dll_setting |= (PLLM_VCOA << + DLL_CLK_EMC_DLL_DDLL_CLK_SEL_SHIFT); + else + dll_setting |= (EMC_DLL_SWITCH_OUT << + DLL_CLK_EMC_DLL_DDLL_CLK_SEL_SHIFT); + + tegra210_clk_emc_dll_update_setting(dll_setting); + + if (emc->next_timing->clk_out_enb_x_0_clk_enb_emc_dll) + 
tegra210_clk_emc_dll_enable(true); + else + tegra210_clk_emc_dll_enable(false); +} + +void emc_do_clock_change(struct tegra_emc *emc, u32 clksrc) +{ + int err; + + mc_readl(emc->mc, MC_EMEM_ADR_CFG); + emc_readl(emc, EMC_INTSTATUS); + + tegra210_clk_emc_update_setting(clksrc); + + err = emc_wait_for_update(emc, EMC_INTSTATUS, + EMC_INTSTATUS_CLKCHANGE_COMPLETE, true, + REG_EMC); + if (err) + pr_warn("%s: clock change completion error: %d", __func__, err); +} + +struct emc_table *emc_get_timing_from_freq(struct tegra_emc *emc, + unsigned long rate) +{ + int i; + + for (i = 0; i < emc->emc_table_size; i++) + if (emc->emc_table[i].rate == rate) + return &emc->emc_table[i]; + + return NULL; +} + +int emc_wait_for_update(struct tegra_emc *emc, u32 status_reg, u32 bit_mask, + bool updated_state, int chan) +{ + int i, err = -ETIMEDOUT; + u32 reg; + + for (i = 0; i < EMC_STATUS_UPDATE_TIMEOUT; i++) { + reg = emc_readl_per_ch(emc, chan, status_reg); + if (!!(reg & bit_mask) == updated_state) { + err = 0; + goto done; + } + udelay(1); + } + +done: + return err; +} + +void emc_set_shadow_bypass(struct tegra_emc *emc, int set) +{ + u32 emc_dbg = emc_readl(emc, EMC_DBG); + + if (set) + emc_writel(emc, emc_dbg | EMC_DBG_WRITE_MUX_ACTIVE, EMC_DBG); + else + emc_writel(emc, emc_dbg & ~EMC_DBG_WRITE_MUX_ACTIVE, EMC_DBG); +} + +u32 emc_get_dll_state(struct emc_table *next_timing) +{ + bool next_dll_enabled; + + next_dll_enabled = !(next_timing->emc_emrs & 0x1); + if (next_dll_enabled) + return DLL_ON; + else + return DLL_OFF; +} + +u32 div_o3(u32 a, u32 b) +{ + u32 result = a / b; + + if ((b * result) < a) + return result + 1; + else + return result; +} + +void emc_timing_update(struct tegra_emc *emc, int dual_chan) +{ + int err = 0; + + emc_writel(emc, 0x1, EMC_TIMING_CONTROL); + err |= emc_wait_for_update(emc, EMC_EMC_STATUS, + EMC_EMC_STATUS_TIMING_UPDATE_STALLED, + false, REG_EMC); + if (dual_chan) + err |= emc_wait_for_update(emc, EMC_EMC_STATUS, + EMC_EMC_STATUS_TIMING_UPDATE_STALLED, + false, REG_EMC1); + if (err) + pr_warn("%s: timing update error: %d", __func__, err); +} + +u32 tegra210_actual_osc_clocks(u32 in) +{ + if (in < 0x40) + return in * 16; + else if (in < 0x80) + return 2048; + else if (in < 0xc0) + return 4096; + else + return 8192; +} + +void tegra210_start_periodic_compensation(struct tegra_emc *emc) +{ + u32 mpc_req = 0x4b; + + emc_writel(emc, mpc_req, EMC_MPC); + mpc_req = emc_readl(emc, EMC_MPC); +} + +u32 tegra210_apply_periodic_compensation_trimmer(struct emc_table *next_timing, + u32 offset) +{ + u32 i, temp = 0; + u32 next_timing_rate_mhz = next_timing->rate / 1000; + s32 tree_delta[4]; + s32 tree_delta_taps[4]; + s32 new[] = { + TRIM_REG(0, 0, 0, 0), + TRIM_REG(0, 0, 0, 1), + TRIM_REG(0, 0, 1, 2), + TRIM_REG(0, 0, 1, 3), + + TRIM_REG(1, 0, 2, 4), + TRIM_REG(1, 0, 2, 5), + TRIM_REG(1, 0, 3, 6), + TRIM_REG(1, 0, 3, 7), + + TRIM_REG(0, 1, 0, 0), + TRIM_REG(0, 1, 0, 1), + TRIM_REG(0, 1, 1, 2), + TRIM_REG(0, 1, 1, 3), + + TRIM_REG(1, 1, 2, 4), + TRIM_REG(1, 1, 2, 5), + TRIM_REG(1, 1, 3, 6), + TRIM_REG(1, 1, 3, 7) + }; + + switch (offset) { + case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0: + case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1: + case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2: + case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3: + case EMC_DATA_BRLSHFT_0: + tree_delta[0] = 128 * + (next_timing->current_dram_clktree[C0D0U0] - + next_timing->trained_dram_clktree[C0D0U0]); + tree_delta[1] = 128 * + (next_timing->current_dram_clktree[C0D0U1] - + next_timing->trained_dram_clktree[C0D0U1]); + tree_delta[2] = 
128 * + (next_timing->current_dram_clktree[C1D0U0] - + next_timing->trained_dram_clktree[C1D0U0]); + tree_delta[3] = 128 * + (next_timing->current_dram_clktree[C1D0U1] - + next_timing->trained_dram_clktree[C1D0U1]); + + tree_delta_taps[0] = (tree_delta[0] * + (s32)next_timing_rate_mhz) / 1000000; + tree_delta_taps[1] = (tree_delta[1] * + (s32)next_timing_rate_mhz) / 1000000; + tree_delta_taps[2] = (tree_delta[2] * + (s32)next_timing_rate_mhz) / 1000000; + tree_delta_taps[3] = (tree_delta[3] * + (s32)next_timing_rate_mhz) / 1000000; + + for (i = 0; i < 4; i++) { + if ((tree_delta_taps[i] > next_timing->tree_margin) || + (tree_delta_taps[i] < + (-1 * next_timing->tree_margin))) { + new[i * 2] = new[i * 2] + tree_delta_taps[i]; + new[i * 2 + 1] = new[i * 2 + 1] + + tree_delta_taps[i]; + } + } + + if (offset == EMC_DATA_BRLSHFT_0) { + for (i = 0; i < 8; i++) + new[i] = new[i] / 64; + } else { + for (i = 0; i < 8; i++) + new[i] = new[i] % 64; + } + break; + + case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0: + case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1: + case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2: + case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3: + case EMC_DATA_BRLSHFT_1: + tree_delta[0] = 128 * + (next_timing->current_dram_clktree[C0D1U0] - + next_timing->trained_dram_clktree[C0D1U0]); + tree_delta[1] = 128 * + (next_timing->current_dram_clktree[C0D1U1] - + next_timing->trained_dram_clktree[C0D1U1]); + tree_delta[2] = 128 * + (next_timing->current_dram_clktree[C1D1U0] - + next_timing->trained_dram_clktree[C1D1U0]); + tree_delta[3] = 128 * + (next_timing->current_dram_clktree[C1D1U1] - + next_timing->trained_dram_clktree[C1D1U1]); + + tree_delta_taps[0] = (tree_delta[0] * + (s32)next_timing_rate_mhz) / 1000000; + tree_delta_taps[1] = (tree_delta[1] * + (s32)next_timing_rate_mhz) / 1000000; + tree_delta_taps[2] = (tree_delta[2] * + (s32)next_timing_rate_mhz) / 1000000; + tree_delta_taps[3] = (tree_delta[3] * + (s32)next_timing_rate_mhz) / 1000000; + + for (i = 0; i < 4; i++) { + if ((tree_delta_taps[i] > next_timing->tree_margin) || + (tree_delta_taps[i] < + (-1 * next_timing->tree_margin))) { + new[8 + i * 2] = new[8 + i * 2] + + tree_delta_taps[i]; + new[8 + i * 2 + 1] = new[8 + i * 2 + 1] + + tree_delta_taps[i]; + } + } + + if (offset == EMC_DATA_BRLSHFT_1) { + for (i = 0; i < 8; i++) + new[i + 8] = new[i + 8] / 64; + } else { + for (i = 0; i < 8; i++) + new[i + 8] = new[i + 8] % 64; + } + break; + } + + switch (offset) { + case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0: + temp = CALC_TEMP(0, 0, 0, 1, 0); + break; + case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1: + temp = CALC_TEMP(0, 1, 2, 3, 2); + break; + case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2: + temp = CALC_TEMP(0, 2, 4, 5, 4); + break; + case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3: + temp = CALC_TEMP(0, 3, 6, 7, 6); + break; + case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0: + temp = CALC_TEMP(1, 0, 0, 1, 8); + break; + case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1: + temp = CALC_TEMP(1, 1, 2, 3, 10); + break; + case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2: + temp = CALC_TEMP(1, 2, 4, 5, 12); + break; + case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3: + temp = CALC_TEMP(1, 3, 6, 7, 14); + break; + case EMC_DATA_BRLSHFT_0: + temp = ((new[0] << + EMC_DATA_BRLSHFT_0_RANK0_BYTE0_DATA_BRLSHFT_SHIFT) & + EMC_DATA_BRLSHFT_0_RANK0_BYTE0_DATA_BRLSHFT_MASK) | + ((new[1] << + EMC_DATA_BRLSHFT_0_RANK0_BYTE1_DATA_BRLSHFT_SHIFT) & + EMC_DATA_BRLSHFT_0_RANK0_BYTE1_DATA_BRLSHFT_MASK) | + ((new[2] << + EMC_DATA_BRLSHFT_0_RANK0_BYTE2_DATA_BRLSHFT_SHIFT) & + EMC_DATA_BRLSHFT_0_RANK0_BYTE2_DATA_BRLSHFT_MASK) | + 
((new[3] << + EMC_DATA_BRLSHFT_0_RANK0_BYTE3_DATA_BRLSHFT_SHIFT) & + EMC_DATA_BRLSHFT_0_RANK0_BYTE3_DATA_BRLSHFT_MASK) | + ((new[4] << + EMC_DATA_BRLSHFT_0_RANK0_BYTE4_DATA_BRLSHFT_SHIFT) & + EMC_DATA_BRLSHFT_0_RANK0_BYTE4_DATA_BRLSHFT_MASK) | + ((new[5] << + EMC_DATA_BRLSHFT_0_RANK0_BYTE5_DATA_BRLSHFT_SHIFT) & + EMC_DATA_BRLSHFT_0_RANK0_BYTE5_DATA_BRLSHFT_MASK) | + ((new[6] << + EMC_DATA_BRLSHFT_0_RANK0_BYTE6_DATA_BRLSHFT_SHIFT) & + EMC_DATA_BRLSHFT_0_RANK0_BYTE6_DATA_BRLSHFT_MASK) | + ((new[7] << + EMC_DATA_BRLSHFT_0_RANK0_BYTE7_DATA_BRLSHFT_SHIFT) & + EMC_DATA_BRLSHFT_0_RANK0_BYTE7_DATA_BRLSHFT_MASK); + break; + case EMC_DATA_BRLSHFT_1: + temp = ((new[8] << + EMC_DATA_BRLSHFT_1_RANK1_BYTE0_DATA_BRLSHFT_SHIFT) & + EMC_DATA_BRLSHFT_1_RANK1_BYTE0_DATA_BRLSHFT_MASK) | + ((new[9] << + EMC_DATA_BRLSHFT_1_RANK1_BYTE1_DATA_BRLSHFT_SHIFT) & + EMC_DATA_BRLSHFT_1_RANK1_BYTE1_DATA_BRLSHFT_MASK) | + ((new[10] << + EMC_DATA_BRLSHFT_1_RANK1_BYTE2_DATA_BRLSHFT_SHIFT) & + EMC_DATA_BRLSHFT_1_RANK1_BYTE2_DATA_BRLSHFT_MASK) | + ((new[11] << + EMC_DATA_BRLSHFT_1_RANK1_BYTE3_DATA_BRLSHFT_SHIFT) & + EMC_DATA_BRLSHFT_1_RANK1_BYTE3_DATA_BRLSHFT_MASK) | + ((new[12] << + EMC_DATA_BRLSHFT_1_RANK1_BYTE4_DATA_BRLSHFT_SHIFT) & + EMC_DATA_BRLSHFT_1_RANK1_BYTE4_DATA_BRLSHFT_MASK) | + ((new[13] << + EMC_DATA_BRLSHFT_1_RANK1_BYTE5_DATA_BRLSHFT_SHIFT) & + EMC_DATA_BRLSHFT_1_RANK1_BYTE5_DATA_BRLSHFT_MASK) | + ((new[14] << + EMC_DATA_BRLSHFT_1_RANK1_BYTE6_DATA_BRLSHFT_SHIFT) & + EMC_DATA_BRLSHFT_1_RANK1_BYTE6_DATA_BRLSHFT_MASK) | + ((new[15] << + EMC_DATA_BRLSHFT_1_RANK1_BYTE7_DATA_BRLSHFT_SHIFT) & + EMC_DATA_BRLSHFT_1_RANK1_BYTE7_DATA_BRLSHFT_MASK); + break; + default: + break; + } + + return temp; +} + +u32 tegra210_dll_prelock(struct tegra_emc *emc, int dvfs_with_training, + u32 clksrc) +{ + u32 emc_dig_dll_status; + u32 dll_locked; + u32 dll_out; + u32 emc_cfg_dig_dll; + u32 emc_dll_cfg_0; + u32 emc_dll_cfg_1; + u32 ddllcal_ctrl_start_trim_val; + u32 dll_en; + u32 dual_channel_lpddr4_case; + u32 dll_priv_updated; + + dual_channel_lpddr4_case = + !!(emc_readl(emc, EMC_FBIO_CFG7) & EMC_FBIO_CFG7_CH1_ENABLE) & + !!(emc_readl(emc, EMC_FBIO_CFG7) & EMC_FBIO_CFG7_CH0_ENABLE); + + emc_dig_dll_status = 0; + dll_locked = 0; + dll_out = 0; + emc_cfg_dig_dll = 0; + emc_dll_cfg_0 = 0; + emc_dll_cfg_1 = 0; + ddllcal_ctrl_start_trim_val = 0; + dll_en = 0; + + emc_cfg_dig_dll = emc_readl(emc, EMC_CFG_DIG_DLL) & + ~EMC_CFG_DIG_DLL_CFG_DLL_LOCK_LIMIT_MASK; + emc_cfg_dig_dll |= (3 << EMC_CFG_DIG_DLL_CFG_DLL_LOCK_LIMIT_SHIFT); + emc_cfg_dig_dll &= ~EMC_CFG_DIG_DLL_CFG_DLL_EN; + emc_cfg_dig_dll &= ~EMC_CFG_DIG_DLL_CFG_DLL_MODE_MASK; + emc_cfg_dig_dll |= (3 << EMC_CFG_DIG_DLL_CFG_DLL_MODE_SHIFT); + emc_cfg_dig_dll |= EMC_CFG_DIG_DLL_CFG_DLL_STALL_ALL_TRAFFIC; + emc_cfg_dig_dll &= ~EMC_CFG_DIG_DLL_CFG_DLL_STALL_RW_UNTIL_LOCK; + emc_cfg_dig_dll &= ~EMC_CFG_DIG_DLL_CFG_DLL_STALL_ALL_UNTIL_LOCK; + + emc_writel(emc, emc_cfg_dig_dll, EMC_CFG_DIG_DLL); + emc_writel(emc, 1, EMC_TIMING_CONTROL); + + emc_wait_for_update(emc, EMC_EMC_STATUS, + EMC_EMC_STATUS_TIMING_UPDATE_STALLED, + 0, REG_EMC); + if (dual_channel_lpddr4_case) + emc_wait_for_update(emc, EMC_EMC_STATUS, + EMC_EMC_STATUS_TIMING_UPDATE_STALLED, + 0, REG_EMC1); + + do { + emc_cfg_dig_dll = emc_readl(emc, EMC_CFG_DIG_DLL); + dll_en = emc_cfg_dig_dll & EMC_CFG_DIG_DLL_CFG_DLL_EN; + } while (dll_en == 1); + + if (dual_channel_lpddr4_case) { + do { + emc_cfg_dig_dll = emc1_readl(emc, EMC_CFG_DIG_DLL); + dll_en = emc_cfg_dig_dll & EMC_CFG_DIG_DLL_CFG_DLL_EN; + } while (dll_en == 1); + } + + 
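+	/*
+	 * The DLL is now disabled on both channels. Program the new DLL
+	 * configuration: load EMC_DLL_CFG_0 from the target timing, pick a
+	 * DDLLCAL start trim based on the new rate (in kHz), switch the DLL
+	 * clock source, then re-enable the DLL and poll until it locks.
+	 */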
emc_dll_cfg_0 = emc->next_timing->burst_regs[EMC_DLL_CFG_0_INDEX]; + + emc_writel(emc, emc_dll_cfg_0, EMC_DLL_CFG_0); + + if (emc->next_timing->rate >= 400000 + && emc->next_timing->rate < 600000) + ddllcal_ctrl_start_trim_val = 150; + else if (emc->next_timing->rate >= 600000 + && emc->next_timing->rate < 800000) + ddllcal_ctrl_start_trim_val = 100; + else if (emc->next_timing->rate >= 800000 + && emc->next_timing->rate < 1000000) + ddllcal_ctrl_start_trim_val = 70; + else if (emc->next_timing->rate >= 1000000 + && emc->next_timing->rate < 1200000) + ddllcal_ctrl_start_trim_val = 30; + else + ddllcal_ctrl_start_trim_val = 20; + + emc_dll_cfg_1 = emc_readl(emc, EMC_DLL_CFG_1); + emc_dll_cfg_1 &= EMC_DLL_CFG_1_DDLLCAL_CTRL_START_TRIM_MASK; + emc_dll_cfg_1 |= ddllcal_ctrl_start_trim_val; + emc_writel(emc, emc_dll_cfg_1, EMC_DLL_CFG_1); + + tegra210_change_dll_src(emc, clksrc); + + emc_cfg_dig_dll = emc_readl(emc, EMC_CFG_DIG_DLL); + emc_cfg_dig_dll |= EMC_CFG_DIG_DLL_CFG_DLL_EN; + emc_writel(emc, emc_cfg_dig_dll, EMC_CFG_DIG_DLL); + + emc_timing_update(emc, dual_channel_lpddr4_case ? + DUAL_CHANNEL : SINGLE_CHANNEL); + + do { + emc_cfg_dig_dll = emc_readl(emc, EMC_CFG_DIG_DLL); + dll_en = emc_cfg_dig_dll & EMC_CFG_DIG_DLL_CFG_DLL_EN; + } while (dll_en == 0); + + if (dual_channel_lpddr4_case) { + do { + emc_cfg_dig_dll = emc1_readl(emc, EMC_CFG_DIG_DLL); + dll_en = emc_cfg_dig_dll & EMC_CFG_DIG_DLL_CFG_DLL_EN; + } while (dll_en == 0); + } + + do { + emc_dig_dll_status = emc_readl(emc, EMC_DIG_DLL_STATUS); + dll_locked = emc_dig_dll_status & EMC_DIG_DLL_STATUS_DLL_LOCK; + dll_priv_updated = emc_dig_dll_status & + EMC_DIG_DLL_STATUS_DLL_PRIV_UPDATED; + } while (!dll_locked || !dll_priv_updated); + + emc_dig_dll_status = emc_readl(emc, EMC_DIG_DLL_STATUS); + return emc_dig_dll_status & EMC_DIG_DLL_STATUS_DLL_OUT_MASK; +} + +u32 tegra210_dvfs_power_ramp_up(struct tegra_emc *emc, u32 clk, + int flip_backward) +{ + u32 pmacro_cmd_pad; + u32 pmacro_dq_pad; + u32 pmacro_rfu1; + u32 pmacro_cfg5; + u32 pmacro_common_tx; + u32 ramp_up_wait = 0; + + if (flip_backward) { + pmacro_cmd_pad = emc->current_timing->burst_regs[ + EMC_PMACRO_CMD_PAD_TX_CTRL_INDEX]; + pmacro_dq_pad = emc->current_timing->burst_regs[ + EMC_PMACRO_DATA_PAD_TX_CTRL_INDEX]; + pmacro_rfu1 = emc->current_timing->burst_regs[ + EMC_PMACRO_BRICK_CTRL_RFU1_INDEX]; + pmacro_cfg5 = emc->current_timing->burst_regs[ + EMC_FBIO_CFG5_INDEX]; + pmacro_common_tx = emc->current_timing->burst_regs[ + EMC_PMACRO_COMMON_PAD_TX_CTRL_INDEX]; + } else { + pmacro_cmd_pad = emc->next_timing->burst_regs[ + EMC_PMACRO_CMD_PAD_TX_CTRL_INDEX]; + pmacro_dq_pad = emc->next_timing->burst_regs[ + EMC_PMACRO_DATA_PAD_TX_CTRL_INDEX]; + pmacro_rfu1 = emc->next_timing->burst_regs[ + EMC_PMACRO_BRICK_CTRL_RFU1_INDEX]; + pmacro_cfg5 = emc->next_timing->burst_regs[ + EMC_FBIO_CFG5_INDEX]; + pmacro_common_tx = emc->next_timing->burst_regs[ + EMC_PMACRO_COMMON_PAD_TX_CTRL_INDEX]; + } + pmacro_cmd_pad |= EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQ_TX_DRVFORCEON; + + if (clk < 1000000 / DVFS_FGCG_MID_SPEED_THRESHOLD) { + ccfifo_writel(emc, pmacro_common_tx & 0xa, + EMC_PMACRO_COMMON_PAD_TX_CTRL, 0); + ccfifo_writel(emc, pmacro_common_tx & 0xf, + EMC_PMACRO_COMMON_PAD_TX_CTRL, + (100000 / clk) + 1); + ramp_up_wait += 100000; + } else { + ccfifo_writel(emc, pmacro_common_tx | 0x8, + EMC_PMACRO_COMMON_PAD_TX_CTRL, 0); + } + + if (clk < 1000000 / DVFS_FGCG_HIGH_SPEED_THRESHOLD) { + if (clk < 1000000 / IOBRICK_DCC_THRESHOLD) { + pmacro_cmd_pad |= + 
EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQSP_TX_E_DCC | + EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQSN_TX_E_DCC; + pmacro_cmd_pad &= + ~(EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQ_TX_E_DCC | + EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_CMD_TX_E_DCC); + ccfifo_writel(emc, pmacro_cmd_pad, + EMC_PMACRO_CMD_PAD_TX_CTRL, + (100000 / clk) + 1); + ramp_up_wait += 100000; + + pmacro_dq_pad |= + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQSP_TX_E_DCC | + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQSN_TX_E_DCC; + pmacro_dq_pad &= + ~(EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQ_TX_E_DCC | + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_CMD_TX_E_DCC); + ccfifo_writel(emc, pmacro_dq_pad, + EMC_PMACRO_DATA_PAD_TX_CTRL, 0); + ccfifo_writel(emc, pmacro_rfu1 & 0xfe40fe40, + EMC_PMACRO_BRICK_CTRL_RFU1, 0); + } else { + ccfifo_writel(emc, pmacro_rfu1 & 0xfe40fe40, + EMC_PMACRO_BRICK_CTRL_RFU1, + (100000 / clk) + 1); + ramp_up_wait += 100000; + } + + ccfifo_writel(emc, pmacro_rfu1 & 0xfeedfeed, + EMC_PMACRO_BRICK_CTRL_RFU1, (100000 / clk) + 1); + ramp_up_wait += 100000; + + if (clk < 1000000 / IOBRICK_DCC_THRESHOLD) { + pmacro_cmd_pad |= + EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQSP_TX_E_DCC | + EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQSN_TX_E_DCC | + EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQ_TX_E_DCC | + EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_CMD_TX_E_DCC; + ccfifo_writel(emc, pmacro_cmd_pad, + EMC_PMACRO_CMD_PAD_TX_CTRL, + (100000 / clk) + 1); + ramp_up_wait += 100000; + + pmacro_dq_pad |= + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQSP_TX_E_DCC | + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQSN_TX_E_DCC | + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQ_TX_E_DCC | + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_CMD_TX_E_DCC; + ccfifo_writel(emc, pmacro_dq_pad, + EMC_PMACRO_DATA_PAD_TX_CTRL, 0); + ccfifo_writel(emc, pmacro_rfu1, + EMC_PMACRO_BRICK_CTRL_RFU1, 0); + } else { + ccfifo_writel(emc, pmacro_rfu1, + EMC_PMACRO_BRICK_CTRL_RFU1, + (100000 / clk) + 1); + ramp_up_wait += 100000; + } + + ccfifo_writel(emc, pmacro_cfg5 & ~EMC_FBIO_CFG5_CMD_TX_DIS, + EMC_FBIO_CFG5, (100000 / clk) + 10); + ramp_up_wait += 100000 + (10 * clk); + } else if (clk < 1000000 / DVFS_FGCG_MID_SPEED_THRESHOLD) { + ccfifo_writel(emc, pmacro_rfu1 | 0x06000600, + EMC_PMACRO_BRICK_CTRL_RFU1, (100000 / clk) + 1); + ccfifo_writel(emc, pmacro_cfg5 & ~EMC_FBIO_CFG5_CMD_TX_DIS, + EMC_FBIO_CFG5, (100000 / clk) + 10); + ramp_up_wait += 100000 + 10 * clk; + } else { + ccfifo_writel(emc, pmacro_rfu1 | 0x00000600, + EMC_PMACRO_BRICK_CTRL_RFU1, 0); + ccfifo_writel(emc, pmacro_cfg5 & ~EMC_FBIO_CFG5_CMD_TX_DIS, + EMC_FBIO_CFG5, 12); + ramp_up_wait += 12 * clk; + } + + pmacro_cmd_pad &= ~EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQ_TX_DRVFORCEON; + ccfifo_writel(emc, pmacro_cmd_pad, EMC_PMACRO_CMD_PAD_TX_CTRL, 5); + + return ramp_up_wait; +} + +u32 tegra210_dvfs_power_ramp_down(struct tegra_emc *emc, u32 clk, + int flip_backward) +{ + u32 ramp_down_wait = 0; + u32 pmacro_cmd_pad; + u32 pmacro_dq_pad; + u32 pmacro_rfu1; + u32 pmacro_cfg5; + u32 pmacro_common_tx; + u32 seq_wait; + + if (flip_backward) { + pmacro_cmd_pad = emc->next_timing->burst_regs[ + EMC_PMACRO_CMD_PAD_TX_CTRL_INDEX]; + pmacro_dq_pad = emc->next_timing->burst_regs[ + EMC_PMACRO_DATA_PAD_TX_CTRL_INDEX]; + pmacro_rfu1 = emc->next_timing->burst_regs[ + EMC_PMACRO_BRICK_CTRL_RFU1_INDEX]; + pmacro_cfg5 = emc->next_timing->burst_regs[ + EMC_FBIO_CFG5_INDEX]; + pmacro_common_tx = emc->next_timing->burst_regs[ + EMC_PMACRO_COMMON_PAD_TX_CTRL_INDEX]; + } else { + pmacro_cmd_pad = emc->current_timing->burst_regs[ + EMC_PMACRO_CMD_PAD_TX_CTRL_INDEX]; + pmacro_dq_pad = emc->current_timing->burst_regs[ + EMC_PMACRO_DATA_PAD_TX_CTRL_INDEX]; + pmacro_rfu1 
= emc->current_timing->burst_regs[ + EMC_PMACRO_BRICK_CTRL_RFU1_INDEX]; + pmacro_cfg5 = emc->current_timing->burst_regs[ + EMC_FBIO_CFG5_INDEX]; + pmacro_common_tx = emc->current_timing->burst_regs[ + EMC_PMACRO_COMMON_PAD_TX_CTRL_INDEX]; + } + + pmacro_cmd_pad |= EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQ_TX_DRVFORCEON; + + ccfifo_writel(emc, pmacro_cmd_pad, EMC_PMACRO_CMD_PAD_TX_CTRL, 0); + ccfifo_writel(emc, pmacro_cfg5 | EMC_FBIO_CFG5_CMD_TX_DIS, + EMC_FBIO_CFG5, 12); + ramp_down_wait = 12 * clk; + + seq_wait = (100000 / clk) + 1; + + if (clk < (1000000 / DVFS_FGCG_HIGH_SPEED_THRESHOLD)) { + if (clk < (1000000 / IOBRICK_DCC_THRESHOLD)) { + pmacro_cmd_pad &= + ~(EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQ_TX_E_DCC | + EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_CMD_TX_E_DCC); + pmacro_cmd_pad |= + EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQSP_TX_E_DCC | + EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQSN_TX_E_DCC; + ccfifo_writel(emc, pmacro_cmd_pad, + EMC_PMACRO_CMD_PAD_TX_CTRL, seq_wait); + ramp_down_wait += 100000; + + pmacro_dq_pad &= + ~(EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQ_TX_E_DCC | + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_CMD_TX_E_DCC); + pmacro_dq_pad |= + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQSP_TX_E_DCC | + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQSN_TX_E_DCC; + ccfifo_writel(emc, pmacro_dq_pad, + EMC_PMACRO_DATA_PAD_TX_CTRL, 0); + ccfifo_writel(emc, pmacro_rfu1 & ~0x01120112, + EMC_PMACRO_BRICK_CTRL_RFU1, 0); + } else { + ccfifo_writel(emc, pmacro_rfu1 & ~0x01120112, + EMC_PMACRO_BRICK_CTRL_RFU1, seq_wait); + ramp_down_wait += 100000; + } + + ccfifo_writel(emc, pmacro_rfu1 & ~0x01bf01bf, + EMC_PMACRO_BRICK_CTRL_RFU1, seq_wait); + ramp_down_wait += 100000; + + if (clk < (1000000 / IOBRICK_DCC_THRESHOLD)) { + pmacro_cmd_pad &= + ~(EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQ_TX_E_DCC | + EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_CMD_TX_E_DCC | + EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQSP_TX_E_DCC | + EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQSN_TX_E_DCC); + ccfifo_writel(emc, pmacro_cmd_pad, + EMC_PMACRO_CMD_PAD_TX_CTRL, seq_wait); + ramp_down_wait += 100000; + + pmacro_dq_pad &= + ~(EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQ_TX_E_DCC | + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_CMD_TX_E_DCC | + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQSP_TX_E_DCC | + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQSN_TX_E_DCC); + ccfifo_writel(emc, pmacro_dq_pad, + EMC_PMACRO_DATA_PAD_TX_CTRL, 0); + ccfifo_writel(emc, pmacro_rfu1 & ~0x07ff07ff, + EMC_PMACRO_BRICK_CTRL_RFU1, 0); + } else { + ccfifo_writel(emc, pmacro_rfu1 & ~0x07ff07ff, + EMC_PMACRO_BRICK_CTRL_RFU1, seq_wait); + ramp_down_wait += 100000; + } + } else { + ccfifo_writel(emc, pmacro_rfu1 & ~0xffff07ff, + EMC_PMACRO_BRICK_CTRL_RFU1, seq_wait + 19); + ramp_down_wait += 100000 + (20 * clk); + } + + if (clk < (1000000 / DVFS_FGCG_MID_SPEED_THRESHOLD)) { + ramp_down_wait += 100000; + ccfifo_writel(emc, pmacro_common_tx & ~0x5, + EMC_PMACRO_COMMON_PAD_TX_CTRL, seq_wait); + ramp_down_wait += 100000; + ccfifo_writel(emc, pmacro_common_tx & ~0xf, + EMC_PMACRO_COMMON_PAD_TX_CTRL, seq_wait); + ramp_down_wait += 100000; + ccfifo_writel(emc, 0, 0, seq_wait); + ramp_down_wait += 100000; + } else { + ccfifo_writel(emc, pmacro_common_tx & ~0xf, + EMC_PMACRO_COMMON_PAD_TX_CTRL, seq_wait); + } + + return ramp_down_wait; +} + +void tegra210_reset_dram_clktree_values(struct emc_table *table) +{ + table->current_dram_clktree[C0D0U0] = + table->trained_dram_clktree[C0D0U0]; + table->current_dram_clktree[C0D0U1] = + table->trained_dram_clktree[C0D0U1]; + table->current_dram_clktree[C1D0U0] = + table->trained_dram_clktree[C1D0U0]; + table->current_dram_clktree[C1D0U1] = + 
table->trained_dram_clktree[C1D0U1]; + table->current_dram_clktree[C1D1U0] = + table->trained_dram_clktree[C1D1U0]; + table->current_dram_clktree[C1D1U1] = + table->trained_dram_clktree[C1D1U1]; +} + +static void update_dll_control(struct tegra_emc *emc, u32 emc_cfg_dig_dll, + int channel_mode, bool updated_state) +{ + emc_writel(emc, emc_cfg_dig_dll, EMC_CFG_DIG_DLL); + emc_timing_update(emc, channel_mode); + + emc_wait_for_update(emc, EMC_CFG_DIG_DLL, EMC_CFG_DIG_DLL_CFG_DLL_EN, + updated_state, REG_EMC); + if (channel_mode == DUAL_CHANNEL) + emc_wait_for_update(emc, EMC_CFG_DIG_DLL, + EMC_CFG_DIG_DLL_CFG_DLL_EN, + updated_state, REG_EMC1); +} + +void tegra210_dll_disable(struct tegra_emc *emc, int channel_mode) +{ + u32 emc_cfg_dig_dll; + + emc_cfg_dig_dll = emc_readl(emc, EMC_CFG_DIG_DLL); + emc_cfg_dig_dll &= ~EMC_CFG_DIG_DLL_CFG_DLL_EN; + + update_dll_control(emc, emc_cfg_dig_dll, channel_mode, false); +} + +void tegra210_dll_enable(struct tegra_emc *emc, int channel_mode) +{ + u32 emc_cfg_dig_dll; + + emc_cfg_dig_dll = emc_readl(emc, EMC_CFG_DIG_DLL); + emc_cfg_dig_dll |= EMC_CFG_DIG_DLL_CFG_DLL_EN; + + update_dll_control(emc, emc_cfg_dig_dll, channel_mode, true); +} + +void emc_set_over_temp_timing(struct tegra_emc *emc, struct emc_table *timing, + unsigned long state) +{ +#define REFRESH_X2 1 +#define REFRESH_X4 2 +#define REFRESH_SPEEDUP(val, speedup) \ + (val = ((val) & 0xFFFF0000) | (((val) & 0xFFFF) >> (speedup))) + + u32 ref = timing->burst_regs[EMC_REFRESH_INDEX]; + u32 pre_ref = timing->burst_regs[EMC_PRE_REFRESH_REQ_CNT_INDEX]; + u32 dsr_cntrl = timing->burst_regs[EMC_DYN_SELF_REF_CONTROL_INDEX]; + + switch (state) { + case TEGRA_DRAM_OVER_TEMP_NONE: + case TEGRA_DRAM_OVER_TEMP_THROTTLE: + break; + case TEGRA_DRAM_OVER_TEMP_REFRESH_X2: + REFRESH_SPEEDUP(ref, REFRESH_X2); + REFRESH_SPEEDUP(pre_ref, REFRESH_X2); + REFRESH_SPEEDUP(dsr_cntrl, REFRESH_X2); + break; + case TEGRA_DRAM_OVER_TEMP_REFRESH_X4: + REFRESH_SPEEDUP(ref, REFRESH_X4); + REFRESH_SPEEDUP(pre_ref, REFRESH_X4); + REFRESH_SPEEDUP(dsr_cntrl, REFRESH_X4); + break; + default: + pr_warn("%s: Failed to set dram over temp state %lu\n", + __func__, state); + return; + } + + emc_writel(emc, ref, reg_off.burst_regs_off[EMC_REFRESH_INDEX]); + emc_writel(emc, pre_ref, + reg_off.burst_regs_off[EMC_PRE_REFRESH_REQ_CNT_INDEX]); + emc_writel(emc, dsr_cntrl, + reg_off.burst_regs_off[EMC_DYN_SELF_REF_CONTROL_INDEX]); +} + static int emc_table_lookup(struct tegra_emc *emc, unsigned long rate) { int i; diff --git a/drivers/memory/tegra/tegra210-emc.h b/drivers/memory/tegra/tegra210-emc.h index 029f8afb2d66..494f1cbdb289 100644 --- a/drivers/memory/tegra/tegra210-emc.h +++ b/drivers/memory/tegra/tegra210-emc.h @@ -12,10 +12,677 @@ #include "mc.h" +#define DVFS_FGCG_HIGH_SPEED_THRESHOLD 1000 +#define IOBRICK_DCC_THRESHOLD 2400 +#define DVFS_FGCG_MID_SPEED_THRESHOLD 600 + +#define EMC_STATUS_UPDATE_TIMEOUT 1000 + +#define MC_EMEM_ADR_CFG 0x54 +#define MC_EMEM_ARB_CFG 0x90 +#define MC_EMEM_ARB_OUTSTANDING_REQ 0x94 +#define MC_EMEM_ARB_TIMING_RCD 0x98 +#define MC_EMEM_ARB_TIMING_RP 0x9c +#define MC_EMEM_ARB_TIMING_RC 0xa0 +#define MC_EMEM_ARB_TIMING_RAS 0xa4 +#define MC_EMEM_ARB_TIMING_FAW 0xa8 +#define MC_EMEM_ARB_TIMING_RRD 0xac +#define MC_EMEM_ARB_TIMING_RAP2PRE 0xb0 +#define MC_EMEM_ARB_TIMING_WAP2PRE 0xb4 +#define MC_EMEM_ARB_TIMING_R2R 0xb8 +#define MC_EMEM_ARB_TIMING_W2W 0xbc +#define MC_EMEM_ARB_TIMING_R2W 0xc0 +#define MC_EMEM_ARB_TIMING_W2R 0xc4 +#define MC_EMEM_ARB_MISC2 0xc8 +#define MC_EMEM_ARB_DA_TURNS 0xd0 +#define 
MC_EMEM_ARB_DA_COVERS 0xd4 +#define MC_EMEM_ARB_MISC0 0xd8 +#define MC_EMEM_ARB_MISC1 0xdc +#define MC_EMEM_ARB_RING1_THROTTLE 0xe0 +#define MC_LATENCY_ALLOWANCE_AVPC_0 0x2e4 +#define MC_LATENCY_ALLOWANCE_HC_0 0x310 +#define MC_LATENCY_ALLOWANCE_HC_1 0x314 +#define MC_LATENCY_ALLOWANCE_MPCORE_0 0x320 +#define MC_LATENCY_ALLOWANCE_NVENC_0 0x328 +#define MC_LATENCY_ALLOWANCE_PPCS_0 0x344 +#define MC_LATENCY_ALLOWANCE_PPCS_1 0x348 +#define MC_LATENCY_ALLOWANCE_ISP2_0 0x370 +#define MC_LATENCY_ALLOWANCE_ISP2_1 0x374 +#define MC_LATENCY_ALLOWANCE_XUSB_0 0x37c +#define MC_LATENCY_ALLOWANCE_XUSB_1 0x380 +#define MC_LATENCY_ALLOWANCE_TSEC_0 0x390 +#define MC_LATENCY_ALLOWANCE_VIC_0 0x394 +#define MC_LATENCY_ALLOWANCE_VI2_0 0x398 +#define MC_LATENCY_ALLOWANCE_GPU_0 0x3ac +#define MC_LATENCY_ALLOWANCE_SDMMCA_0 0x3b8 +#define MC_LATENCY_ALLOWANCE_SDMMCAA_0 0x3bc +#define MC_LATENCY_ALLOWANCE_SDMMC_0 0x3c0 +#define MC_LATENCY_ALLOWANCE_SDMMCAB_0 0x3c4 +#define MC_LATENCY_ALLOWANCE_GPU2_0 0x3e8 +#define MC_LATENCY_ALLOWANCE_NVDEC_0 0x3d8 +#define MC_MLL_MPCORER_PTSA_RATE 0x44c +#define MC_FTOP_PTSA_RATE 0x50c +#define MC_EMEM_ARB_TIMING_RFCPB 0x6c0 +#define MC_EMEM_ARB_TIMING_CCDMW 0x6c4 +#define MC_EMEM_ARB_REFPB_HP_CTRL 0x6f0 +#define MC_EMEM_ARB_REFPB_BANK_CTRL 0x6f4 +#define MC_PTSA_GRANT_DECREMENT 0x960 +#define MC_EMEM_ARB_DHYST_CTRL 0xbcc +#define MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_0 0xbd0 +#define MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_1 0xbd4 +#define MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_2 0xbd8 +#define MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_3 0xbdc +#define MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_4 0xbe0 +#define MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_5 0xbe4 +#define MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_6 0xbe8 +#define MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_7 0xbec + +#define EMC_INTSTATUS 0x0 +#define EMC_INTSTATUS_CLKCHANGE_COMPLETE BIT(4) +#define EMC_DBG 0x8 +#define EMC_DBG_WRITE_MUX_ACTIVE BIT(1) +#define EMC_CFG 0xc +#define EMC_TIMING_CONTROL 0x28 +#define EMC_RC 0x2c +#define EMC_RFC 0x30 +#define EMC_RAS 0x34 +#define EMC_RP 0x38 +#define EMC_R2W 0x3c +#define EMC_W2R 0x40 +#define EMC_R2P 0x44 +#define EMC_W2P 0x48 +#define EMC_RD_RCD 0x4c +#define EMC_WR_RCD 0x50 +#define EMC_RRD 0x54 +#define EMC_REXT 0x58 +#define EMC_WDV 0x5c +#define EMC_QUSE 0x60 +#define EMC_QRST 0x64 +#define EMC_QSAFE 0x68 +#define EMC_RDV 0x6c +#define EMC_REFRESH 0x70 +#define EMC_BURST_REFRESH_NUM 0x74 +#define EMC_PDEX2WR 0x78 +#define EMC_PDEX2RD 0x7c +#define EMC_PCHG2PDEN 0x80 +#define EMC_ACT2PDEN 0x84 +#define EMC_AR2PDEN 0x88 +#define EMC_RW2PDEN 0x8c +#define EMC_TXSR 0x90 +#define EMC_TCKE 0x94 +#define EMC_TFAW 0x98 +#define EMC_TRPAB 0x9c +#define EMC_TCLKSTABLE 0xa0 +#define EMC_TCLKSTOP 0xa4 +#define EMC_TREFBW 0xa8 +#define EMC_TPPD 0xac +#define EMC_ODT_WRITE 0xb0 +#define EMC_PDEX2MRR 0xb4 +#define EMC_WEXT 0xb8 +#define EMC_RFC_SLR 0xc0 +#define EMC_MRS_WAIT_CNT2 0xc4 +#define EMC_MRS_WAIT_CNT 0xc8 +#define EMC_FBIO_SPARE 0x100 +#define EMC_FBIO_CFG5 0x104 +#define EMC_FBIO_CFG5_DRAM_TYPE_SHIFT 0 +#define EMC_FBIO_CFG5_DRAM_TYPE_MASK \ + (0x3 << EMC_FBIO_CFG5_DRAM_TYPE_SHIFT) +#define EMC_FBIO_CFG5_CMD_TX_DIS BIT(8) + +#define EMC_PDEX2CKE 0x118 +#define EMC_CKE2PDEN 0x11c +#define EMC_MPC 0x128 +#define EMC_R2R 0x144 +#define EMC_EINPUT 0x14c +#define EMC_EINPUT_DURATION 0x150 +#define EMC_PUTERM_EXTRA 0x154 +#define EMC_TCKESR 0x158 +#define EMC_TPD 0x15c +#define EMC_EMC_STATUS 0x2b4 +#define EMC_EMC_STATUS_TIMING_UPDATE_STALLED BIT(23) +#define EMC_CFG_DIG_DLL 0x2bc +#define EMC_CFG_DIG_DLL_CFG_DLL_EN BIT(0) +#define 
EMC_CFG_DIG_DLL_CFG_DLL_STALL_ALL_UNTIL_LOCK BIT(1) +#define EMC_CFG_DIG_DLL_CFG_DLL_STALL_ALL_TRAFFIC BIT(3) +#define EMC_CFG_DIG_DLL_CFG_DLL_STALL_RW_UNTIL_LOCK BIT(4) +#define EMC_CFG_DIG_DLL_CFG_DLL_MODE_SHIFT 6 +#define EMC_CFG_DIG_DLL_CFG_DLL_MODE_MASK \ + (0x3 << EMC_CFG_DIG_DLL_CFG_DLL_MODE_SHIFT) +#define EMC_CFG_DIG_DLL_CFG_DLL_LOCK_LIMIT_SHIFT 8 +#define EMC_CFG_DIG_DLL_CFG_DLL_LOCK_LIMIT_MASK \ + (0x7 << EMC_CFG_DIG_DLL_CFG_DLL_LOCK_LIMIT_SHIFT) + +#define EMC_CFG_DIG_DLL_PERIOD 0x2c0 +#define EMC_DIG_DLL_STATUS 0x2c4 +#define EMC_DIG_DLL_STATUS_DLL_LOCK BIT(15) +#define EMC_DIG_DLL_STATUS_DLL_PRIV_UPDATED BIT(17) +#define EMC_DIG_DLL_STATUS_DLL_OUT_SHIFT 0 +#define EMC_DIG_DLL_STATUS_DLL_OUT_MASK \ + (0x7ff << EMC_DIG_DLL_STATUS_DLL_OUT_SHIFT) + +#define EMC_CFG_DIG_DLL_1 0x2c8 +#define EMC_RDV_MASK 0x2cc +#define EMC_WDV_MASK 0x2d0 +#define EMC_RDV_EARLY_MASK 0x2d4 +#define EMC_RDV_EARLY 0x2d8 +#define EMC_ZCAL_INTERVAL 0x2e0 +#define EMC_ZCAL_WAIT_CNT 0x2e4 +#define EMC_FDPD_CTRL_DQ 0x310 +#define EMC_FDPD_CTRL_CMD 0x314 +#define EMC_PMACRO_CMD_BRICK_CTRL_FDPD 0x318 +#define EMC_PMACRO_DATA_BRICK_CTRL_FDPD 0x31c +#define EMC_PMACRO_BRICK_CTRL_RFU1 0x330 +#define EMC_PMACRO_BRICK_CTRL_RFU2 0x334 +#define EMC_TR_TIMING_0 0x3b4 +#define EMC_TR_CTRL_1 0x3bc +#define EMC_TR_RDV 0x3c4 +#define EMC_PRE_REFRESH_REQ_CNT 0x3dc +#define EMC_DYN_SELF_REF_CONTROL 0x3e0 +#define EMC_TXSRDLL 0x3e4 +#define EMC_CCFIFO_ADDR 0x3e8 +#define EMC_CCFIFO_DATA 0x3ec +#define EMC_TR_QPOP 0x3f4 +#define EMC_TR_RDV_MASK 0x3f8 +#define EMC_TR_QSAFE 0x3fc +#define EMC_TR_QRST 0x400 +#define EMC_TR_DVFS 0x460 +#define EMC_AUTO_CAL_CHANNEL 0x464 +#define EMC_IBDLY 0x468 +#define EMC_OBDLY 0x46c +#define EMC_TXDSRVTTGEN 0x480 +#define EMC_WE_DURATION 0x48c +#define EMC_WS_DURATION 0x490 +#define EMC_WEV 0x494 +#define EMC_WSV 0x498 +#define EMC_CFG_3 0x49c +#define EMC_MRW6 0x4a4 +#define EMC_MRW7 0x4a8 +#define EMC_MRW8 0x4ac +#define EMC_MRW10 0x4b4 +#define EMC_MRW11 0x4b8 +#define EMC_MRW12 0x4bc +#define EMC_MRW13 0x4c0 +#define EMC_MRW14 0x4c4 +#define EMC_MRW15 0x4d0 +#define EMC_WDV_CHK 0x4e0 +#define EMC_CFG_PIPE_2 0x554 +#define EMC_CFG_PIPE_1 0x55c +#define EMC_CFG_PIPE 0x560 +#define EMC_QPOP 0x564 +#define EMC_QUSE_WIDTH 0x568 +#define EMC_PUTERM_WIDTH 0x56c +#define EMC_REFCTRL2 0x580 +#define EMC_FBIO_CFG7 0x584 +#define EMC_FBIO_CFG7_CH0_ENABLE BIT(1) +#define EMC_FBIO_CFG7_CH1_ENABLE BIT(2) +#define EMC_DATA_BRLSHFT_0 0x588 +#define EMC_DATA_BRLSHFT_0_RANK0_BYTE7_DATA_BRLSHFT_SHIFT 21 +#define EMC_DATA_BRLSHFT_0_RANK0_BYTE7_DATA_BRLSHFT_MASK \ + (0x7 << EMC_DATA_BRLSHFT_0_RANK0_BYTE7_DATA_BRLSHFT_SHIFT) +#define EMC_DATA_BRLSHFT_0_RANK0_BYTE6_DATA_BRLSHFT_SHIFT 18 +#define EMC_DATA_BRLSHFT_0_RANK0_BYTE6_DATA_BRLSHFT_MASK \ + (0x7 << EMC_DATA_BRLSHFT_0_RANK0_BYTE6_DATA_BRLSHFT_SHIFT) +#define EMC_DATA_BRLSHFT_0_RANK0_BYTE5_DATA_BRLSHFT_SHIFT 15 +#define EMC_DATA_BRLSHFT_0_RANK0_BYTE5_DATA_BRLSHFT_MASK \ + (0x7 << EMC_DATA_BRLSHFT_0_RANK0_BYTE5_DATA_BRLSHFT_SHIFT) +#define EMC_DATA_BRLSHFT_0_RANK0_BYTE4_DATA_BRLSHFT_SHIFT 12 +#define EMC_DATA_BRLSHFT_0_RANK0_BYTE4_DATA_BRLSHFT_MASK \ + (0x7 << EMC_DATA_BRLSHFT_0_RANK0_BYTE4_DATA_BRLSHFT_SHIFT) +#define EMC_DATA_BRLSHFT_0_RANK0_BYTE3_DATA_BRLSHFT_SHIFT 9 +#define EMC_DATA_BRLSHFT_0_RANK0_BYTE3_DATA_BRLSHFT_MASK \ + (0x7 << EMC_DATA_BRLSHFT_0_RANK0_BYTE3_DATA_BRLSHFT_SHIFT) +#define EMC_DATA_BRLSHFT_0_RANK0_BYTE2_DATA_BRLSHFT_SHIFT 6 +#define EMC_DATA_BRLSHFT_0_RANK0_BYTE2_DATA_BRLSHFT_MASK \ + (0x7 << 
EMC_DATA_BRLSHFT_0_RANK0_BYTE2_DATA_BRLSHFT_SHIFT) +#define EMC_DATA_BRLSHFT_0_RANK0_BYTE1_DATA_BRLSHFT_SHIFT 3 +#define EMC_DATA_BRLSHFT_0_RANK0_BYTE1_DATA_BRLSHFT_MASK \ + (0x7 << EMC_DATA_BRLSHFT_0_RANK0_BYTE1_DATA_BRLSHFT_SHIFT) +#define EMC_DATA_BRLSHFT_0_RANK0_BYTE0_DATA_BRLSHFT_SHIFT 0 +#define EMC_DATA_BRLSHFT_0_RANK0_BYTE0_DATA_BRLSHFT_MASK \ + (0x7 << EMC_DATA_BRLSHFT_0_RANK0_BYTE0_DATA_BRLSHFT_SHIFT) + +#define EMC_DATA_BRLSHFT_1 0x58c +#define EMC_DATA_BRLSHFT_1_RANK1_BYTE7_DATA_BRLSHFT_SHIFT 21 +#define EMC_DATA_BRLSHFT_1_RANK1_BYTE7_DATA_BRLSHFT_MASK \ + (0x7 << EMC_DATA_BRLSHFT_1_RANK1_BYTE7_DATA_BRLSHFT_SHIFT) +#define EMC_DATA_BRLSHFT_1_RANK1_BYTE6_DATA_BRLSHFT_SHIFT 18 +#define EMC_DATA_BRLSHFT_1_RANK1_BYTE6_DATA_BRLSHFT_MASK \ + (0x7 << EMC_DATA_BRLSHFT_1_RANK1_BYTE6_DATA_BRLSHFT_SHIFT) +#define EMC_DATA_BRLSHFT_1_RANK1_BYTE5_DATA_BRLSHFT_SHIFT 15 +#define EMC_DATA_BRLSHFT_1_RANK1_BYTE5_DATA_BRLSHFT_MASK \ + (0x7 << EMC_DATA_BRLSHFT_1_RANK1_BYTE5_DATA_BRLSHFT_SHIFT) +#define EMC_DATA_BRLSHFT_1_RANK1_BYTE4_DATA_BRLSHFT_SHIFT 12 +#define EMC_DATA_BRLSHFT_1_RANK1_BYTE4_DATA_BRLSHFT_MASK \ + (0x7 << EMC_DATA_BRLSHFT_1_RANK1_BYTE4_DATA_BRLSHFT_SHIFT) +#define EMC_DATA_BRLSHFT_1_RANK1_BYTE3_DATA_BRLSHFT_SHIFT 9 +#define EMC_DATA_BRLSHFT_1_RANK1_BYTE3_DATA_BRLSHFT_MASK \ + (0x7 << EMC_DATA_BRLSHFT_1_RANK1_BYTE3_DATA_BRLSHFT_SHIFT) +#define EMC_DATA_BRLSHFT_1_RANK1_BYTE2_DATA_BRLSHFT_SHIFT 6 +#define EMC_DATA_BRLSHFT_1_RANK1_BYTE2_DATA_BRLSHFT_MASK \ + (0x7 << EMC_DATA_BRLSHFT_1_RANK1_BYTE2_DATA_BRLSHFT_SHIFT) +#define EMC_DATA_BRLSHFT_1_RANK1_BYTE1_DATA_BRLSHFT_SHIFT 3 +#define EMC_DATA_BRLSHFT_1_RANK1_BYTE1_DATA_BRLSHFT_MASK \ + (0x7 << EMC_DATA_BRLSHFT_1_RANK1_BYTE1_DATA_BRLSHFT_SHIFT) +#define EMC_DATA_BRLSHFT_1_RANK1_BYTE0_DATA_BRLSHFT_SHIFT 0 +#define EMC_DATA_BRLSHFT_1_RANK1_BYTE0_DATA_BRLSHFT_MASK \ + (0x7 << EMC_DATA_BRLSHFT_1_RANK1_BYTE0_DATA_BRLSHFT_SHIFT) + +#define EMC_RFCPB 0x590 +#define EMC_DQS_BRLSHFT_0 0x594 +#define EMC_DQS_BRLSHFT_1 0x598 +#define EMC_CMD_BRLSHFT_0 0x59c +#define EMC_CMD_BRLSHFT_1 0x5a0 +#define EMC_CMD_BRLSHFT_2 0x5a4 +#define EMC_CMD_BRLSHFT_3 0x5a8 +#define EMC_QUSE_BRLSHFT_0 0x5ac +#define EMC_QUSE_BRLSHFT_1 0x5b8 +#define EMC_QUSE_BRLSHFT_2 0x5bc +#define EMC_CCDMW 0x5c0 +#define EMC_QUSE_BRLSHFT_3 0x5c4 +#define EMC_DLL_CFG_0 0x5e4 +#define EMC_DLL_CFG_1 0x5e8 +#define EMC_DLL_CFG_1_DDLLCAL_CTRL_START_TRIM_SHIFT 10 +#define EMC_DLL_CFG_1_DDLLCAL_CTRL_START_TRIM_MASK \ + (0x7ff << EMC_DLL_CFG_1_DDLLCAL_CTRL_START_TRIM_SHIFT) + +#define EMC_CONFIG_SAMPLE_DELAY 0x5f0 +#define EMC_PMACRO_QUSE_DDLL_RANK0_0 0x600 +#define EMC_PMACRO_QUSE_DDLL_RANK0_1 0x604 +#define EMC_PMACRO_QUSE_DDLL_RANK0_2 0x608 +#define EMC_PMACRO_QUSE_DDLL_RANK0_3 0x60c +#define EMC_PMACRO_QUSE_DDLL_RANK0_4 0x610 +#define EMC_PMACRO_QUSE_DDLL_RANK0_5 0x614 +#define EMC_PMACRO_QUSE_DDLL_RANK1_0 0x620 +#define EMC_PMACRO_QUSE_DDLL_RANK1_1 0x624 +#define EMC_PMACRO_QUSE_DDLL_RANK1_2 0x628 +#define EMC_PMACRO_QUSE_DDLL_RANK1_3 0x62c +#define EMC_PMACRO_QUSE_DDLL_RANK1_4 0x630 +#define EMC_PMACRO_QUSE_DDLL_RANK1_5 0x634 +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0 0x640 +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0_OB_DDLL_LONG_DQ_RANK0_BYTE1_SHIFT \ + 16 +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0_OB_DDLL_LONG_DQ_RANK0_BYTE1_MASK \ + (0x3ff << \ + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0_OB_DDLL_LONG_DQ_RANK0_BYTE1_SHIFT) +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0_OB_DDLL_LONG_DQ_RANK0_BYTE0_SHIFT \ + 0 +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0_OB_DDLL_LONG_DQ_RANK0_BYTE0_MASK \ 
+ (0x3ff << \ + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0_OB_DDLL_LONG_DQ_RANK0_BYTE0_SHIFT) + +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1 0x644 +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1_OB_DDLL_LONG_DQ_RANK0_BYTE3_SHIFT \ + 16 +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1_OB_DDLL_LONG_DQ_RANK0_BYTE3_MASK \ + (0x3ff << \ + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1_OB_DDLL_LONG_DQ_RANK0_BYTE3_SHIFT) +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1_OB_DDLL_LONG_DQ_RANK0_BYTE2_SHIFT \ + 0 +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1_OB_DDLL_LONG_DQ_RANK0_BYTE2_MASK \ + (0x3ff << \ + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1_OB_DDLL_LONG_DQ_RANK0_BYTE2_SHIFT) + +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2 0x648 +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2_OB_DDLL_LONG_DQ_RANK0_BYTE5_SHIFT \ + 16 +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2_OB_DDLL_LONG_DQ_RANK0_BYTE5_MASK \ + (0x3ff << \ + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2_OB_DDLL_LONG_DQ_RANK0_BYTE5_SHIFT) +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2_OB_DDLL_LONG_DQ_RANK0_BYTE4_SHIFT \ + 0 +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2_OB_DDLL_LONG_DQ_RANK0_BYTE4_MASK \ + (0x3ff << \ + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2_OB_DDLL_LONG_DQ_RANK0_BYTE4_SHIFT) + +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3 0x64c +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3_OB_DDLL_LONG_DQ_RANK0_BYTE7_SHIFT \ + 16 +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3_OB_DDLL_LONG_DQ_RANK0_BYTE7_MASK \ + (0x3ff << \ + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3_OB_DDLL_LONG_DQ_RANK0_BYTE7_SHIFT) +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3_OB_DDLL_LONG_DQ_RANK0_BYTE6_SHIFT \ + 0 +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3_OB_DDLL_LONG_DQ_RANK0_BYTE6_MASK \ + (0x3ff << \ + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3_OB_DDLL_LONG_DQ_RANK0_BYTE6_SHIFT) + +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_4 0x650 +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_5 0x654 +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0 0x660 +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0_OB_DDLL_LONG_DQ_RANK1_BYTE1_SHIFT \ + 16 +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0_OB_DDLL_LONG_DQ_RANK1_BYTE1_MASK \ + (0x3ff << \ + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0_OB_DDLL_LONG_DQ_RANK1_BYTE1_SHIFT) +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0_OB_DDLL_LONG_DQ_RANK1_BYTE0_SHIFT \ + 0 +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0_OB_DDLL_LONG_DQ_RANK1_BYTE0_MASK \ + (0x3ff << \ + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0_OB_DDLL_LONG_DQ_RANK1_BYTE0_SHIFT) + +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1 0x664 +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1_OB_DDLL_LONG_DQ_RANK1_BYTE3_SHIFT \ + 16 +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1_OB_DDLL_LONG_DQ_RANK1_BYTE3_MASK \ + (0x3ff << \ + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1_OB_DDLL_LONG_DQ_RANK1_BYTE3_SHIFT) +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1_OB_DDLL_LONG_DQ_RANK1_BYTE2_SHIFT \ + 0 +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1_OB_DDLL_LONG_DQ_RANK1_BYTE2_MASK \ + (0x3ff << \ + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1_OB_DDLL_LONG_DQ_RANK1_BYTE2_SHIFT) + +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2 0x668 +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2_OB_DDLL_LONG_DQ_RANK1_BYTE5_SHIFT \ + 16 +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2_OB_DDLL_LONG_DQ_RANK1_BYTE5_MASK \ + (0x3ff << \ + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2_OB_DDLL_LONG_DQ_RANK1_BYTE5_SHIFT) +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2_OB_DDLL_LONG_DQ_RANK1_BYTE4_SHIFT \ + 0 +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2_OB_DDLL_LONG_DQ_RANK1_BYTE4_MASK \ + (0x3ff << \ + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2_OB_DDLL_LONG_DQ_RANK1_BYTE4_SHIFT) + +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3 0x66c +#define 
EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3_OB_DDLL_LONG_DQ_RANK1_BYTE7_SHIFT \ + 16 +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3_OB_DDLL_LONG_DQ_RANK1_BYTE7_MASK \ + (0x3ff << \ + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3_OB_DDLL_LONG_DQ_RANK1_BYTE7_SHIFT) +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3_OB_DDLL_LONG_DQ_RANK1_BYTE6_SHIFT \ + 0 +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3_OB_DDLL_LONG_DQ_RANK1_BYTE6_MASK \ + (0x3ff << \ + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3_OB_DDLL_LONG_DQ_RANK1_BYTE6_SHIFT) + +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_4 0x670 +#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_5 0x674 +#define EMC_PMACRO_OB_DDLL_LONG_DQS_RANK0_0 0x680 +#define EMC_PMACRO_OB_DDLL_LONG_DQS_RANK0_1 0x684 +#define EMC_PMACRO_OB_DDLL_LONG_DQS_RANK0_2 0x688 +#define EMC_PMACRO_OB_DDLL_LONG_DQS_RANK0_3 0x68c +#define EMC_PMACRO_OB_DDLL_LONG_DQS_RANK0_4 0x690 +#define EMC_PMACRO_OB_DDLL_LONG_DQS_RANK0_5 0x694 +#define EMC_PMACRO_OB_DDLL_LONG_DQS_RANK1_0 0x6a0 +#define EMC_PMACRO_OB_DDLL_LONG_DQS_RANK1_1 0x6a4 +#define EMC_PMACRO_OB_DDLL_LONG_DQS_RANK1_2 0x6a8 +#define EMC_PMACRO_OB_DDLL_LONG_DQS_RANK1_3 0x6ac +#define EMC_PMACRO_OB_DDLL_LONG_DQS_RANK1_4 0x6b0 +#define EMC_PMACRO_OB_DDLL_LONG_DQS_RANK1_5 0x6b4 +#define EMC_PMACRO_IB_DDLL_LONG_DQS_RANK0_0 0x6c0 +#define EMC_PMACRO_IB_DDLL_LONG_DQS_RANK0_1 0x6c4 +#define EMC_PMACRO_IB_DDLL_LONG_DQS_RANK0_2 0x6c8 +#define EMC_PMACRO_IB_DDLL_LONG_DQS_RANK0_3 0x6cc +#define EMC_PMACRO_IB_DDLL_LONG_DQS_RANK1_0 0x6e0 +#define EMC_PMACRO_IB_DDLL_LONG_DQS_RANK1_1 0x6e4 +#define EMC_PMACRO_IB_DDLL_LONG_DQS_RANK1_2 0x6e8 +#define EMC_PMACRO_IB_DDLL_LONG_DQS_RANK1_3 0x6ec +#define EMC_PMACRO_TX_PWRD_0 0x720 +#define EMC_PMACRO_TX_PWRD_1 0x724 +#define EMC_PMACRO_TX_PWRD_2 0x728 +#define EMC_PMACRO_TX_PWRD_3 0x72c +#define EMC_PMACRO_TX_PWRD_4 0x730 +#define EMC_PMACRO_TX_PWRD_5 0x734 +#define EMC_PMACRO_TX_SEL_CLK_SRC_0 0x740 +#define EMC_PMACRO_TX_SEL_CLK_SRC_1 0x744 +#define EMC_PMACRO_TX_SEL_CLK_SRC_3 0x74c +#define EMC_PMACRO_TX_SEL_CLK_SRC_2 0x748 +#define EMC_PMACRO_TX_SEL_CLK_SRC_4 0x750 +#define EMC_PMACRO_TX_SEL_CLK_SRC_5 0x754 +#define EMC_PMACRO_DDLL_BYPASS 0x760 +#define EMC_PMACRO_DDLL_PWRD_0 0x770 +#define EMC_PMACRO_DDLL_PWRD_1 0x774 +#define EMC_PMACRO_DDLL_PWRD_2 0x778 +#define EMC_PMACRO_CMD_CTRL_0 0x780 +#define EMC_PMACRO_CMD_CTRL_1 0x784 +#define EMC_PMACRO_CMD_CTRL_2 0x788 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE0_0 0x800 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE0_1 0x804 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE0_2 0x808 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE0_3 0x80c +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE1_0 0x810 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE1_1 0x814 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE1_2 0x818 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE1_3 0x81c +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE2_0 0x820 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE2_1 0x824 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE2_2 0x828 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE2_3 0x82c +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE3_0 0x830 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE3_1 0x834 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE3_2 0x838 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE3_3 0x83c +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE4_0 0x840 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE4_1 0x844 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE4_2 0x848 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE4_3 0x84c +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE5_0 0x850 +#define 
EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE5_1 0x854 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE5_2 0x858 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE5_3 0x85c +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE6_0 0x860 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE6_1 0x864 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE6_2 0x868 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE6_3 0x86c +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE7_0 0x870 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE7_1 0x874 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE7_2 0x878 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE7_3 0x87c +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD0_0 0x880 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD0_1 0x884 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD0_2 0x888 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD0_3 0x88c +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD1_0 0x890 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD1_1 0x894 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD1_2 0x898 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD1_3 0x89c +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD2_0 0x8a0 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD2_1 0x8a4 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD2_2 0x8a8 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD2_3 0x8ac +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD3_0 0x8b0 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD3_1 0x8b4 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD3_2 0x8b8 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD3_3 0x8bc +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE0_0 0x900 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE0_1 0x904 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE0_2 0x908 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE0_3 0x90c +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE1_0 0x910 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE1_1 0x914 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE1_2 0x918 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE1_3 0x91c +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE2_0 0x920 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE2_1 0x924 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE2_2 0x928 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE2_3 0x92c +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE3_0 0x930 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE3_1 0x934 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE3_2 0x938 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE3_3 0x93c +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE4_0 0x940 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE4_1 0x944 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE4_2 0x948 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE4_3 0x94c +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE5_0 0x950 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE5_1 0x954 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE5_2 0x958 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE5_3 0x95c +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE6_0 0x960 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE6_1 0x964 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE6_2 0x968 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE6_3 0x96c +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE7_0 0x970 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE7_1 0x974 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE7_2 0x978 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE7_3 0x97c +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD0_0 0x980 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD0_1 0x984 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD0_2 0x988 +#define 
EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD0_3 0x98c +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD1_0 0x990 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD1_1 0x994 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD1_2 0x998 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD1_3 0x99c +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD2_0 0x9a0 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD2_1 0x9a4 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD2_2 0x9a8 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD2_3 0x9ac +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD3_0 0x9b0 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD3_1 0x9b4 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD3_2 0x9b8 +#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD3_3 0x9bc +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE0_0 0xa00 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE0_1 0xa04 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE0_2 0xa08 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE1_0 0xa10 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE1_1 0xa14 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE1_2 0xa18 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE2_0 0xa20 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE2_1 0xa24 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE2_2 0xa28 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE3_0 0xa30 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE3_1 0xa34 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE3_2 0xa38 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE4_0 0xa40 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE4_1 0xa44 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE4_2 0xa48 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE5_0 0xa50 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE5_1 0xa54 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE5_2 0xa58 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE6_0 0xa60 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE6_1 0xa64 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE6_2 0xa68 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE7_0 0xa70 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE7_1 0xa74 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE7_2 0xa78 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE0_0 0xb00 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE0_1 0xb04 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE0_2 0xb08 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE1_0 0xb10 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE1_1 0xb14 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE1_2 0xb18 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE2_0 0xb20 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE2_1 0xb24 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE2_2 0xb28 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE3_0 0xb30 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE3_1 0xb34 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE3_2 0xb38 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE4_0 0xb40 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE4_1 0xb44 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE4_2 0xb48 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE5_0 0xb50 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE5_1 0xb54 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE5_2 0xb58 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE6_0 0xb60 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE6_1 0xb64 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE6_2 0xb68 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE7_0 0xb70 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE7_1 0xb74 +#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE7_2 0xb78 +#define EMC_PMACRO_IB_VREF_DQ_0 0xbe0 +#define EMC_PMACRO_IB_VREF_DQ_1 0xbe4 +#define 
EMC_PMACRO_IB_VREF_DQS_0 0xbf0 +#define EMC_PMACRO_IB_VREF_DQS_1 0xbf4 +#define EMC_PMACRO_DDLL_LONG_CMD_0 0xc00 +#define EMC_PMACRO_DDLL_LONG_CMD_1 0xc04 +#define EMC_PMACRO_DDLL_LONG_CMD_2 0xc08 +#define EMC_PMACRO_DDLL_LONG_CMD_3 0xc0c +#define EMC_PMACRO_DDLL_LONG_CMD_4 0xc10 +#define EMC_PMACRO_DDLL_LONG_CMD_5 0xc14 +#define EMC_PMACRO_DDLL_SHORT_CMD_0 0xc20 +#define EMC_PMACRO_DDLL_SHORT_CMD_1 0xc24 +#define EMC_PMACRO_DDLL_SHORT_CMD_2 0xc28 +#define EMC_PMACRO_VTTGEN_CTRL_0 0xc34 +#define EMC_PMACRO_VTTGEN_CTRL_1 0xc38 +#define EMC_PMACRO_BG_BIAS_CTRL_0 0xc3c +#define EMC_PMACRO_PAD_CFG_CTRL 0xc40 +#define EMC_PMACRO_ZCTRL 0xc44 +#define EMC_PMACRO_CMD_PAD_RX_CTRL 0xc50 +#define EMC_PMACRO_DATA_PAD_RX_CTRL 0xc54 +#define EMC_PMACRO_CMD_RX_TERM_MODE 0xc58 +#define EMC_PMACRO_DATA_RX_TERM_MODE 0xc5c +#define EMC_PMACRO_CMD_PAD_TX_CTRL 0xc60 +#define EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQ_TX_E_DCC BIT(1) +#define EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQSP_TX_E_DCC BIT(9) +#define EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQSN_TX_E_DCC BIT(16) +#define EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_CMD_TX_E_DCC BIT(24) +#define EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQ_TX_DRVFORCEON BIT(26) + +#define EMC_PMACRO_DATA_PAD_TX_CTRL 0xc64 +#define EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQ_TX_E_DCC BIT(1) +#define EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQSP_TX_E_DCC BIT(9) +#define EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQSN_TX_E_DCC BIT(16) +#define EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_CMD_TX_E_DCC BIT(24) + +#define EMC_PMACRO_COMMON_PAD_TX_CTRL 0xc68 +#define EMC_PMACRO_AUTOCAL_CFG_COMMON 0xc78 +#define EMC_PMACRO_VTTGEN_CTRL_2 0xcf0 +#define EMC_PMACRO_IB_RXRT 0xcf4 +#define EMC_TRAINING_CTRL 0xe04 +#define EMC_TRAINING_QUSE_CORS_CTRL 0xe0c +#define EMC_TRAINING_QUSE_FINE_CTRL 0xe10 +#define EMC_TRAINING_QUSE_CTRL_MISC 0xe14 +#define EMC_TRAINING_WRITE_FINE_CTRL 0xe18 +#define EMC_TRAINING_WRITE_CTRL_MISC 0xe1c +#define EMC_TRAINING_WRITE_VREF_CTRL 0xe20 +#define EMC_TRAINING_READ_FINE_CTRL 0xe24 +#define EMC_TRAINING_READ_CTRL_MISC 0xe28 +#define EMC_TRAINING_READ_VREF_CTRL 0xe2c +#define EMC_TRAINING_CA_FINE_CTRL 0xe30 +#define EMC_TRAINING_CA_CTRL_MISC 0xe34 +#define EMC_TRAINING_CA_CTRL_MISC1 0xe38 +#define EMC_TRAINING_CA_VREF_CTRL 0xe3c +#define EMC_TRAINING_SETTLE 0xe44 +#define EMC_TRAINING_MPC 0xe5c +#define EMC_TRAINING_VREF_SETTLE 0xe6c +#define EMC_TRAINING_QUSE_VREF_CTRL 0xed0 +#define EMC_TRAINING_OPT_DQS_IB_VREF_RANK0 0xed4 +#define EMC_TRAINING_OPT_DQS_IB_VREF_RANK1 0xed8 + +#define EMC_COPY_TABLE_PARAM_PERIODIC_FIELDS BIT(0) +#define EMC_COPY_TABLE_PARAM_TRIM_REGS BIT(1) + +enum burst_regs_list { + EMC_REFRESH_INDEX = 41, + EMC_PRE_REFRESH_REQ_CNT_INDEX = 43, + EMC_FBIO_CFG5_INDEX = 65, + EMC_DLL_CFG_0_INDEX = 144, + EMC_DYN_SELF_REF_CONTROL_INDEX = 150, + EMC_PMACRO_CMD_PAD_TX_CTRL_INDEX = 161, + EMC_PMACRO_DATA_PAD_TX_CTRL_INDEX, + EMC_PMACRO_COMMON_PAD_TX_CTRL_INDEX, + EMC_PMACRO_BRICK_CTRL_RFU1_INDEX = 167, +}; + +enum trim_regs_list { + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0_INDEX = 60, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1_INDEX, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2_INDEX, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3_INDEX, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_4_INDEX, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_5_INDEX, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0_INDEX, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1_INDEX, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2_INDEX, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3_INDEX, +}; + enum burst_mc_regs_list { MC_EMEM_ARB_MISC0_INDEX = 20, }; +enum { + SINGLE_CHANNEL = 0, + DUAL_CHANNEL, +}; + +enum { + DLL_OFF, + DLL_ON, +}; + enum { 
 REG_EMC,
 REG_EMC0,
@@ -49,6 +716,21 @@ enum {
 	BURST_REGS_SIZE = 221,
 };
+struct per_ch_regs {
+	u16 bank;
+	u16 offset;
+};
+
+struct emc_table_register_offset {
+	u16 burst_regs_off[BURST_REGS_SIZE];
+	u16 trim_regs_off[TRIM_REGS_SIZE];
+	u16 burst_mc_regs_off[BURST_MC_REGS_SIZE];
+	u16 la_scale_regs_off[BURST_UP_DOWN_REGS_SIZE];
+	struct per_ch_regs burst_regs_per_ch_off[BURST_REGS_PER_CH_SIZE];
+	struct per_ch_regs trim_regs_per_ch_off[TRIM_REGS_PER_CH_SIZE];
+	struct per_ch_regs vref_regs_per_ch_off[VREF_REGS_PER_CH_SIZE];
+};
+
 struct emc_table {
 	u32 rev;
 	const char dvfs_ver[60];
@@ -153,4 +835,40 @@ struct supported_sequence {
 	char *seq_rev;
 };
+extern const struct emc_table_register_offset reg_off;
+extern unsigned long dram_over_temp_state;
+
+void ccfifo_writel(struct tegra_emc *emc, u32 val, unsigned long addr,
+		   u32 delay);
+u32 div_o3(u32 a, u32 b);
+void emc_writel(struct tegra_emc *emc, u32 val, unsigned long offset);
+u32 emc_readl(struct tegra_emc *emc, unsigned long offset);
+void emc_writel_per_ch(struct tegra_emc *emc, u32 val, int type,
+		       unsigned long offset);
+u32 emc1_readl(struct tegra_emc *emc, unsigned long offset);
+
+void emc_do_clock_change(struct tegra_emc *emc, u32 clksrc);
+void emc_set_shadow_bypass(struct tegra_emc *emc, int set);
+void emc_timing_update(struct tegra_emc *emc, int dual_chan);
+u32 emc_get_dll_state(struct emc_table *next_timing);
+struct emc_table *emc_get_timing_from_freq(struct tegra_emc *emc,
+					   unsigned long rate);
+void emc_set_over_temp_timing(struct tegra_emc *emc, struct emc_table *timing,
+			      unsigned long state);
+int emc_wait_for_update(struct tegra_emc *emc, u32 status_reg, u32 bit_mask,
+			bool updated_state, int chan);
+u32 tegra210_actual_osc_clocks(u32 in);
+u32 tegra210_apply_periodic_compensation_trimmer(struct emc_table *next_timing,
+						 u32 offset);
+void tegra210_dll_disable(struct tegra_emc *emc, int channel_mode);
+void tegra210_dll_enable(struct tegra_emc *emc, int channel_mode);
+u32 tegra210_dll_prelock(struct tegra_emc *emc, int dvfs_with_training,
+			 u32 clksrc);
+u32 tegra210_dvfs_power_ramp_down(struct tegra_emc *emc, u32 clk,
+				  int flip_backward);
+u32 tegra210_dvfs_power_ramp_up(struct tegra_emc *emc, u32 clk,
+				int flip_backward);
+void tegra210_reset_dram_clktree_values(struct emc_table *table);
+void tegra210_start_periodic_compensation(struct tegra_emc *emc);
+
 #endif

From patchwork Wed May 29 08:21:37 2019
X-Patchwork-Submitter: Joseph Lo
X-Patchwork-Id: 1106952
From: Joseph Lo
To: Thierry Reding , Peter De Schrijver , Jonathan Hunter , Rob Herring , Stephen Boyd
CC: , , , , Joseph Lo
Subject: [PATCH V4 6/8] memory: tegra: Add EMC scaling sequence code for Tegra210
Date: Wed, 29 May 2019 16:21:37 +0800
Message-ID: <20190529082139.5581-7-josephl@nvidia.com>
In-Reply-To: <20190529082139.5581-1-josephl@nvidia.com>
References: <20190529082139.5581-1-josephl@nvidia.com>

Add the EMC clock-change (scaling) sequence, including the clock tuning
code and the dynamic training mechanism used for rates above 800 MHz.
Historically there have been several different sequences for changing the
EMC clock; the sequence to use is specified in the EMC table. The platforms
currently supported upstream only ever use the most recent sequence, so
only that one is supported by this patch.

Based on the work of Peter De Schrijver.
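As a rough stand-alone illustration (not part of the patch, and using made-up
names), the training code added below keeps its per-byte clock-tree
measurements as fixed-point moving averages scaled by 100: during a DVFS
change a number of raw samples are accumulated and divided once, while the
periodic-training path blends each new sample into an exponential moving
average with a table-supplied weight. The snippet shows the same arithmetic
as the __INCREMENT_PTFV/__AVERAGE_PTFV and __WEIGHTED_UPDATE_PTFV macros, in
plain C so it can be compiled and run on its own:

#include <stdint.h>
#include <stdio.h>

#define PRECISION 100	/* same scale as MOVAVG_PRECISION_FACTOR */

/* Accumulate n raw samples and divide once; result stays scaled by PRECISION. */
static uint32_t dvfs_average(const uint32_t *samples, uint32_t n)
{
	uint64_t acc = 0;
	uint32_t i;

	for (i = 0; i < n; i++)
		acc += (uint64_t)samples[i] * PRECISION;

	return (uint32_t)(acc / n);
}

/* Exponential moving average with weight w, as in the periodic-training path. */
static uint32_t weighted_update(uint32_t avg, uint32_t sample, uint32_t w)
{
	return (sample * PRECISION + avg * w) / (w + 1);
}

int main(void)
{
	uint32_t vals[] = { 512, 515, 510, 514 };
	uint32_t avg = dvfs_average(vals, 4);

	printf("average = %u.%02u\n", avg / PRECISION, avg % PRECISION);

	avg = weighted_update(avg, 520, 7);
	printf("ema     = %u.%02u\n", avg / PRECISION, avg % PRECISION);

	return 0;
}

Keeping everything scaled by 100 lets the driver track sub-cycle drift without
floating point, which is not available in kernel code; the integral value is
recovered by a final division, as __MOVAVG_AC does.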
Signed-off-by: Joseph Lo --- v4: - no change --- drivers/memory/tegra/Makefile | 2 +- drivers/memory/tegra/tegra210-emc-cc-r21021.c | 1953 +++++++++++++++++ drivers/memory/tegra/tegra210-emc.c | 5 + drivers/memory/tegra/tegra210-emc.h | 157 ++ 4 files changed, 2116 insertions(+), 1 deletion(-) create mode 100644 drivers/memory/tegra/tegra210-emc-cc-r21021.c diff --git a/drivers/memory/tegra/Makefile b/drivers/memory/tegra/Makefile index f78bbb7cd16f..def087f13a09 100644 --- a/drivers/memory/tegra/Makefile +++ b/drivers/memory/tegra/Makefile @@ -12,5 +12,5 @@ obj-$(CONFIG_TEGRA_MC) += tegra-mc.o obj-$(CONFIG_TEGRA20_EMC) += tegra20-emc.o obj-$(CONFIG_TEGRA124_EMC) += tegra124-emc.o -obj-$(CONFIG_TEGRA210_EMC) += tegra210-emc.o +obj-$(CONFIG_TEGRA210_EMC) += tegra210-emc.o tegra210-emc-cc-r21021.o obj-$(CONFIG_ARCH_TEGRA_186_SOC) += tegra186.o diff --git a/drivers/memory/tegra/tegra210-emc-cc-r21021.c b/drivers/memory/tegra/tegra210-emc-cc-r21021.c new file mode 100644 index 000000000000..ec5e1db71896 --- /dev/null +++ b/drivers/memory/tegra/tegra210-emc-cc-r21021.c @@ -0,0 +1,1953 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2014-2019, NVIDIA CORPORATION. All rights reserved. + */ + +#include +#include +#include +#include +#include +#include + +#include "mc.h" +#include "tegra210-emc.h" + +#define DVFS_CLOCK_CHANGE_VERSION 21021 +#define EMC_PRELOCK_VERSION 2101 + +#define emc_cc_dbg(t, ...) pr_debug(__VA_ARGS__) + +/* + * Enable flags for specifying verbosity. + */ +#define INFO (1 << 0) +#define STEPS (1 << 1) +#define SUB_STEPS (1 << 2) +#define PRELOCK (1 << 3) +#define PRELOCK_STEPS (1 << 4) +#define ACTIVE_EN (1 << 5) +#define PRAMP_UP (1 << 6) +#define PRAMP_DN (1 << 7) +#define EMA_WRITES (1 << 10) +#define EMA_UPDATES (1 << 11) +#define PER_TRAIN (1 << 16) +#define CC_PRINT (1 << 17) +#define CCFIFO (1 << 29) +#define REGS (1 << 30) +#define REG_LISTS (1 << 31) + +enum { + DVFS_SEQUENCE = 1, + WRITE_TRAINING_SEQUENCE = 2, + PERIODIC_TRAINING_SEQUENCE = 3, + DVFS_PT1 = 10, + DVFS_UPDATE = 11, + TRAINING_PT1 = 12, + TRAINING_UPDATE = 13, + PERIODIC_TRAINING_UPDATE = 14 +}; + +/* + * PTFV defines - basically just indexes into the per table PTFV array. + */ +#define PTFV_DQSOSC_MOVAVG_C0D0U0_INDEX 0 +#define PTFV_DQSOSC_MOVAVG_C0D0U1_INDEX 1 +#define PTFV_DQSOSC_MOVAVG_C0D1U0_INDEX 2 +#define PTFV_DQSOSC_MOVAVG_C0D1U1_INDEX 3 +#define PTFV_DQSOSC_MOVAVG_C1D0U0_INDEX 4 +#define PTFV_DQSOSC_MOVAVG_C1D0U1_INDEX 5 +#define PTFV_DQSOSC_MOVAVG_C1D1U0_INDEX 6 +#define PTFV_DQSOSC_MOVAVG_C1D1U1_INDEX 7 +#define PTFV_DVFS_SAMPLES_INDEX 9 +#define PTFV_MOVAVG_WEIGHT_INDEX 10 +#define PTFV_CONFIG_CTRL_INDEX 11 + +#define PTFV_CONFIG_CTRL_USE_PREVIOUS_EMA (1 << 0) + +/* + * Do arithmetic in fixed point. + */ +#define MOVAVG_PRECISION_FACTOR 100 + +/* + * The division portion of the average operation. + */ +#define __AVERAGE_PTFV(dev) \ + ({ next_timing->ptfv_list[PTFV_DQSOSC_MOVAVG_ ## dev ## _INDEX] = \ + next_timing->ptfv_list[PTFV_DQSOSC_MOVAVG_ ## dev ## _INDEX] / \ + next_timing->ptfv_list[PTFV_DVFS_SAMPLES_INDEX]; }) + +/* + * Convert val to fixed point and add it to the temporary average. + */ +#define __INCREMENT_PTFV(dev, val) \ + ({ next_timing->ptfv_list[PTFV_DQSOSC_MOVAVG_ ## dev ## _INDEX] += \ + ((val) * MOVAVG_PRECISION_FACTOR); }) + +/* + * Convert a moving average back to integral form and return the value. + */ +#define __MOVAVG_AC(timing, dev) \ + ((timing)->ptfv_list[PTFV_DQSOSC_MOVAVG_ ## dev ## _INDEX] / \ + MOVAVG_PRECISION_FACTOR) + +/* Weighted update. 
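+ * A new raw sample (nval) is folded into the stored fixed-point average
+ * using the table-supplied weight w = ptfv_list[PTFV_MOVAVG_WEIGHT_INDEX]:
+ *
+ *   avg' = (nval * MOVAVG_PRECISION_FACTOR + avg * w) / (w + 1)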
*/ +#define __WEIGHTED_UPDATE_PTFV(dev, nval) \ + do { \ + int w = PTFV_MOVAVG_WEIGHT_INDEX; \ + int dqs = PTFV_DQSOSC_MOVAVG_ ## dev ## _INDEX; \ + \ + next_timing->ptfv_list[dqs] = \ + ((nval * MOVAVG_PRECISION_FACTOR) + \ + (next_timing->ptfv_list[dqs] * \ + next_timing->ptfv_list[w])) / \ + (next_timing->ptfv_list[w] + 1); \ + \ + emc_cc_dbg(EMA_UPDATES, "%s: (s=%u) EMA: %u\n", \ + __stringify(dev), nval, \ + next_timing->ptfv_list[dqs]); \ + } while (0) + +/* Access a particular average. */ +#define __MOVAVG(timing, dev) \ + ((timing)->ptfv_list[PTFV_DQSOSC_MOVAVG_ ## dev ## _INDEX]) + +static u32 update_clock_tree_delay(struct tegra_emc *emc, + u32 dram_dev_num, u32 channel_mode, int type) +{ + u32 mrr_req = 0, mrr_data = 0; + u32 temp0_0 = 0, temp0_1 = 0, temp1_0 = 0, temp1_1 = 0; + s32 tdel = 0, tmdel = 0, adel = 0; + u32 cval = 0; + struct emc_table *last_timing = emc->current_timing; + struct emc_table *next_timing = emc->next_timing; + u32 last_timing_rate_mhz = last_timing->rate / 1000; + u32 next_timing_rate_mhz = next_timing->rate / 1000; + int dvfs_pt1 = type == DVFS_PT1; + int dvfs_update = type == DVFS_UPDATE; + int periodic_training_update = type == PERIODIC_TRAINING_UPDATE; + + /* + * Dev0 MSB. + */ + if (dvfs_pt1 || periodic_training_update) { + mrr_req = (2 << EMC_MRR_DEV_SEL_SHIFT) | + (19 << EMC_MRR_MA_SHIFT); + emc_writel(emc, mrr_req, EMC_MRR); + + WARN(emc_wait_for_update(emc, EMC_EMC_STATUS, + EMC_EMC_STATUS_MRR_DIVLD, 1, REG_EMC), + "Timed out waiting for MRR 19 (ch=0)\n"); + if (channel_mode == DUAL_CHANNEL) + WARN(emc_wait_for_update(emc, EMC_EMC_STATUS, + EMC_EMC_STATUS_MRR_DIVLD, 1, + REG_EMC1), + "Timed out waiting for MRR 19 (ch=1)\n"); + + mrr_data = (emc_readl(emc, EMC_MRR) & EMC_MRR_DATA_MASK) << + EMC_MRR_DATA_SHIFT; + + temp0_0 = (mrr_data & 0xff) << 8; + temp0_1 = mrr_data & 0xff00; + + if (channel_mode == DUAL_CHANNEL) { + mrr_data = (emc1_readl(emc, EMC_MRR) & + EMC_MRR_DATA_MASK) << + EMC_MRR_DATA_SHIFT; + temp1_0 = (mrr_data & 0xff) << 8; + temp1_1 = mrr_data & 0xff00; + } + + /* + * Dev0 LSB. + */ + mrr_req = (mrr_req & ~EMC_MRR_MA_MASK) | + (18 << EMC_MRR_MA_SHIFT); + emc_writel(emc, mrr_req, EMC_MRR); + + WARN(emc_wait_for_update(emc, EMC_EMC_STATUS, + EMC_EMC_STATUS_MRR_DIVLD, 1, REG_EMC), + "Timed out waiting for MRR 18 (ch=0)\n"); + if (channel_mode == DUAL_CHANNEL) + WARN(emc_wait_for_update(emc, EMC_EMC_STATUS, + EMC_EMC_STATUS_MRR_DIVLD, 1, + REG_EMC1), + "Timed out waiting for MRR 18 (ch=1)\n"); + + mrr_data = (emc_readl(emc, EMC_MRR) & EMC_MRR_DATA_MASK) << + EMC_MRR_DATA_SHIFT; + + temp0_0 |= mrr_data & 0xff; + temp0_1 |= (mrr_data & 0xff00) >> 8; + + if (channel_mode == DUAL_CHANNEL) { + mrr_data = (emc1_readl(emc, EMC_MRR) & + EMC_MRR_DATA_MASK) << + EMC_MRR_DATA_SHIFT; + temp1_0 |= (mrr_data & 0xff); + temp1_1 |= (mrr_data & 0xff00) >> 8; + } + } + + if (dvfs_pt1 || periodic_training_update) + cval = (1000000 * tegra210_actual_osc_clocks( + last_timing->run_clocks)) / + (last_timing_rate_mhz * 2 * temp0_0); + + if (dvfs_pt1) + __INCREMENT_PTFV(C0D0U0, cval); + else if (dvfs_update) + __AVERAGE_PTFV(C0D0U0); + else if (periodic_training_update) + __WEIGHTED_UPDATE_PTFV(C0D0U0, cval); + + if (dvfs_update || periodic_training_update) { + tdel = next_timing->current_dram_clktree[C0D0U0] - + __MOVAVG_AC(next_timing, C0D0U0); + tmdel = (tdel < 0) ? 
-1 * tdel : tdel; + adel = tmdel; + if (tmdel * 128 * next_timing_rate_mhz / 1000000 > + next_timing->tree_margin) + next_timing->current_dram_clktree[C0D0U0] = + __MOVAVG_AC(next_timing, C0D0U0); + } + + if (dvfs_pt1 || periodic_training_update) + cval = (1000000 * tegra210_actual_osc_clocks( + last_timing->run_clocks)) / + (last_timing_rate_mhz * 2 * temp0_1); + + if (dvfs_pt1) + __INCREMENT_PTFV(C0D0U1, cval); + else if (dvfs_update) + __AVERAGE_PTFV(C0D0U1); + else if (periodic_training_update) + __WEIGHTED_UPDATE_PTFV(C0D0U1, cval); + + if (dvfs_update || periodic_training_update) { + tdel = next_timing->current_dram_clktree[C0D0U1] - + __MOVAVG_AC(next_timing, C0D0U1); + tmdel = (tdel < 0) ? -1 * tdel : tdel; + + if (tmdel > adel) + adel = tmdel; + + if (tmdel * 128 * next_timing_rate_mhz / 1000000 > + next_timing->tree_margin) + next_timing->current_dram_clktree[C0D0U1] = + __MOVAVG_AC(next_timing, C0D0U1); + } + + if (channel_mode == DUAL_CHANNEL) { + if (dvfs_pt1 || periodic_training_update) + cval = (1000000 * tegra210_actual_osc_clocks( + last_timing->run_clocks)) / + (last_timing_rate_mhz * 2 * temp1_0); + if (dvfs_pt1) + __INCREMENT_PTFV(C1D0U0, cval); + else if (dvfs_update) + __AVERAGE_PTFV(C1D0U0); + else if (periodic_training_update) + __WEIGHTED_UPDATE_PTFV(C1D0U0, cval); + + if (dvfs_update || periodic_training_update) { + tdel = next_timing->current_dram_clktree[C1D0U0] - + __MOVAVG_AC(next_timing, C1D0U0); + tmdel = (tdel < 0) ? -1 * tdel : tdel; + + if (tmdel > adel) + adel = tmdel; + + if (tmdel * 128 * next_timing_rate_mhz / 1000000 > + next_timing->tree_margin) + next_timing->current_dram_clktree[C1D0U0] = + __MOVAVG_AC(next_timing, C1D0U0); + } + + if (dvfs_pt1 || periodic_training_update) + cval = (1000000 * tegra210_actual_osc_clocks( + last_timing->run_clocks)) / + (last_timing_rate_mhz * 2 * temp1_1); + if (dvfs_pt1) + __INCREMENT_PTFV(C1D0U1, cval); + else if (dvfs_update) + __AVERAGE_PTFV(C1D0U1); + else if (periodic_training_update) + __WEIGHTED_UPDATE_PTFV(C1D0U1, cval); + + if (dvfs_update || periodic_training_update) { + tdel = next_timing->current_dram_clktree[C1D0U1] - + __MOVAVG_AC(next_timing, C1D0U1); + tmdel = (tdel < 0) ? -1 * tdel : tdel; + + if (tmdel > adel) + adel = tmdel; + + if (tmdel * 128 * next_timing_rate_mhz / 1000000 > + next_timing->tree_margin) + next_timing->current_dram_clktree[C1D0U1] = + __MOVAVG_AC(next_timing, C1D0U1); + } + } + + if (dram_dev_num != TWO_RANK) + goto done; + + /* + * Dev1 MSB. + */ + if (dvfs_pt1 || periodic_training_update) { + mrr_req = (1 << EMC_MRR_DEV_SEL_SHIFT) | + (19 << EMC_MRR_MA_SHIFT); + emc_writel(emc, mrr_req, EMC_MRR); + + WARN(emc_wait_for_update(emc, EMC_EMC_STATUS, + EMC_EMC_STATUS_MRR_DIVLD, 1, REG_EMC), + "Timed out waiting for MRR 19 (ch=0)\n"); + if (channel_mode == DUAL_CHANNEL) + WARN(emc_wait_for_update(emc, EMC_EMC_STATUS, + EMC_EMC_STATUS_MRR_DIVLD, 1, + REG_EMC1), + "Timed out waiting for MRR 19 (ch=1)\n"); + + mrr_data = (emc_readl(emc, EMC_MRR) & EMC_MRR_DATA_MASK) << + EMC_MRR_DATA_SHIFT; + + temp0_0 = (mrr_data & 0xff) << 8; + temp0_1 = mrr_data & 0xff00; + + if (channel_mode == DUAL_CHANNEL) { + mrr_data = (emc1_readl(emc, EMC_MRR) & + EMC_MRR_DATA_MASK) << + EMC_MRR_DATA_SHIFT; + temp1_0 = (mrr_data & 0xff) << 8; + temp1_1 = mrr_data & 0xff00; + } + + /* + * Dev1 LSB. 
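+	 * The MA=18 read below returns the low byte of each counter value;
+	 * it is merged with the high byte captured by the MA=19 read above.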
+ */ + mrr_req = (mrr_req & ~EMC_MRR_MA_MASK) | + (18 << EMC_MRR_MA_SHIFT); + emc_writel(emc, mrr_req, EMC_MRR); + + WARN(emc_wait_for_update(emc, EMC_EMC_STATUS, + EMC_EMC_STATUS_MRR_DIVLD, 1, REG_EMC), + "Timed out waiting for MRR 18 (ch=0)\n"); + if (channel_mode == DUAL_CHANNEL) + WARN(emc_wait_for_update(emc, EMC_EMC_STATUS, + EMC_EMC_STATUS_MRR_DIVLD, 1, + REG_EMC1), + "Timed out waiting for MRR 18 (ch=1)\n"); + + mrr_data = (emc_readl(emc, EMC_MRR) & EMC_MRR_DATA_MASK) << + EMC_MRR_DATA_SHIFT; + + temp0_0 |= mrr_data & 0xff; + temp0_1 |= (mrr_data & 0xff00) >> 8; + + if (channel_mode == DUAL_CHANNEL) { + mrr_data = (emc1_readl(emc, EMC_MRR) & + EMC_MRR_DATA_MASK) << + EMC_MRR_DATA_SHIFT; + temp1_0 |= (mrr_data & 0xff); + temp1_1 |= (mrr_data & 0xff00) >> 8; + } + } + + if (dvfs_pt1 || periodic_training_update) + cval = (1000000 * tegra210_actual_osc_clocks( + last_timing->run_clocks)) / + (last_timing_rate_mhz * 2 * temp0_0); + + if (dvfs_pt1) + __INCREMENT_PTFV(C0D1U0, cval); + else if (dvfs_update) + __AVERAGE_PTFV(C0D1U0); + else if (periodic_training_update) + __WEIGHTED_UPDATE_PTFV(C0D1U0, cval); + + if (dvfs_update || periodic_training_update) { + tdel = next_timing->current_dram_clktree[C0D1U0] - + __MOVAVG_AC(next_timing, C0D1U0); + tmdel = (tdel < 0) ? -1 * tdel : tdel; + if (tmdel > adel) + adel = tmdel; + + if (tmdel * 128 * next_timing_rate_mhz / 1000000 > + next_timing->tree_margin) + next_timing->current_dram_clktree[C0D1U0] = + __MOVAVG_AC(next_timing, C0D1U0); + } + + if (dvfs_pt1 || periodic_training_update) + cval = (1000000 * tegra210_actual_osc_clocks( + last_timing->run_clocks)) / + (last_timing_rate_mhz * 2 * temp0_1); + + if (dvfs_pt1) + __INCREMENT_PTFV(C0D1U1, cval); + else if (dvfs_update) + __AVERAGE_PTFV(C0D1U1); + else if (periodic_training_update) + __WEIGHTED_UPDATE_PTFV(C0D1U1, cval); + + if (dvfs_update || periodic_training_update) { + tdel = next_timing->current_dram_clktree[C0D1U1] - + __MOVAVG_AC(next_timing, C0D1U1); + tmdel = (tdel < 0) ? -1 * tdel : tdel; + if (tmdel > adel) + adel = tmdel; + + if (tmdel * 128 * next_timing_rate_mhz / 1000000 > + next_timing->tree_margin) + next_timing->current_dram_clktree[C0D1U1] = + __MOVAVG_AC(next_timing, C0D1U1); + } + + if (channel_mode == DUAL_CHANNEL) { + if (dvfs_pt1 || periodic_training_update) + cval = (1000000 * tegra210_actual_osc_clocks( + last_timing->run_clocks)) / + (last_timing_rate_mhz * 2 * temp1_0); + + if (dvfs_pt1) + __INCREMENT_PTFV(C1D1U0, cval); + else if (dvfs_update) + __AVERAGE_PTFV(C1D1U0); + else if (periodic_training_update) + __WEIGHTED_UPDATE_PTFV(C1D1U0, cval); + + if (dvfs_update || periodic_training_update) { + tdel = next_timing->current_dram_clktree[C1D1U0] - + __MOVAVG_AC(next_timing, C1D1U0); + tmdel = (tdel < 0) ? -1 * tdel : tdel; + if (tmdel > adel) + adel = tmdel; + + if (tmdel * 128 * next_timing_rate_mhz / 1000000 > + next_timing->tree_margin) + next_timing->current_dram_clktree[C1D1U0] = + __MOVAVG_AC(next_timing, C1D1U0); + } + + if (dvfs_pt1 || periodic_training_update) + cval = (1000000 * tegra210_actual_osc_clocks( + last_timing->run_clocks)) / + (last_timing_rate_mhz * 2 * temp1_1); + + if (dvfs_pt1) + __INCREMENT_PTFV(C1D1U1, cval); + else if (dvfs_update) + __AVERAGE_PTFV(C1D1U1); + else if (periodic_training_update) + __WEIGHTED_UPDATE_PTFV(C1D1U1, cval); + + if (dvfs_update || periodic_training_update) { + tdel = next_timing->current_dram_clktree[C1D1U1] - + __MOVAVG_AC(next_timing, C1D1U1); + tmdel = (tdel < 0) ? 
-1 * tdel : tdel; + if (tmdel > adel) + adel = tmdel; + + if (tmdel * 128 * next_timing_rate_mhz / 1000000 > + next_timing->tree_margin) + next_timing->current_dram_clktree[C1D1U1] = + __MOVAVG_AC(next_timing, C1D1U1); + } + } + +done: + return adel; +} + +static u32 periodic_compensation_handler(struct tegra_emc *emc, u32 type, + u32 dram_dev_num, + u32 channel_mode, + struct emc_table *last_timing, + struct emc_table *next_timing) +{ +#define __COPY_EMA(nt, lt, dev) \ + ({ __MOVAVG(nt, dev) = __MOVAVG(lt, dev) * \ + (nt)->ptfv_list[PTFV_DVFS_SAMPLES_INDEX]; }) + + u32 i; + u32 adel = 0; + u32 samples = next_timing->ptfv_list[PTFV_DVFS_SAMPLES_INDEX]; + u32 delay = 2 + + (1000 * tegra210_actual_osc_clocks(last_timing->run_clocks) / + last_timing->rate); + + if (!next_timing->periodic_training) + return 0; + + if (type == DVFS_SEQUENCE) { + if (last_timing->periodic_training && + (next_timing->ptfv_list[PTFV_CONFIG_CTRL_INDEX] & + PTFV_CONFIG_CTRL_USE_PREVIOUS_EMA)) { + /* + * If the previous frequency was using periodic + * calibration then we can reuse the previous + * frequencies EMA data. + */ + __COPY_EMA(next_timing, last_timing, C0D0U0); + __COPY_EMA(next_timing, last_timing, C0D0U1); + __COPY_EMA(next_timing, last_timing, C1D0U0); + __COPY_EMA(next_timing, last_timing, C1D0U1); + __COPY_EMA(next_timing, last_timing, C0D1U0); + __COPY_EMA(next_timing, last_timing, C0D1U1); + __COPY_EMA(next_timing, last_timing, C1D1U0); + __COPY_EMA(next_timing, last_timing, C1D1U1); + } else { + /* Reset the EMA.*/ + __MOVAVG(next_timing, C0D0U0) = 0; + __MOVAVG(next_timing, C0D0U1) = 0; + __MOVAVG(next_timing, C1D0U0) = 0; + __MOVAVG(next_timing, C1D0U1) = 0; + __MOVAVG(next_timing, C0D1U0) = 0; + __MOVAVG(next_timing, C0D1U1) = 0; + __MOVAVG(next_timing, C1D1U0) = 0; + __MOVAVG(next_timing, C1D1U1) = 0; + + for (i = 0; i < samples; i++) { + tegra210_start_periodic_compensation(emc); + udelay(delay); + + /* + * Generate next sample of data. + */ + adel = update_clock_tree_delay(emc, + dram_dev_num, + channel_mode, + DVFS_PT1); + } + } + + /* + * Seems like it should be part of the + * 'if (last_timing->periodic_training)' conditional + * since is already done for the else clause. 
+ */ + adel = update_clock_tree_delay(emc, + dram_dev_num, + channel_mode, + DVFS_UPDATE); + } + + if (type == PERIODIC_TRAINING_SEQUENCE) { + tegra210_start_periodic_compensation(emc); + udelay(delay); + + adel = update_clock_tree_delay(emc, + dram_dev_num, + channel_mode, + PERIODIC_TRAINING_UPDATE); + } + + return adel; +} + +u32 __do_periodic_emc_compensation_r21021(struct tegra_emc *emc) +{ + u32 dram_dev_num; + u32 channel_mode; + u32 emc_cfg, emc_cfg_o; + u32 emc_dbg_o; + u32 del, i; + u32 list[] = { + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2, + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3, + EMC_DATA_BRLSHFT_0, + EMC_DATA_BRLSHFT_1 + }; + u32 items = ARRAY_SIZE(list); + u32 emc_cfg_update; + struct emc_table *current_timing = emc->current_timing; + + if (current_timing->periodic_training) { + channel_mode = + !!(current_timing->burst_regs[EMC_FBIO_CFG7_INDEX] & + (1 << 2)); + dram_dev_num = 1 + (mc_readl(emc->mc, MC_EMEM_ADR_CFG) & 0x1); + + emc_cc_dbg(PER_TRAIN, "Periodic training starting\n"); + + emc_dbg_o = emc_readl(emc, EMC_DBG); + emc_cfg_o = emc_readl(emc, EMC_CFG); + emc_cfg = emc_cfg_o & ~(EMC_CFG_DYN_SELF_REF | + EMC_CFG_DRAM_ACPD | + EMC_CFG_DRAM_CLKSTOP_PD | + EMC_CFG_DRAM_CLKSTOP_PD); + + + /* + * 1. Power optimizations should be off. + */ + emc_writel(emc, emc_cfg, EMC_CFG); + + /* Does emc_timing_update() for above changes. */ + tegra210_dll_disable(emc, channel_mode); + + emc_wait_for_update(emc, EMC_EMC_STATUS, + EMC_EMC_STATUS_DRAM_IN_POWERDOWN_MASK, 0, + REG_EMC); + if (channel_mode) + emc_wait_for_update(emc, EMC_EMC_STATUS, + EMC_EMC_STATUS_DRAM_IN_POWERDOWN_MASK, + 0, REG_EMC1); + + emc_wait_for_update(emc, EMC_EMC_STATUS, + EMC_EMC_STATUS_DRAM_IN_SELF_REFRESH_MASK, + 0, REG_EMC); + if (channel_mode) + emc_wait_for_update(emc, EMC_EMC_STATUS, + EMC_EMC_STATUS_DRAM_IN_SELF_REFRESH_MASK, 0, + REG_EMC1); + + emc_cfg_update = emc_readl(emc, EMC_CFG_UPDATE); + emc_writel(emc, (emc_cfg_update & + ~EMC_CFG_UPDATE_UPDATE_DLL_IN_UPDATE_MASK) | + (2 << EMC_CFG_UPDATE_UPDATE_DLL_IN_UPDATE_SHIFT), + EMC_CFG_UPDATE); + + /* + * 2. osc kick off - this assumes training and dvfs have set + * correct MR23. + */ + tegra210_start_periodic_compensation(emc); + + /* + * 3. Let dram capture its clock tree delays. + */ + udelay((tegra210_actual_osc_clocks(current_timing->run_clocks) * + 1000) / + current_timing->rate + 1); + + /* + * 4. Check delta wrt previous values (save value if margin + * exceeds what is set in table). + */ + del = periodic_compensation_handler(emc, + PERIODIC_TRAINING_SEQUENCE, + dram_dev_num, + channel_mode, + current_timing, + current_timing); + + /* + * 5. Apply compensation w.r.t. trained values (if clock tree + * has drifted more than the set margin). + */ + if (current_timing->tree_margin < + ((del * 128 * (current_timing->rate / 1000)) / 1000000)) { + for (i = 0; i < items; i++) { + u32 tmp = + tegra210_apply_periodic_compensation_trimmer( + current_timing, list[i]); + + emc_cc_dbg(EMA_WRITES, "0x%08x <= 0x%08x\n", + list[i], tmp); + emc_writel(emc, tmp, list[i]); + } + } + + emc_writel(emc, emc_cfg_o, EMC_CFG); + + /* + * 6. Timing update actally applies the new trimmers. + */ + emc_timing_update(emc, channel_mode); + + /* 6.1. Restore the UPDATE_DLL_IN_UPDATE field. */ + emc_writel(emc, emc_cfg_update, EMC_CFG_UPDATE); + + /* 6.2. 
Restore the DLL. */ + tegra210_dll_enable(emc, channel_mode); + } + + return 0; +} + +/* + * Do the clock change sequence. + */ +void emc_set_clock_r21021(struct tegra_emc *emc, u32 clksrc) +{ + /* + * This is the timing table for the source frequency. It does _not_ + * necessarily correspond to the actual timing values in the EMC at the + * moment. If the boot BCT differs from the table then this can happen. + * However, we need it for accessing the dram_timings (which are not + * really registers) array for the current frequency. + */ + struct emc_table *fake_timing; + struct emc_table *last_timing = emc->current_timing; + struct emc_table *next_timing = emc->next_timing; + + u32 i, tmp; + + u32 cya_allow_ref_cc = 0, ref_b4_sref_en = 0, cya_issue_pc_ref = 0; + + u32 zqcal_before_cc_cutoff = 2400; /* In picoseconds */ + u32 ref_delay_mult; + u32 ref_delay; + s32 zq_latch_dvfs_wait_time; + s32 tZQCAL_lpddr4_fc_adj; + /* Scaled by x1000 */ + u32 tFC_lpddr4 = 1000 * next_timing->dram_timings[T_FC_LPDDR4]; + u32 tZQCAL_lpddr4 = 1000000; + + u32 dram_type, dram_dev_num, shared_zq_resistor; + u32 channel_mode; + u32 is_lpddr3; + + u32 emc_cfg, emc_sel_dpd_ctrl, emc_cfg_reg; + + u32 emc_dbg; + u32 emc_zcal_interval; + u32 emc_zcal_wait_cnt_old; + u32 emc_zcal_wait_cnt_new; + u32 emc_dbg_active; + u32 zq_op; + u32 zcal_wait_time_clocks; + u32 zcal_wait_time_ps; + + u32 emc_auto_cal_config; + u32 auto_cal_en; + + u32 mr13_catr_enable; + + u32 ramp_up_wait = 0, ramp_down_wait = 0; + + /* In picoseconds. */ + u32 source_clock_period; + u32 destination_clock_period; + + u32 emc_dbg_o; + u32 emc_cfg_pipe_clk_o; + u32 emc_pin_o; + + u32 mr13_flip_fspwr; + u32 mr13_flip_fspop; + + u32 opt_zcal_en_cc; + u32 opt_do_sw_qrst = 1; + u32 opt_dvfs_mode; + u32 opt_dll_mode; + u32 opt_cc_short_zcal = 1; + u32 opt_short_zcal = 1; + u32 save_restore_clkstop_pd = 1; + + u32 prelock_dll_en = 0, dll_out; + + int next_push, next_dq_e_ivref, next_dqs_e_ivref; + + u32 opt_war_200024907; + u32 zq_wait_long; + u32 zq_wait_short; + + u32 bg_regulator_switch_complete_wait_clks; + u32 bg_regulator_mode_change; + u32 enable_bglp_regulator; + u32 enable_bg_regulator; + + u32 tRTM; + u32 RP_war; + u32 R2P_war; + u32 TRPab_war; + s32 nRTP; + u32 deltaTWATM; + u32 W2P_war; + u32 tRPST; + + u32 mrw_req; + u32 adel = 0, compensate_trimmer_applicable = 0; + u32 next_timing_rate_mhz = next_timing->rate / 1000; + + static u32 fsp_for_next_freq; + + emc_cc_dbg(INFO, "Running clock change.\n"); + + fake_timing = emc_get_timing_from_freq(emc, last_timing->rate); + + fsp_for_next_freq = !fsp_for_next_freq; + + dram_type = emc_readl(emc, EMC_FBIO_CFG5) & + EMC_FBIO_CFG5_DRAM_TYPE_MASK >> + EMC_FBIO_CFG5_DRAM_TYPE_SHIFT; + shared_zq_resistor = last_timing->burst_regs[EMC_ZCAL_WAIT_CNT_INDEX] & + 1 << 31; + channel_mode = !!(last_timing->burst_regs[EMC_FBIO_CFG7_INDEX] & + 1 << 2); + opt_zcal_en_cc = (next_timing->burst_regs[EMC_ZCAL_INTERVAL_INDEX] && + !last_timing->burst_regs[EMC_ZCAL_INTERVAL_INDEX]) || + dram_type == DRAM_TYPE_LPDDR4; + opt_dll_mode = (dram_type == DRAM_TYPE_DDR3) ? 
+ emc_get_dll_state(next_timing) : DLL_OFF; + is_lpddr3 = (dram_type == DRAM_TYPE_LPDDR2) && + next_timing->burst_regs[EMC_FBIO_CFG5_INDEX] & + 1 << 25; + opt_war_200024907 = (dram_type == DRAM_TYPE_LPDDR4); + opt_dvfs_mode = MAN_SR; + dram_dev_num = (mc_readl(emc->mc, MC_EMEM_ADR_CFG) & 0x1) + 1; + + emc_cfg_reg = emc_readl(emc, EMC_CFG); + emc_auto_cal_config = emc_readl(emc, EMC_AUTO_CAL_CONFIG); + + source_clock_period = 1000000000 / last_timing->rate; + destination_clock_period = 1000000000 / next_timing->rate; + + tZQCAL_lpddr4_fc_adj = (destination_clock_period > + zqcal_before_cc_cutoff) ? + tZQCAL_lpddr4 / destination_clock_period : + (tZQCAL_lpddr4 - tFC_lpddr4) / destination_clock_period; + emc_dbg_o = emc_readl(emc, EMC_DBG); + emc_pin_o = emc_readl(emc, EMC_PIN); + emc_cfg_pipe_clk_o = emc_readl(emc, EMC_CFG_PIPE_CLK); + emc_dbg = emc_dbg_o; + + emc_cfg = next_timing->burst_regs[EMC_CFG_INDEX]; + emc_cfg &= ~(EMC_CFG_DYN_SELF_REF | EMC_CFG_DRAM_ACPD | + EMC_CFG_DRAM_CLKSTOP_SR | EMC_CFG_DRAM_CLKSTOP_PD); + emc_sel_dpd_ctrl = next_timing->emc_sel_dpd_ctrl; + emc_sel_dpd_ctrl &= ~(EMC_SEL_DPD_CTRL_CLK_SEL_DPD_EN | + EMC_SEL_DPD_CTRL_CA_SEL_DPD_EN | + EMC_SEL_DPD_CTRL_RESET_SEL_DPD_EN | + EMC_SEL_DPD_CTRL_ODT_SEL_DPD_EN | + EMC_SEL_DPD_CTRL_DATA_SEL_DPD_EN); + + emc_cc_dbg(INFO, "Clock change version: %d\n", + DVFS_CLOCK_CHANGE_VERSION); + emc_cc_dbg(INFO, "DRAM type = %d\n", dram_type); + emc_cc_dbg(INFO, "DRAM dev #: %d\n", dram_dev_num); + emc_cc_dbg(INFO, "Next EMC clksrc: 0x%08x\n", clksrc); + emc_cc_dbg(INFO, "DLL clksrc: 0x%08x\n", next_timing->dll_clk_src); + emc_cc_dbg(INFO, "last rate: %u, next rate %u\n", last_timing->rate, + next_timing->rate); + emc_cc_dbg(INFO, "last period: %u, next period: %u\n", + source_clock_period, destination_clock_period); + emc_cc_dbg(INFO, " shared_zq_resistor: %d\n", !!shared_zq_resistor); + emc_cc_dbg(INFO, " channel_mode: %d\n", channel_mode); + emc_cc_dbg(INFO, " opt_dll_mode: %d\n", opt_dll_mode); + + /* + * Step 1: + * Pre DVFS SW sequence. + */ + emc_cc_dbg(STEPS, "Step 1\n"); + emc_cc_dbg(STEPS, "Step 1.1: Disable DLL temporarily.\n"); + tmp = emc_readl(emc, EMC_CFG_DIG_DLL); + tmp &= ~EMC_CFG_DIG_DLL_CFG_DLL_EN; + emc_writel(emc, tmp, EMC_CFG_DIG_DLL); + + emc_timing_update(emc, channel_mode); + emc_wait_for_update(emc, EMC_CFG_DIG_DLL, + EMC_CFG_DIG_DLL_CFG_DLL_EN, 0, REG_EMC); + if (channel_mode) + emc_wait_for_update(emc, EMC_CFG_DIG_DLL, + EMC_CFG_DIG_DLL_CFG_DLL_EN, 0, REG_EMC1); + + emc_cc_dbg(STEPS, "Step 1.2: Disable AUTOCAL temporarily.\n"); + emc_auto_cal_config = next_timing->emc_auto_cal_config; + auto_cal_en = emc_auto_cal_config & EMC_AUTO_CAL_CONFIG_AUTO_CAL_ENABLE; + emc_auto_cal_config &= ~EMC_AUTO_CAL_CONFIG_AUTO_CAL_START; + emc_auto_cal_config |= EMC_AUTO_CAL_CONFIG_AUTO_CAL_MEASURE_STALL; + emc_auto_cal_config |= EMC_AUTO_CAL_CONFIG_AUTO_CAL_UPDATE_STALL; + emc_auto_cal_config |= auto_cal_en; + emc_writel(emc, emc_auto_cal_config, EMC_AUTO_CAL_CONFIG); + emc_readl(emc, EMC_AUTO_CAL_CONFIG); /* Flush write. 
*/ + + emc_cc_dbg(STEPS, "Step 1.3: Disable other power features.\n"); + emc_set_shadow_bypass(emc, ACTIVE); + emc_writel(emc, emc_cfg, EMC_CFG); + emc_writel(emc, emc_sel_dpd_ctrl, EMC_SEL_DPD_CTRL); + emc_set_shadow_bypass(emc, ASSEMBLY); + + if (next_timing->periodic_training) { + tegra210_reset_dram_clktree_values(next_timing); + + emc_wait_for_update(emc, EMC_EMC_STATUS, + EMC_EMC_STATUS_DRAM_IN_POWERDOWN_MASK, 0, + REG_EMC); + if (channel_mode) + emc_wait_for_update(emc, EMC_EMC_STATUS, + EMC_EMC_STATUS_DRAM_IN_POWERDOWN_MASK, + 0, REG_EMC1); + + emc_wait_for_update(emc, EMC_EMC_STATUS, + EMC_EMC_STATUS_DRAM_IN_SELF_REFRESH_MASK, 0, + REG_EMC); + if (channel_mode) + emc_wait_for_update(emc, EMC_EMC_STATUS, + EMC_EMC_STATUS_DRAM_IN_SELF_REFRESH_MASK, 0, + REG_EMC1); + + tegra210_start_periodic_compensation(emc); + + udelay(((1000 * + tegra210_actual_osc_clocks(last_timing->run_clocks)) / + last_timing->rate) + 2); + adel = periodic_compensation_handler(emc, DVFS_SEQUENCE, + dram_dev_num, + channel_mode, + fake_timing, next_timing); + compensate_trimmer_applicable = + next_timing->periodic_training && + ((adel * 128 * next_timing_rate_mhz) / 1000000) > + next_timing->tree_margin; + } + + emc_writel(emc, EMC_INTSTATUS_CLKCHANGE_COMPLETE, EMC_INTSTATUS); + emc_set_shadow_bypass(emc, ACTIVE); + emc_writel(emc, emc_cfg, EMC_CFG); + emc_writel(emc, emc_sel_dpd_ctrl, EMC_SEL_DPD_CTRL); + emc_writel(emc, emc_cfg_pipe_clk_o | EMC_CFG_PIPE_CLK_CLK_ALWAYS_ON, + EMC_CFG_PIPE_CLK); + emc_writel(emc, next_timing->emc_fdpd_ctrl_cmd_no_ramp & + ~EMC_FDPD_CTRL_CMD_NO_RAMP_CMD_DPD_NO_RAMP_ENABLE, + EMC_FDPD_CTRL_CMD_NO_RAMP); + + bg_regulator_mode_change = + ((next_timing->burst_regs[EMC_PMACRO_BG_BIAS_CTRL_0_INDEX] & + EMC_PMACRO_BG_BIAS_CTRL_0_BGLP_E_PWRD) ^ + (last_timing->burst_regs[EMC_PMACRO_BG_BIAS_CTRL_0_INDEX] & + EMC_PMACRO_BG_BIAS_CTRL_0_BGLP_E_PWRD)) || + ((next_timing->burst_regs[EMC_PMACRO_BG_BIAS_CTRL_0_INDEX] & + EMC_PMACRO_BG_BIAS_CTRL_0_BG_E_PWRD) ^ + (last_timing->burst_regs[EMC_PMACRO_BG_BIAS_CTRL_0_INDEX] & + EMC_PMACRO_BG_BIAS_CTRL_0_BG_E_PWRD)); + enable_bglp_regulator = + (next_timing->burst_regs[EMC_PMACRO_BG_BIAS_CTRL_0_INDEX] & + EMC_PMACRO_BG_BIAS_CTRL_0_BGLP_E_PWRD) == 0; + enable_bg_regulator = + (next_timing->burst_regs[EMC_PMACRO_BG_BIAS_CTRL_0_INDEX] & + EMC_PMACRO_BG_BIAS_CTRL_0_BG_E_PWRD) == 0; + + if (bg_regulator_mode_change) { + if (enable_bg_regulator) + emc_writel(emc, last_timing->burst_regs + [EMC_PMACRO_BG_BIAS_CTRL_0_INDEX] & + ~EMC_PMACRO_BG_BIAS_CTRL_0_BG_E_PWRD, + EMC_PMACRO_BG_BIAS_CTRL_0); + else + emc_writel(emc, last_timing->burst_regs + [EMC_PMACRO_BG_BIAS_CTRL_0_INDEX] & + ~EMC_PMACRO_BG_BIAS_CTRL_0_BGLP_E_PWRD, + EMC_PMACRO_BG_BIAS_CTRL_0); + } + + /* Check if we need to turn on VREF generator. 
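+	 * It is enabled ahead of the switch whenever the next timing sets a
+	 * DQ or DQS internal VREF enable that the current timing has cleared;
+	 * the 1 us delay afterwards gives the pads time to settle.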
*/ + if ((((last_timing->burst_regs[EMC_PMACRO_DATA_PAD_TX_CTRL_INDEX] & + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQ_E_IVREF) == 0) && + ((next_timing->burst_regs[EMC_PMACRO_DATA_PAD_TX_CTRL_INDEX] & + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQ_E_IVREF) == 1)) || + (((last_timing->burst_regs[EMC_PMACRO_DATA_PAD_TX_CTRL_INDEX] & + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQS_E_IVREF) == 0) && + ((next_timing->burst_regs[EMC_PMACRO_DATA_PAD_TX_CTRL_INDEX] & + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQS_E_IVREF) == 1))) { + u32 pad_tx_ctrl = + next_timing->burst_regs[EMC_PMACRO_DATA_PAD_TX_CTRL_INDEX]; + u32 last_pad_tx_ctrl = + last_timing->burst_regs[EMC_PMACRO_DATA_PAD_TX_CTRL_INDEX]; + + next_dqs_e_ivref = pad_tx_ctrl & + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQS_E_IVREF; + next_dq_e_ivref = pad_tx_ctrl & + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQ_E_IVREF; + next_push = (last_pad_tx_ctrl & + ~EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQ_E_IVREF & + ~EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQS_E_IVREF) | + next_dq_e_ivref | next_dqs_e_ivref; + emc_writel(emc, next_push, EMC_PMACRO_DATA_PAD_TX_CTRL); + udelay(1); + } else if (bg_regulator_mode_change) { + udelay(1); + } + + emc_set_shadow_bypass(emc, ASSEMBLY); + + /* + * Step 2: + * Prelock the DLL. + */ + emc_cc_dbg(STEPS, "Step 2\n"); + if (next_timing->burst_regs[EMC_CFG_DIG_DLL_INDEX] & + EMC_CFG_DIG_DLL_CFG_DLL_EN) { + emc_cc_dbg(INFO, "Prelock enabled for target frequency.\n"); + dll_out = tegra210_dll_prelock(emc, 0, clksrc); + emc_cc_dbg(INFO, "DLL out: 0x%03x\n", dll_out); + prelock_dll_en = 1; + } else { + emc_cc_dbg(INFO, "Disabling DLL for target frequency.\n"); + tegra210_dll_disable(emc, channel_mode); + } + + /* + * Step 3: + * Prepare autocal for the clock change. + */ + emc_cc_dbg(STEPS, "Step 3\n"); + emc_set_shadow_bypass(emc, ACTIVE); + emc_writel(emc, next_timing->emc_auto_cal_config2, + EMC_AUTO_CAL_CONFIG2); + emc_writel(emc, next_timing->emc_auto_cal_config3, + EMC_AUTO_CAL_CONFIG3); + emc_writel(emc, next_timing->emc_auto_cal_config4, + EMC_AUTO_CAL_CONFIG4); + emc_writel(emc, next_timing->emc_auto_cal_config5, + EMC_AUTO_CAL_CONFIG5); + emc_writel(emc, next_timing->emc_auto_cal_config6, + EMC_AUTO_CAL_CONFIG6); + emc_writel(emc, next_timing->emc_auto_cal_config7, + EMC_AUTO_CAL_CONFIG7); + emc_writel(emc, next_timing->emc_auto_cal_config8, + EMC_AUTO_CAL_CONFIG8); + emc_set_shadow_bypass(emc, ASSEMBLY); + + emc_auto_cal_config |= (EMC_AUTO_CAL_CONFIG_AUTO_CAL_COMPUTE_START | + auto_cal_en); + emc_writel(emc, emc_auto_cal_config, EMC_AUTO_CAL_CONFIG); + + /* + * Step 4: + * Update EMC_CFG. (??) + */ + emc_cc_dbg(STEPS, "Step 4\n"); + if (source_clock_period > 50000 && dram_type == DRAM_TYPE_LPDDR4) + ccfifo_writel(emc, 1, EMC_SELF_REF, 0); + else + emc_writel(emc, next_timing->emc_cfg_2, EMC_CFG_2); + + /* + * Step 5: + * Prepare reference variables for ZQCAL regs. 
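+	 * zq_wait_long/zq_wait_short below are the calibration wait times
+	 * converted to destination clock cycles: roughly 1 us for LPDDR4,
+	 * 360 ns/90 ns for LPDDR2/LPDDR3 and 320 ns/80 ns for DDR3, with
+	 * per-type minimum cycle counts.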
+ */ + emc_cc_dbg(STEPS, "Step 5\n"); + emc_zcal_interval = 0; + emc_zcal_wait_cnt_old = + last_timing->burst_regs[EMC_ZCAL_WAIT_CNT_INDEX]; + emc_zcal_wait_cnt_new = + next_timing->burst_regs[EMC_ZCAL_WAIT_CNT_INDEX]; + emc_zcal_wait_cnt_old &= ~EMC_ZCAL_WAIT_CNT_ZCAL_WAIT_CNT_MASK; + emc_zcal_wait_cnt_new &= ~EMC_ZCAL_WAIT_CNT_ZCAL_WAIT_CNT_MASK; + + if (dram_type == DRAM_TYPE_LPDDR4) + zq_wait_long = max((u32)1, + div_o3(1000000, destination_clock_period)); + else if (dram_type == DRAM_TYPE_LPDDR2 || is_lpddr3) + zq_wait_long = max(next_timing->min_mrs_wait, + div_o3(360000, destination_clock_period)) + + 4; + else if (dram_type == DRAM_TYPE_DDR3) + zq_wait_long = max((u32)256, + div_o3(320000, destination_clock_period) + + 2); + else + zq_wait_long = 0; + + if (dram_type == DRAM_TYPE_LPDDR2 || is_lpddr3) + zq_wait_short = max(max(next_timing->min_mrs_wait, (u32)6), + div_o3(90000, destination_clock_period)) + + 4; + else if (dram_type == DRAM_TYPE_DDR3) + zq_wait_short = max((u32)64, + div_o3(80000, destination_clock_period)) + + 2; + else + zq_wait_short = 0; + + /* + * Step 6: + * Training code - removed. + */ + emc_cc_dbg(STEPS, "Step 6\n"); + + /* + * Step 7: + * Program FSP reference registers and send MRWs to new FSPWR. + */ + emc_cc_dbg(STEPS, "Step 7\n"); + emc_cc_dbg(SUB_STEPS, "Step 7.1: Bug 200024907 - Patch RP R2P"); + if (opt_war_200024907) { + nRTP = 16; + if (source_clock_period >= 1000000/1866) /* 535.91 ps */ + nRTP = 14; + if (source_clock_period >= 1000000/1600) /* 625.00 ps */ + nRTP = 12; + if (source_clock_period >= 1000000/1333) /* 750.19 ps */ + nRTP = 10; + if (source_clock_period >= 1000000/1066) /* 938.09 ps */ + nRTP = 8; + + deltaTWATM = max_t(u32, div_o3(7500, source_clock_period), 8); + + /* + * Originally there was a + .5 in the tRPST calculation. + * However since we can't do FP in the kernel and the tRTM + * computation was in a floating point ceiling function, adding + * one to tRTP should be ok. 
There is no other source of non + * integer values, so the result was always going to be + * something for the form: f_ceil(N + .5) = N + 1; + */ + tRPST = ((last_timing->emc_mrw & 0x80) >> 7); + tRTM = fake_timing->dram_timings[RL] + + div_o3(3600, source_clock_period) + + max_t(u32, div_o3(7500, source_clock_period), 8) + + tRPST + 1 + nRTP; + + emc_cc_dbg(INFO, "tRTM = %u, EMC_RP = %u\n", tRTM, + next_timing->burst_regs[EMC_RP_INDEX]); + + if (last_timing->burst_regs[EMC_RP_INDEX] < tRTM) { + if (tRTM > (last_timing->burst_regs[EMC_R2P_INDEX] + + last_timing->burst_regs[EMC_RP_INDEX])) { + R2P_war = tRTM - + last_timing->burst_regs[EMC_RP_INDEX]; + RP_war = last_timing->burst_regs[EMC_RP_INDEX]; + TRPab_war = last_timing->burst_regs[ + EMC_TRPAB_INDEX]; + if (R2P_war > 63) { + RP_war = R2P_war + + last_timing->burst_regs[ + EMC_RP_INDEX] - 63; + if (TRPab_war < RP_war) + TRPab_war = RP_war; + R2P_war = 63; + } + } else { + R2P_war = last_timing->burst_regs[ + EMC_R2P_INDEX]; + RP_war = last_timing->burst_regs[EMC_RP_INDEX]; + TRPab_war = last_timing->burst_regs[ + EMC_TRPAB_INDEX]; + } + + if (RP_war < deltaTWATM) { + W2P_war = last_timing->burst_regs[EMC_W2P_INDEX] + + deltaTWATM - RP_war; + if (W2P_war > 63) { + RP_war = RP_war + W2P_war - 63; + if (TRPab_war < RP_war) + TRPab_war = RP_war; + W2P_war = 63; + } + } else { + W2P_war = last_timing->burst_regs[ + EMC_W2P_INDEX]; + } + + if ((last_timing->burst_regs[EMC_W2P_INDEX] ^ + W2P_war) || + (last_timing->burst_regs[EMC_R2P_INDEX] ^ + R2P_war) || + (last_timing->burst_regs[EMC_RP_INDEX] ^ + RP_war) || + (last_timing->burst_regs[EMC_TRPAB_INDEX] ^ + TRPab_war)) { + emc_writel(emc, RP_war, EMC_RP); + emc_writel(emc, R2P_war, EMC_R2P); + emc_writel(emc, W2P_war, EMC_W2P); + emc_writel(emc, TRPab_war, EMC_TRPAB); + } + emc_timing_update(emc, DUAL_CHANNEL); + } else { + emc_cc_dbg(INFO, "Skipped WAR\n"); + } + } + + if (!fsp_for_next_freq) { + mr13_flip_fspwr = (next_timing->emc_mrw3 & 0xffffff3f) | 0x80; + mr13_flip_fspop = (next_timing->emc_mrw3 & 0xffffff3f) | 0x00; + } else { + mr13_flip_fspwr = (next_timing->emc_mrw3 & 0xffffff3f) | 0x40; + mr13_flip_fspop = (next_timing->emc_mrw3 & 0xffffff3f) | 0xc0; + } + + mr13_catr_enable = (mr13_flip_fspwr & 0xFFFFFFFE) | 0x01; + if (dram_dev_num == TWO_RANK) + mr13_catr_enable = (mr13_catr_enable & 0x3fffffff) | 0x80000000; + + if (dram_type == DRAM_TYPE_LPDDR4) { + emc_writel(emc, mr13_flip_fspwr, EMC_MRW3); + emc_writel(emc, next_timing->emc_mrw, EMC_MRW); + emc_writel(emc, next_timing->emc_mrw2, EMC_MRW2); + } + + /* + * Step 8: + * Program the shadow registers. + */ + emc_cc_dbg(STEPS, "Step 8\n"); + emc_cc_dbg(SUB_STEPS, "Writing burst_regs\n"); + for (i = 0; i < next_timing->num_burst; i++) { + u32 var; + u32 wval; + const u16 *burst_regs_off = ®_off.burst_regs_off[0]; + + if (!burst_regs_off[i]) + continue; + + var = burst_regs_off[i]; + wval = next_timing->burst_regs[i]; + + if (dram_type != DRAM_TYPE_LPDDR4 && + (var == EMC_MRW6 || var == EMC_MRW7 || + var == EMC_MRW8 || var == EMC_MRW9 || + var == EMC_MRW10 || var == EMC_MRW11 || + var == EMC_MRW12 || var == EMC_MRW13 || + var == EMC_MRW14 || var == EMC_MRW15 || + var == EMC_TRAINING_CTRL)) + continue; + + /* Pain... And suffering. 
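+		 * A few registers in the burst list cannot be copied
+		 * verbatim: power-down and self-refresh bits are masked out
+		 * of EMC_CFG, short ZQ wait counts are patched into the
+		 * wait-count registers, ZCAL_INTERVAL is parked at its reset
+		 * value, and the DCC enables are stripped from the pad TX
+		 * controls.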
*/ + if (var == EMC_CFG) { + wval &= ~EMC_CFG_DRAM_ACPD; + wval &= ~EMC_CFG_DYN_SELF_REF; + if (dram_type == DRAM_TYPE_LPDDR4) { + wval &= ~EMC_CFG_DRAM_CLKSTOP_SR; + wval &= ~EMC_CFG_DRAM_CLKSTOP_PD; + } + } else if (var == EMC_MRS_WAIT_CNT && + dram_type == DRAM_TYPE_LPDDR2 && + opt_zcal_en_cc && !opt_cc_short_zcal && + opt_short_zcal) { + wval = (wval & ~(EMC_MRS_WAIT_CNT_SHORT_WAIT_MASK << + EMC_MRS_WAIT_CNT_SHORT_WAIT_SHIFT)) | + ((zq_wait_long & EMC_MRS_WAIT_CNT_SHORT_WAIT_MASK) << + EMC_MRS_WAIT_CNT_SHORT_WAIT_SHIFT); + } else if (var == EMC_ZCAL_WAIT_CNT && + dram_type == DRAM_TYPE_DDR3 && opt_zcal_en_cc && + !opt_cc_short_zcal && opt_short_zcal) { + wval = (wval & ~(EMC_ZCAL_WAIT_CNT_ZCAL_WAIT_CNT_MASK << + EMC_ZCAL_WAIT_CNT_ZCAL_WAIT_CNT_SHIFT)) + | ((zq_wait_long & + EMC_ZCAL_WAIT_CNT_ZCAL_WAIT_CNT_MASK) << + EMC_MRS_WAIT_CNT_SHORT_WAIT_SHIFT); + } else if (var == EMC_ZCAL_INTERVAL && opt_zcal_en_cc) { + wval = 0; /* EMC_ZCAL_INTERVAL reset value. */ + } else if (var == EMC_PMACRO_AUTOCAL_CFG_COMMON) { + wval |= EMC_PMACRO_AUTOCAL_CFG_COMMON_E_CAL_BYPASS_DVFS; + } else if (var == EMC_PMACRO_DATA_PAD_TX_CTRL) { + wval &= + ~(EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQSP_TX_E_DCC | + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQSN_TX_E_DCC | + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQ_TX_E_DCC | + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_CMD_TX_E_DCC); + } else if (var == EMC_PMACRO_CMD_PAD_TX_CTRL) { + wval |= EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQ_TX_DRVFORCEON; + wval &= ~(EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQSP_TX_E_DCC | + EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQSN_TX_E_DCC | + EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQ_TX_E_DCC | + EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_CMD_TX_E_DCC); + } else if (var == EMC_PMACRO_BRICK_CTRL_RFU1) { + wval &= 0xf800f800; + } else if (var == EMC_PMACRO_COMMON_PAD_TX_CTRL) { + wval &= 0xfffffff0; + } + + emc_writel(emc, wval, var); + } + + /* SW addition: do EMC refresh adjustment here. */ + emc_set_over_temp_timing(emc, next_timing, dram_over_temp_state); + + if (dram_type == DRAM_TYPE_LPDDR4) { + mrw_req = (23 << EMC_MRW_MRW_MA_SHIFT) | + (next_timing->run_clocks & EMC_MRW_MRW_OP_MASK); + emc_writel(emc, mrw_req, EMC_MRW); + } + + /* Per channel burst registers. */ + emc_cc_dbg(SUB_STEPS, "Writing burst_regs_per_ch\n"); + for (i = 0; i < next_timing->num_burst_per_ch; i++) { + const struct per_ch_regs *burst_regs_per_ch_off = + ®_off.burst_regs_per_ch_off[0]; + + if (!burst_regs_per_ch_off[i].offset) + continue; + + if (dram_type != DRAM_TYPE_LPDDR4 && + (burst_regs_per_ch_off[i].offset == EMC_MRW6 || + burst_regs_per_ch_off[i].offset == EMC_MRW7 || + burst_regs_per_ch_off[i].offset == EMC_MRW8 || + burst_regs_per_ch_off[i].offset == EMC_MRW9 || + burst_regs_per_ch_off[i].offset == EMC_MRW10 || + burst_regs_per_ch_off[i].offset == EMC_MRW11 || + burst_regs_per_ch_off[i].offset == EMC_MRW12 || + burst_regs_per_ch_off[i].offset == EMC_MRW13 || + burst_regs_per_ch_off[i].offset == EMC_MRW14 || + burst_regs_per_ch_off[i].offset == EMC_MRW15)) + continue; + + /* Filter out second channel if not in DUAL_CHANNEL mode. */ + if (channel_mode != DUAL_CHANNEL && + burst_regs_per_ch_off[i].bank >= REG_EMC1) + continue; + + emc_cc_dbg(REG_LISTS, "(%u) 0x%08x => 0x%08x\n", + i, next_timing->burst_reg_per_ch[i], + burst_regs_per_ch_off[i].offset); + emc_writel_per_ch(emc, next_timing->burst_reg_per_ch[i], + burst_regs_per_ch_off[i].bank, + burst_regs_per_ch_off[i].offset); + } + + /* Vref regs. 
*/ + emc_cc_dbg(SUB_STEPS, "Writing vref_regs\n"); + for (i = 0; i < next_timing->vref_num; i++) { + const struct per_ch_regs *vref_regs_per_ch_off = + ®_off.vref_regs_per_ch_off[0]; + + if (!vref_regs_per_ch_off[i].offset) + continue; + + if (channel_mode != DUAL_CHANNEL && + vref_regs_per_ch_off[i].bank >= REG_EMC1) + continue; + + emc_cc_dbg(REG_LISTS, "(%u) 0x%08x => 0x%08x\n", + i, next_timing->vref_perch_regs[i], + vref_regs_per_ch_off[i].offset); + emc_writel_per_ch(emc, next_timing->vref_perch_regs[i], + vref_regs_per_ch_off[i].bank, + vref_regs_per_ch_off[i].offset); + } + + /* Trimmers. */ + emc_cc_dbg(SUB_STEPS, "Writing trim_regs\n"); + for (i = 0; i < next_timing->num_trim; i++) { + u64 trim_reg; + const u16 *trim_regs_off = ®_off.trim_regs_off[0]; + + if (!trim_regs_off[i]) + continue; + + trim_reg = trim_regs_off[i]; + if (compensate_trimmer_applicable && + (trim_reg == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0 || + trim_reg == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1 || + trim_reg == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2 || + trim_reg == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3 || + trim_reg == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0 || + trim_reg == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1 || + trim_reg == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2 || + trim_reg == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3 || + trim_reg == EMC_DATA_BRLSHFT_0 || + trim_reg == EMC_DATA_BRLSHFT_1)) { + u32 reg = tegra210_apply_periodic_compensation_trimmer( + next_timing, trim_reg); + emc_cc_dbg(REG_LISTS, "(%u) 0x%08x => 0x%08x\n", i, reg, + trim_regs_off[i]); + emc_cc_dbg(EMA_WRITES, "0x%08x <= 0x%08x\n", + (u32)(u64)trim_regs_off[i], reg); + emc_writel(emc, reg, trim_regs_off[i]); + } else { + emc_cc_dbg(REG_LISTS, "(%u) 0x%08x => 0x%08x\n", + i, next_timing->trim_regs[i], + trim_regs_off[i]); + emc_writel(emc, next_timing->trim_regs[i], + trim_regs_off[i]); + } + } + + /* Per channel trimmers. 
*/ + emc_cc_dbg(SUB_STEPS, "Writing trim_regs_per_ch\n"); + for (i = 0; i < next_timing->num_trim_per_ch; i++) { + u32 trim_reg; + const struct per_ch_regs *trim_regs_per_ch_off = + ®_off.trim_regs_per_ch_off[0]; + + if (!trim_regs_per_ch_off[i].offset) + continue; + + if (channel_mode != DUAL_CHANNEL && + trim_regs_per_ch_off[i].bank >= REG_EMC1) + continue; + + trim_reg = trim_regs_per_ch_off[i].offset; + if (compensate_trimmer_applicable && + (trim_reg == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0 || + trim_reg == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1 || + trim_reg == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2 || + trim_reg == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3 || + trim_reg == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0 || + trim_reg == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1 || + trim_reg == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2 || + trim_reg == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3 || + trim_reg == EMC_DATA_BRLSHFT_0 || + trim_reg == EMC_DATA_BRLSHFT_1)) { + u32 reg = + tegra210_apply_periodic_compensation_trimmer( + next_timing, trim_reg); + emc_cc_dbg(REG_LISTS, "(%u) 0x%08x => 0x%08x\n", + i, reg, trim_regs_per_ch_off[i].offset); + emc_cc_dbg(EMA_WRITES, "0x%08x <= 0x%08x\n", + trim_regs_per_ch_off[i].offset, reg); + emc_writel_per_ch(emc, reg, + trim_regs_per_ch_off[i].bank, + trim_regs_per_ch_off[i].offset); + } else { + emc_cc_dbg(REG_LISTS, "(%u) 0x%08x => 0x%08x\n", + i, next_timing->trim_perch_regs[i], + trim_regs_per_ch_off[i].offset); + emc_writel_per_ch(emc, next_timing->trim_perch_regs[i], + trim_regs_per_ch_off[i].bank, + trim_regs_per_ch_off[i].offset); + } + } + + emc_cc_dbg(SUB_STEPS, "Writing burst_mc_regs\n"); + for (i = 0; i < next_timing->num_mc_regs; i++) { + emc_cc_dbg(REG_LISTS, "(%u) 0x%08x => 0x%08x\n", + i, next_timing->burst_mc_regs[i], + reg_off.burst_mc_regs_off[i]); + mc_writel(emc->mc, next_timing->burst_mc_regs[i], + reg_off.burst_mc_regs_off[i]); + } + + /* Registers to be programmed on the faster clock. */ + if (next_timing->rate < last_timing->rate) { + emc_cc_dbg(SUB_STEPS, "Writing la_scale_regs\n"); + for (i = 0; i < next_timing->num_up_down; i++) { + emc_cc_dbg(REG_LISTS, "(%u) 0x%08x => 0x%08x\n", + i, next_timing->la_scale_regs[i], + reg_off.la_scale_regs_off[i]); + mc_writel(emc->mc, next_timing->la_scale_regs[i], + reg_off.la_scale_regs_off[i]); + } + } + + /* Flush all the burst register writes. */ + mc_readl(emc->mc, MC_EMEM_ADR_CFG); + + /* + * Step 9: + * LPDDR4 section A. + */ + emc_cc_dbg(STEPS, "Step 9\n"); + if (dram_type == DRAM_TYPE_LPDDR4) { + emc_writel(emc, emc_zcal_interval, EMC_ZCAL_INTERVAL); + emc_writel(emc, emc_zcal_wait_cnt_new, EMC_ZCAL_WAIT_CNT); + + emc_dbg |= (EMC_DBG_WRITE_MUX_ACTIVE | + EMC_DBG_WRITE_ACTIVE_ONLY); + + emc_writel(emc, emc_dbg, EMC_DBG); + emc_writel(emc, emc_zcal_interval, EMC_ZCAL_INTERVAL); + emc_writel(emc, emc_dbg_o, EMC_DBG); + } + + /* + * Step 10: + * LPDDR4 and DDR3 common section. 
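+	 * The DRAM is put into self-refresh through the CCFIFO here and, for
+	 * LPDDR4 with a destination period short enough to allow ZQ
+	 * calibration before the change, the opposite-FSP mode registers and
+	 * a ZQ_CAL start command are queued as well, before the CKE pins are
+	 * dropped.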
+ */ + emc_cc_dbg(STEPS, "Step 10\n"); + if (opt_dvfs_mode == MAN_SR || dram_type == DRAM_TYPE_LPDDR4) { + if (dram_type == DRAM_TYPE_LPDDR4) + ccfifo_writel(emc, 0x101, EMC_SELF_REF, 0); + else + ccfifo_writel(emc, 0x1, EMC_SELF_REF, 0); + + if (dram_type == DRAM_TYPE_LPDDR4 && + destination_clock_period <= zqcal_before_cc_cutoff) { + ccfifo_writel(emc, mr13_flip_fspwr ^ 0x40, EMC_MRW3, 0); + ccfifo_writel(emc, + (next_timing->burst_regs[EMC_MRW6_INDEX] & + 0xFFFF3F3F) | + (last_timing->burst_regs[EMC_MRW6_INDEX] & + 0x0000C0C0), EMC_MRW6, 0); + ccfifo_writel(emc, + (next_timing->burst_regs[EMC_MRW14_INDEX] + & 0xFFFF0707) | + (last_timing->burst_regs[EMC_MRW14_INDEX] + & 0x00003838), EMC_MRW14, 0); + + if (dram_dev_num == TWO_RANK) { + ccfifo_writel(emc, + (next_timing->burst_regs[EMC_MRW7_INDEX] & + 0xFFFF3F3F) | + (last_timing->burst_regs[EMC_MRW7_INDEX] & + 0x0000C0C0), EMC_MRW7, 0); + ccfifo_writel(emc, + (next_timing->burst_regs[EMC_MRW15_INDEX] & + 0xFFFF0707) | + (last_timing->burst_regs[EMC_MRW15_INDEX] & + 0x00003838), EMC_MRW15, 0); + } + + if (opt_zcal_en_cc) { + if (dram_dev_num == ONE_RANK) + ccfifo_writel(emc, + 2UL << EMC_ZQ_CAL_DEV_SEL_SHIFT + | EMC_ZQ_CAL_ZQ_CAL_CMD, + EMC_ZQ_CAL, 0); + else if (shared_zq_resistor) + ccfifo_writel(emc, + 2UL << EMC_ZQ_CAL_DEV_SEL_SHIFT + | EMC_ZQ_CAL_ZQ_CAL_CMD, + EMC_ZQ_CAL, 0); + else + ccfifo_writel(emc, + EMC_ZQ_CAL_ZQ_CAL_CMD, + EMC_ZQ_CAL, 0); + } + } + } + + emc_dbg = emc_dbg_o; + if (dram_type == DRAM_TYPE_LPDDR4) { + ccfifo_writel(emc, mr13_flip_fspop | 0x8, EMC_MRW3, + (1000 * fake_timing->dram_timings[T_RP]) / + source_clock_period); + ccfifo_writel(emc, 0, 0, tFC_lpddr4 / source_clock_period); + } + + if (dram_type == DRAM_TYPE_LPDDR4 || opt_dvfs_mode != MAN_SR) { + u32 t = 30 + (cya_allow_ref_cc ? + (4000 * fake_timing->dram_timings[T_RFC]) + + ((1000 * fake_timing->dram_timings[T_RP]) / + source_clock_period) : 0); + + ccfifo_writel(emc, emc_pin_o & ~(EMC_PIN_PIN_CKE_PER_DEV | + EMC_PIN_PIN_CKEB | EMC_PIN_PIN_CKE), + EMC_PIN, t); + } + + ref_delay_mult = 1; + ref_b4_sref_en = 0; + cya_issue_pc_ref = 0; + + ref_delay_mult += ref_b4_sref_en ? 1 : 0; + ref_delay_mult += cya_allow_ref_cc ? 1 : 0; + ref_delay_mult += cya_issue_pc_ref ? 1 : 0; + ref_delay = ref_delay_mult * + ((1000 * fake_timing->dram_timings[T_RP] / + source_clock_period) + + (1000 * fake_timing->dram_timings[T_RFC] / + source_clock_period)) + 20; + + /* + * Step 11: + * Ramp down. + */ + emc_cc_dbg(STEPS, "Step 11\n"); + ccfifo_writel(emc, 0x0, EMC_CFG_SYNC, + dram_type == DRAM_TYPE_LPDDR4 ? 0 : ref_delay); + + emc_dbg_active = emc_dbg | (EMC_DBG_WRITE_MUX_ACTIVE | + EMC_DBG_WRITE_ACTIVE_ONLY); + ccfifo_writel(emc, emc_dbg_active, EMC_DBG, 0); + + ramp_down_wait = tegra210_dvfs_power_ramp_down(emc, + source_clock_period, 0); + + /* + * Step 12: + * And finally - trigger the clock change. + */ + emc_cc_dbg(STEPS, "Step 12\n"); + ccfifo_writel(emc, 1, EMC_STALL_THEN_EXE_AFTER_CLKCHANGE, 0); + emc_dbg_active &= ~EMC_DBG_WRITE_ACTIVE_ONLY; + ccfifo_writel(emc, emc_dbg_active, EMC_DBG, 0); + + /* + * Step 13: + * Ramp up. + */ + emc_cc_dbg(STEPS, "Step 13\n"); + ramp_up_wait = tegra210_dvfs_power_ramp_up(emc, + destination_clock_period, 0); + ccfifo_writel(emc, emc_dbg, EMC_DBG, 0); + + /* + * Step 14: + * Bringup CKE pins. 
+ */ + emc_cc_dbg(STEPS, "Step 14\n"); + if (dram_type == DRAM_TYPE_LPDDR4) { + u32 r = emc_pin_o | EMC_PIN_PIN_CKE; + + if (dram_dev_num == TWO_RANK) + ccfifo_writel(emc, r | EMC_PIN_PIN_CKEB | + EMC_PIN_PIN_CKE_PER_DEV, EMC_PIN, 0); + else + ccfifo_writel(emc, r & ~(EMC_PIN_PIN_CKEB | + EMC_PIN_PIN_CKE_PER_DEV), EMC_PIN, 0); + } + + /* + * Step 15: (two step 15s ??) + * Calculate zqlatch wait time; has dependency on ramping times. + */ + emc_cc_dbg(STEPS, "Step 15\n"); + + if (destination_clock_period <= zqcal_before_cc_cutoff) { + s32 t = (s32)(ramp_up_wait + ramp_down_wait) / + (s32)destination_clock_period; + zq_latch_dvfs_wait_time = (s32)tZQCAL_lpddr4_fc_adj - t; + } else { + zq_latch_dvfs_wait_time = tZQCAL_lpddr4_fc_adj - + div_o3(1000 * next_timing->dram_timings[T_PDEX], + destination_clock_period); + } + + emc_cc_dbg(INFO, "tZQCAL_lpddr4_fc_adj = %u\n", tZQCAL_lpddr4_fc_adj); + emc_cc_dbg(INFO, "destination_clock_period = %u\n", + destination_clock_period); + emc_cc_dbg(INFO, "next_timing->dram_timings[T_PDEX] = %u\n", + next_timing->dram_timings[T_PDEX]); + emc_cc_dbg(INFO, "zq_latch_dvfs_wait_time = %d\n", + max_t(s32, 0, zq_latch_dvfs_wait_time)); + + if (dram_type == DRAM_TYPE_LPDDR4 && opt_zcal_en_cc) { + if (dram_dev_num == ONE_RANK) { + if (destination_clock_period > zqcal_before_cc_cutoff) + ccfifo_writel(emc, + 2UL << EMC_ZQ_CAL_DEV_SEL_SHIFT | + EMC_ZQ_CAL_ZQ_CAL_CMD, EMC_ZQ_CAL, + div_o3(1000 * + next_timing->dram_timings + [T_PDEX], + destination_clock_period)); + + ccfifo_writel(emc, (mr13_flip_fspop & 0xFFFFFFF7) | + 0x0C000000, EMC_MRW3, + div_o3(1000 * + next_timing->dram_timings[T_PDEX], + destination_clock_period)); + ccfifo_writel(emc, 0, EMC_SELF_REF, 0); + ccfifo_writel(emc, 0, EMC_REF, 0); + ccfifo_writel(emc, 2UL << EMC_ZQ_CAL_DEV_SEL_SHIFT | + EMC_ZQ_CAL_ZQ_LATCH_CMD, + EMC_ZQ_CAL, + max_t(s32, 0, zq_latch_dvfs_wait_time)); + } else if (shared_zq_resistor) { + if (destination_clock_period > zqcal_before_cc_cutoff) + ccfifo_writel(emc, + 2UL << EMC_ZQ_CAL_DEV_SEL_SHIFT | + EMC_ZQ_CAL_ZQ_CAL_CMD, EMC_ZQ_CAL, + div_o3(1000 * + next_timing->dram_timings + [T_PDEX], + destination_clock_period)); + + ccfifo_writel(emc, 2UL << EMC_ZQ_CAL_DEV_SEL_SHIFT | + EMC_ZQ_CAL_ZQ_LATCH_CMD, EMC_ZQ_CAL, + max_t(s32, 0, zq_latch_dvfs_wait_time) + + div_o3(1000 * + next_timing->dram_timings[T_PDEX], + destination_clock_period)); + ccfifo_writel(emc, 1UL << EMC_ZQ_CAL_DEV_SEL_SHIFT | + EMC_ZQ_CAL_ZQ_LATCH_CMD, + EMC_ZQ_CAL, 0); + + ccfifo_writel(emc, (mr13_flip_fspop & 0xfffffff7) | + 0x0c000000, EMC_MRW3, 0); + ccfifo_writel(emc, 0, EMC_SELF_REF, 0); + ccfifo_writel(emc, 0, EMC_REF, 0); + + ccfifo_writel(emc, 1UL << EMC_ZQ_CAL_DEV_SEL_SHIFT | + EMC_ZQ_CAL_ZQ_LATCH_CMD, EMC_ZQ_CAL, + tZQCAL_lpddr4 / destination_clock_period); + } else { + if (destination_clock_period > zqcal_before_cc_cutoff) + ccfifo_writel(emc, EMC_ZQ_CAL_ZQ_CAL_CMD, + EMC_ZQ_CAL, + div_o3(1000 * + next_timing->dram_timings + [T_PDEX], + destination_clock_period)); + + ccfifo_writel(emc, (mr13_flip_fspop & 0xfffffff7) | + 0x0c000000, EMC_MRW3, + div_o3(1000 * + next_timing->dram_timings[T_PDEX], + destination_clock_period)); + ccfifo_writel(emc, 0, EMC_SELF_REF, 0); + ccfifo_writel(emc, 0, EMC_REF, 0); + + ccfifo_writel(emc, EMC_ZQ_CAL_ZQ_LATCH_CMD, EMC_ZQ_CAL, + max_t(s32, 0, zq_latch_dvfs_wait_time)); + } + } + + /* WAR: delay for zqlatch */ + ccfifo_writel(emc, 0, 0, 10); + + /* + * Step 16: + * LPDDR4 Conditional Training Kickoff. Removed. + */ + + /* + * Step 17: + * MANSR exit self refresh. 
+ */ + emc_cc_dbg(STEPS, "Step 17\n"); + if (opt_dvfs_mode == MAN_SR && dram_type != DRAM_TYPE_LPDDR4) + ccfifo_writel(emc, 0, EMC_SELF_REF, 0); + + /* + * Step 18: + * Send MRWs to LPDDR3/DDR3. + */ + emc_cc_dbg(STEPS, "Step 18\n"); + if (dram_type == DRAM_TYPE_LPDDR2) { + ccfifo_writel(emc, next_timing->emc_mrw2, EMC_MRW2, 0); + ccfifo_writel(emc, next_timing->emc_mrw, EMC_MRW, 0); + if (is_lpddr3) + ccfifo_writel(emc, next_timing->emc_mrw4, EMC_MRW4, 0); + } else if (dram_type == DRAM_TYPE_DDR3) { + if (opt_dll_mode == DLL_ON) + ccfifo_writel(emc, next_timing->emc_emrs & + ~EMC_EMRS_USE_EMRS_LONG_CNT, EMC_EMRS, 0); + ccfifo_writel(emc, next_timing->emc_emrs2 & + ~EMC_EMRS2_USE_EMRS2_LONG_CNT, EMC_EMRS2, 0); + ccfifo_writel(emc, next_timing->emc_mrs | + EMC_EMRS_USE_EMRS_LONG_CNT, EMC_MRS, 0); + } + + /* + * Step 19: + * ZQCAL for LPDDR3/DDR3 + */ + emc_cc_dbg(STEPS, "Step 19\n"); + if (opt_zcal_en_cc) { + if (dram_type == DRAM_TYPE_LPDDR2) { + u32 r; + + zq_op = opt_cc_short_zcal ? 0x56 : 0xAB; + zcal_wait_time_ps = opt_cc_short_zcal ? 90000 : 360000; + zcal_wait_time_clocks = div_o3(zcal_wait_time_ps, + destination_clock_period); + r = zcal_wait_time_clocks << + EMC_MRS_WAIT_CNT2_MRS_EXT2_WAIT_CNT_SHIFT | + zcal_wait_time_clocks << + EMC_MRS_WAIT_CNT2_MRS_EXT1_WAIT_CNT_SHIFT; + ccfifo_writel(emc, r, EMC_MRS_WAIT_CNT2, 0); + ccfifo_writel(emc, 2 << EMC_MRW_MRW_DEV_SELECTN_SHIFT | + EMC_MRW_USE_MRW_EXT_CNT | + 10 << EMC_MRW_MRW_MA_SHIFT | + zq_op << EMC_MRW_MRW_OP_SHIFT, + EMC_MRW, 0); + if (dram_dev_num == TWO_RANK) { + r = 1 << EMC_MRW_MRW_DEV_SELECTN_SHIFT | + EMC_MRW_USE_MRW_EXT_CNT | + 10 << EMC_MRW_MRW_MA_SHIFT | + zq_op << EMC_MRW_MRW_OP_SHIFT; + ccfifo_writel(emc, r, EMC_MRW, 0); + } + } else if (dram_type == DRAM_TYPE_DDR3) { + zq_op = opt_cc_short_zcal ? 0 : EMC_ZQ_CAL_LONG; + ccfifo_writel(emc, zq_op | 2 << + EMC_ZQ_CAL_DEV_SEL_SHIFT | + EMC_ZQ_CAL_ZQ_CAL_CMD, EMC_ZQ_CAL, 0); + if (dram_dev_num == TWO_RANK) + ccfifo_writel(emc, zq_op | + 1 << EMC_ZQ_CAL_DEV_SEL_SHIFT | + EMC_ZQ_CAL_ZQ_CAL_CMD, + EMC_ZQ_CAL, 0); + } + } + + if (bg_regulator_mode_change) { + emc_set_shadow_bypass(emc, ACTIVE); + bg_regulator_switch_complete_wait_clks = + ramp_up_wait > 1250000 ? 0 : + (1250000 - ramp_up_wait) / destination_clock_period; + ccfifo_writel(emc, next_timing->burst_regs + [EMC_PMACRO_BG_BIAS_CTRL_0_INDEX], + EMC_PMACRO_BG_BIAS_CTRL_0, + bg_regulator_switch_complete_wait_clks); + emc_set_shadow_bypass(emc, ASSEMBLY); + } + + /* + * Step 20: + * Issue ref and optional QRST. + */ + emc_cc_dbg(STEPS, "Step 20\n"); + if (dram_type != DRAM_TYPE_LPDDR4) + ccfifo_writel(emc, 0, EMC_REF, 0); + + if (opt_do_sw_qrst) { + ccfifo_writel(emc, 1, EMC_ISSUE_QRST, 0); + ccfifo_writel(emc, 0, EMC_ISSUE_QRST, 2); + } + + /* + * Step 21: + * Restore ZCAL and ZCAL interval. + */ + emc_cc_dbg(STEPS, "Step 21\n"); + if (save_restore_clkstop_pd || opt_zcal_en_cc) { + ccfifo_writel(emc, emc_dbg_o | EMC_DBG_WRITE_MUX_ACTIVE, + EMC_DBG, 0); + if (opt_zcal_en_cc && dram_type != DRAM_TYPE_LPDDR4) + ccfifo_writel(emc, next_timing->burst_regs[ + EMC_ZCAL_INTERVAL_INDEX], + EMC_ZCAL_INTERVAL, 0); + + if (save_restore_clkstop_pd) + ccfifo_writel(emc, + next_timing->burst_regs[EMC_CFG_INDEX] & + ~EMC_CFG_DYN_SELF_REF, EMC_CFG, 0); + ccfifo_writel(emc, emc_dbg_o, EMC_DBG, 0); + } + + /* + * Step 22: + * Restore EMC_CFG_PIPE_CLK. 
+ */ + emc_cc_dbg(STEPS, "Step 22\n"); + ccfifo_writel(emc, emc_cfg_pipe_clk_o, EMC_CFG_PIPE_CLK, 0); + + if (bg_regulator_mode_change) { + if (enable_bg_regulator) + emc_writel(emc, next_timing->burst_regs[ + EMC_PMACRO_BG_BIAS_CTRL_0_INDEX] & + ~EMC_PMACRO_BG_BIAS_CTRL_0_BGLP_E_PWRD, + EMC_PMACRO_BG_BIAS_CTRL_0); + else + emc_writel(emc, next_timing->burst_regs[ + EMC_PMACRO_BG_BIAS_CTRL_0_INDEX] & + ~EMC_PMACRO_BG_BIAS_CTRL_0_BG_E_PWRD, + EMC_PMACRO_BG_BIAS_CTRL_0); + } + + /* + * Step 23: + */ + emc_cc_dbg(STEPS, "Step 23\n"); + + tmp = emc_readl(emc, EMC_CFG_DIG_DLL); + tmp |= EMC_CFG_DIG_DLL_CFG_DLL_STALL_ALL_TRAFFIC; + tmp &= ~EMC_CFG_DIG_DLL_CFG_DLL_STALL_RW_UNTIL_LOCK; + tmp &= ~EMC_CFG_DIG_DLL_CFG_DLL_STALL_ALL_UNTIL_LOCK; + tmp &= ~EMC_CFG_DIG_DLL_CFG_DLL_EN; + tmp = (tmp & ~EMC_CFG_DIG_DLL_CFG_DLL_MODE_MASK) | + (2 << EMC_CFG_DIG_DLL_CFG_DLL_MODE_SHIFT); + emc_writel(emc, tmp, EMC_CFG_DIG_DLL); + + emc_do_clock_change(emc, clksrc); + + /* + * Step 24: + * Save training results. Removed. + */ + + /* + * Step 25: + * Program MC updown registers. + */ + emc_cc_dbg(STEPS, "Step 25\n"); + + if (next_timing->rate > last_timing->rate) { + for (i = 0; i < next_timing->num_up_down; i++) + mc_writel(emc->mc, next_timing->la_scale_regs[i], + reg_off.la_scale_regs_off[i]); + emc_timing_update(emc, channel_mode); + } + + /* + * Step 26: + * Restore ZCAL registers. + */ + emc_cc_dbg(STEPS, "Step 26\n"); + if (dram_type == DRAM_TYPE_LPDDR4) { + emc_set_shadow_bypass(emc, ACTIVE); + emc_writel(emc, next_timing->burst_regs[ + EMC_ZCAL_WAIT_CNT_INDEX], EMC_ZCAL_WAIT_CNT); + emc_writel(emc, next_timing->burst_regs[ + EMC_ZCAL_INTERVAL_INDEX], EMC_ZCAL_INTERVAL); + emc_set_shadow_bypass(emc, ASSEMBLY); + } + + if (dram_type != DRAM_TYPE_LPDDR4 && + opt_zcal_en_cc && !opt_short_zcal && opt_cc_short_zcal) { + udelay(2); + + emc_set_shadow_bypass(emc, ACTIVE); + if (dram_type == DRAM_TYPE_LPDDR2) + emc_writel(emc, next_timing->burst_regs[ + EMC_MRS_WAIT_CNT_INDEX], EMC_MRS_WAIT_CNT); + else if (dram_type == DRAM_TYPE_DDR3) + emc_writel(emc, next_timing->burst_regs[ + EMC_ZCAL_WAIT_CNT_INDEX], EMC_ZCAL_WAIT_CNT); + emc_set_shadow_bypass(emc, ASSEMBLY); + } + + /* + * Step 27: + * Restore EMC_CFG, FDPD registers. + */ + emc_cc_dbg(STEPS, "Step 27\n"); + emc_set_shadow_bypass(emc, ACTIVE); + emc_writel(emc, next_timing->burst_regs[EMC_CFG_INDEX], EMC_CFG); + emc_set_shadow_bypass(emc, ASSEMBLY); + emc_writel(emc, next_timing->emc_fdpd_ctrl_cmd_no_ramp, + EMC_FDPD_CTRL_CMD_NO_RAMP); + emc_writel(emc, next_timing->emc_sel_dpd_ctrl, EMC_SEL_DPD_CTRL); + + /* + * Step 28: + * Training recover. Removed. + */ + emc_cc_dbg(STEPS, "Step 28\n"); + + emc_set_shadow_bypass(emc, ACTIVE); + emc_writel(emc, + next_timing->burst_regs[EMC_PMACRO_AUTOCAL_CFG_COMMON_INDEX], + EMC_PMACRO_AUTOCAL_CFG_COMMON); + emc_set_shadow_bypass(emc, ASSEMBLY); + + /* + * Step 29: + * Power fix WAR. 
+ */ + emc_cc_dbg(STEPS, "Step 29\n"); + emc_writel(emc, EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE0 | + EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE1 | + EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE2 | + EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE3 | + EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE4 | + EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE5 | + EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE6 | + EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE7, + EMC_PMACRO_CFG_PM_GLOBAL_0); + emc_writel(emc, EMC_PMACRO_TRAINING_CTRL_0_CH0_TRAINING_E_WRPTR, + EMC_PMACRO_TRAINING_CTRL_0); + emc_writel(emc, EMC_PMACRO_TRAINING_CTRL_1_CH1_TRAINING_E_WRPTR, + EMC_PMACRO_TRAINING_CTRL_1); + emc_writel(emc, 0, EMC_PMACRO_CFG_PM_GLOBAL_0); + + /* + * Step 30: + * Re-enable autocal. + */ + emc_cc_dbg(STEPS, "Step 30: Re-enable DLL and AUTOCAL\n"); + if (next_timing->burst_regs[EMC_CFG_DIG_DLL_INDEX] & + EMC_CFG_DIG_DLL_CFG_DLL_EN) { + tmp = emc_readl(emc, EMC_CFG_DIG_DLL); + tmp |= EMC_CFG_DIG_DLL_CFG_DLL_STALL_ALL_TRAFFIC; + tmp |= EMC_CFG_DIG_DLL_CFG_DLL_EN; + tmp &= ~EMC_CFG_DIG_DLL_CFG_DLL_STALL_RW_UNTIL_LOCK; + tmp &= ~EMC_CFG_DIG_DLL_CFG_DLL_STALL_ALL_UNTIL_LOCK; + tmp = (tmp & ~EMC_CFG_DIG_DLL_CFG_DLL_MODE_MASK) | + (2 << EMC_CFG_DIG_DLL_CFG_DLL_MODE_SHIFT); + emc_writel(emc, tmp, EMC_CFG_DIG_DLL); + emc_timing_update(emc, channel_mode); + } + + emc_auto_cal_config = next_timing->emc_auto_cal_config; + emc_writel(emc, emc_auto_cal_config, EMC_AUTO_CAL_CONFIG); + + /* Done! Yay. */ +} diff --git a/drivers/memory/tegra/tegra210-emc.c b/drivers/memory/tegra/tegra210-emc.c index 6eb26cdd9679..271a0d017073 100644 --- a/drivers/memory/tegra/tegra210-emc.c +++ b/drivers/memory/tegra/tegra210-emc.c @@ -126,6 +126,11 @@ static const char *emc_src_names[TEGRA_EMC_SRC_COUNT] = { }; static const struct supported_sequence supported_seqs[] = { + { + 0x7, + emc_set_clock_r21021, + __do_periodic_emc_compensation_r21021, + }, { 0, NULL, diff --git a/drivers/memory/tegra/tegra210-emc.h b/drivers/memory/tegra/tegra210-emc.h index 494f1cbdb289..621ec53ea02f 100644 --- a/drivers/memory/tegra/tegra210-emc.h +++ b/drivers/memory/tegra/tegra210-emc.h @@ -81,7 +81,16 @@ #define EMC_INTSTATUS_CLKCHANGE_COMPLETE BIT(4) #define EMC_DBG 0x8 #define EMC_DBG_WRITE_MUX_ACTIVE BIT(1) +#define EMC_DBG_WRITE_ACTIVE_ONLY BIT(30) #define EMC_CFG 0xc +#define EMC_CFG_DRAM_CLKSTOP_PD BIT(31) +#define EMC_CFG_DRAM_CLKSTOP_SR BIT(30) +#define EMC_CFG_DRAM_ACPD BIT(29) +#define EMC_CFG_DYN_SELF_REF BIT(28) +#define EMC_PIN 0x24 +#define EMC_PIN_PIN_CKE BIT(0) +#define EMC_PIN_PIN_CKEB BIT(1) +#define EMC_PIN_PIN_CKE_PER_DEV BIT(2) #define EMC_TIMING_CONTROL 0x28 #define EMC_RC 0x2c #define EMC_RFC 0x30 @@ -121,7 +130,35 @@ #define EMC_WEXT 0xb8 #define EMC_RFC_SLR 0xc0 #define EMC_MRS_WAIT_CNT2 0xc4 +#define EMC_MRS_WAIT_CNT2_MRS_EXT2_WAIT_CNT_SHIFT 16 +#define EMC_MRS_WAIT_CNT2_MRS_EXT1_WAIT_CNT_SHIFT 0 #define EMC_MRS_WAIT_CNT 0xc8 +#define EMC_MRS_WAIT_CNT_SHORT_WAIT_SHIFT 0 +#define EMC_MRS_WAIT_CNT_SHORT_WAIT_MASK \ + (0x3FF << EMC_MRS_WAIT_CNT_SHORT_WAIT_SHIFT) + +#define EMC_MRS 0xcc +#define EMC_EMRS 0xd0 +#define EMC_EMRS_USE_EMRS_LONG_CNT BIT(26) +#define EMC_REF 0xd4 +#define EMC_SELF_REF 0xe0 +#define EMC_MRW 0xe8 +#define EMC_MRW_MRW_OP_SHIFT 0 +#define EMC_MRW_MRW_OP_MASK \ + (0xff << EMC_MRW_MRW_OP_SHIFT) +#define EMC_MRW_MRW_MA_SHIFT 16 +#define EMC_MRW_USE_MRW_EXT_CNT 27 +#define EMC_MRW_MRW_DEV_SELECTN_SHIFT 30 + +#define EMC_MRR 0xec +#define EMC_MRR_DEV_SEL_SHIFT 30 +#define EMC_MRR_MA_SHIFT 16 +#define EMC_MRR_MA_MASK \ + (0xff << 
EMC_MRR_MA_SHIFT) +#define EMC_MRR_DATA_SHIFT 0 +#define EMC_MRR_DATA_MASK \ + (0xffff << EMC_MRR_DATA_SHIFT) + #define EMC_FBIO_SPARE 0x100 #define EMC_FBIO_CFG5 0x104 #define EMC_FBIO_CFG5_DRAM_TYPE_SHIFT 0 @@ -132,14 +169,34 @@ #define EMC_PDEX2CKE 0x118 #define EMC_CKE2PDEN 0x11c #define EMC_MPC 0x128 +#define EMC_EMRS2 0x12c +#define EMC_EMRS2_USE_EMRS2_LONG_CNT BIT(26) +#define EMC_MRW2 0x134 +#define EMC_MRW3 0x138 +#define EMC_MRW4 0x13c #define EMC_R2R 0x144 #define EMC_EINPUT 0x14c #define EMC_EINPUT_DURATION 0x150 #define EMC_PUTERM_EXTRA 0x154 #define EMC_TCKESR 0x158 #define EMC_TPD 0x15c +#define EMC_AUTO_CAL_CONFIG 0x2a4 +#define EMC_AUTO_CAL_CONFIG_AUTO_CAL_COMPUTE_START BIT(0) +#define EMC_AUTO_CAL_CONFIG_AUTO_CAL_MEASURE_STALL BIT(9) +#define EMC_AUTO_CAL_CONFIG_AUTO_CAL_UPDATE_STALL BIT(10) +#define EMC_AUTO_CAL_CONFIG_AUTO_CAL_ENABLE BIT(29) +#define EMC_AUTO_CAL_CONFIG_AUTO_CAL_START BIT(31) #define EMC_EMC_STATUS 0x2b4 +#define EMC_EMC_STATUS_MRR_DIVLD BIT(20) #define EMC_EMC_STATUS_TIMING_UPDATE_STALLED BIT(23) +#define EMC_EMC_STATUS_DRAM_IN_POWERDOWN_SHIFT 4 +#define EMC_EMC_STATUS_DRAM_IN_POWERDOWN_MASK \ + (0x3 << EMC_EMC_STATUS_DRAM_IN_POWERDOWN_SHIFT) +#define EMC_EMC_STATUS_DRAM_IN_SELF_REFRESH_SHIFT 8 +#define EMC_EMC_STATUS_DRAM_IN_SELF_REFRESH_MASK \ + (0x3 << EMC_EMC_STATUS_DRAM_IN_SELF_REFRESH_SHIFT) + +#define EMC_CFG_2 0x2b8 #define EMC_CFG_DIG_DLL 0x2bc #define EMC_CFG_DIG_DLL_CFG_DLL_EN BIT(0) #define EMC_CFG_DIG_DLL_CFG_DLL_STALL_ALL_UNTIL_LOCK BIT(1) @@ -165,8 +222,17 @@ #define EMC_WDV_MASK 0x2d0 #define EMC_RDV_EARLY_MASK 0x2d4 #define EMC_RDV_EARLY 0x2d8 +#define EMC_AUTO_CAL_CONFIG8 0x2dc #define EMC_ZCAL_INTERVAL 0x2e0 #define EMC_ZCAL_WAIT_CNT 0x2e4 +#define EMC_ZCAL_WAIT_CNT_ZCAL_WAIT_CNT_MASK 0x7ff +#define EMC_ZCAL_WAIT_CNT_ZCAL_WAIT_CNT_SHIFT 0 + +#define EMC_ZQ_CAL 0x2ec +#define EMC_ZQ_CAL_DEV_SEL_SHIFT 30 +#define EMC_ZQ_CAL_LONG BIT(4) +#define EMC_ZQ_CAL_ZQ_LATCH_CMD BIT(1) +#define EMC_ZQ_CAL_ZQ_CAL_CMD BIT(0) #define EMC_FDPD_CTRL_DQ 0x310 #define EMC_FDPD_CTRL_CMD 0x314 #define EMC_PMACRO_CMD_BRICK_CTRL_FDPD 0x318 @@ -176,6 +242,13 @@ #define EMC_TR_TIMING_0 0x3b4 #define EMC_TR_CTRL_1 0x3bc #define EMC_TR_RDV 0x3c4 +#define EMC_STALL_THEN_EXE_AFTER_CLKCHANGE 0x3cc +#define EMC_SEL_DPD_CTRL 0x3d8 +#define EMC_SEL_DPD_CTRL_DATA_SEL_DPD_EN BIT(8) +#define EMC_SEL_DPD_CTRL_ODT_SEL_DPD_EN BIT(5) +#define EMC_SEL_DPD_CTRL_RESET_SEL_DPD_EN BIT(4) +#define EMC_SEL_DPD_CTRL_CA_SEL_DPD_EN BIT(3) +#define EMC_SEL_DPD_CTRL_CLK_SEL_DPD_EN BIT(2) #define EMC_PRE_REFRESH_REQ_CNT 0x3dc #define EMC_DYN_SELF_REF_CONTROL 0x3e0 #define EMC_TXSRDLL 0x3e4 @@ -185,6 +258,9 @@ #define EMC_TR_RDV_MASK 0x3f8 #define EMC_TR_QSAFE 0x3fc #define EMC_TR_QRST 0x400 +#define EMC_ISSUE_QRST 0x428 +#define EMC_AUTO_CAL_CONFIG2 0x458 +#define EMC_AUTO_CAL_CONFIG3 0x45c #define EMC_TR_DVFS 0x460 #define EMC_AUTO_CAL_CHANNEL 0x464 #define EMC_IBDLY 0x468 @@ -198,19 +274,26 @@ #define EMC_MRW6 0x4a4 #define EMC_MRW7 0x4a8 #define EMC_MRW8 0x4ac +#define EMC_MRW9 0x4b0 #define EMC_MRW10 0x4b4 #define EMC_MRW11 0x4b8 #define EMC_MRW12 0x4bc #define EMC_MRW13 0x4c0 #define EMC_MRW14 0x4c4 #define EMC_MRW15 0x4d0 +#define EMC_CFG_SYNC 0x4d4 +#define EMC_FDPD_CTRL_CMD_NO_RAMP 0x4d8 +#define EMC_FDPD_CTRL_CMD_NO_RAMP_CMD_DPD_NO_RAMP_ENABLE BIT(0) #define EMC_WDV_CHK 0x4e0 #define EMC_CFG_PIPE_2 0x554 +#define EMC_CFG_PIPE_CLK 0x558 +#define EMC_CFG_PIPE_CLK_CLK_ALWAYS_ON BIT(0) #define EMC_CFG_PIPE_1 0x55c #define EMC_CFG_PIPE 0x560 #define EMC_QPOP 0x564 #define 
EMC_QUSE_WIDTH 0x568 #define EMC_PUTERM_WIDTH 0x56c +#define EMC_AUTO_CAL_CONFIG7 0x574 #define EMC_REFCTRL2 0x580 #define EMC_FBIO_CFG7 0x584 #define EMC_FBIO_CFG7_CH0_ENABLE BIT(1) @@ -275,10 +358,13 @@ #define EMC_CMD_BRLSHFT_2 0x5a4 #define EMC_CMD_BRLSHFT_3 0x5a8 #define EMC_QUSE_BRLSHFT_0 0x5ac +#define EMC_AUTO_CAL_CONFIG4 0x5b0 +#define EMC_AUTO_CAL_CONFIG5 0x5b4 #define EMC_QUSE_BRLSHFT_1 0x5b8 #define EMC_QUSE_BRLSHFT_2 0x5bc #define EMC_CCDMW 0x5c0 #define EMC_QUSE_BRLSHFT_3 0x5c4 +#define EMC_AUTO_CAL_CONFIG6 0x5cc #define EMC_DLL_CFG_0 0x5e4 #define EMC_DLL_CFG_1 0x5e8 #define EMC_DLL_CFG_1_DDLLCAL_CTRL_START_TRIM_SHIFT 10 @@ -286,6 +372,11 @@ (0x7ff << EMC_DLL_CFG_1_DDLLCAL_CTRL_START_TRIM_SHIFT) #define EMC_CONFIG_SAMPLE_DELAY 0x5f0 +#define EMC_CFG_UPDATE 0x5f4 +#define EMC_CFG_UPDATE_UPDATE_DLL_IN_UPDATE_SHIFT 9 +#define EMC_CFG_UPDATE_UPDATE_DLL_IN_UPDATE_MASK \ + (0x3 << EMC_CFG_UPDATE_UPDATE_DLL_IN_UPDATE_SHIFT) + #define EMC_PMACRO_QUSE_DDLL_RANK0_0 0x600 #define EMC_PMACRO_QUSE_DDLL_RANK0_1 0x604 #define EMC_PMACRO_QUSE_DDLL_RANK0_2 0x608 @@ -594,9 +685,20 @@ #define EMC_PMACRO_DDLL_SHORT_CMD_0 0xc20 #define EMC_PMACRO_DDLL_SHORT_CMD_1 0xc24 #define EMC_PMACRO_DDLL_SHORT_CMD_2 0xc28 +#define EMC_PMACRO_CFG_PM_GLOBAL_0 0xc30 +#define EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE0 BIT(16) +#define EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE1 BIT(17) +#define EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE2 BIT(18) +#define EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE3 BIT(19) +#define EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE4 BIT(20) +#define EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE5 BIT(21) +#define EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE6 BIT(22) +#define EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE7 BIT(23) #define EMC_PMACRO_VTTGEN_CTRL_0 0xc34 #define EMC_PMACRO_VTTGEN_CTRL_1 0xc38 #define EMC_PMACRO_BG_BIAS_CTRL_0 0xc3c +#define EMC_PMACRO_BG_BIAS_CTRL_0_BG_E_PWRD BIT(0) +#define EMC_PMACRO_BG_BIAS_CTRL_0_BGLP_E_PWRD BIT(2) #define EMC_PMACRO_PAD_CFG_CTRL 0xc40 #define EMC_PMACRO_ZCTRL 0xc44 #define EMC_PMACRO_CMD_PAD_RX_CTRL 0xc50 @@ -611,15 +713,22 @@ #define EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQ_TX_DRVFORCEON BIT(26) #define EMC_PMACRO_DATA_PAD_TX_CTRL 0xc64 +#define EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQ_E_IVREF BIT(0) #define EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQ_TX_E_DCC BIT(1) +#define EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQS_E_IVREF BIT(8) #define EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQSP_TX_E_DCC BIT(9) #define EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQSN_TX_E_DCC BIT(16) #define EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_CMD_TX_E_DCC BIT(24) #define EMC_PMACRO_COMMON_PAD_TX_CTRL 0xc68 #define EMC_PMACRO_AUTOCAL_CFG_COMMON 0xc78 +#define EMC_PMACRO_AUTOCAL_CFG_COMMON_E_CAL_BYPASS_DVFS BIT(16) #define EMC_PMACRO_VTTGEN_CTRL_2 0xcf0 #define EMC_PMACRO_IB_RXRT 0xcf4 +#define EMC_PMACRO_TRAINING_CTRL_0 0xcf8 +#define EMC_PMACRO_TRAINING_CTRL_0_CH0_TRAINING_E_WRPTR BIT(3) +#define EMC_PMACRO_TRAINING_CTRL_1 0xcfc +#define EMC_PMACRO_TRAINING_CTRL_1_CH1_TRAINING_E_WRPTR BIT(3) #define EMC_TRAINING_CTRL 0xe04 #define EMC_TRAINING_QUSE_CORS_CTRL 0xe0c #define EMC_TRAINING_QUSE_FINE_CTRL 0xe10 @@ -645,15 +754,31 @@ #define EMC_COPY_TABLE_PARAM_TRIM_REGS BIT(1) enum burst_regs_list { + EMC_RP_INDEX = 6, + EMC_R2P_INDEX = 9, + EMC_W2P_INDEX, + EMC_MRW6_INDEX = 31, EMC_REFRESH_INDEX = 41, EMC_PRE_REFRESH_REQ_CNT_INDEX = 43, + EMC_TRPAB_INDEX = 59, + EMC_MRW7_INDEX = 62, EMC_FBIO_CFG5_INDEX = 65, + EMC_FBIO_CFG7_INDEX, + EMC_CFG_DIG_DLL_INDEX, + EMC_ZCAL_INTERVAL_INDEX = 139, + 
EMC_ZCAL_WAIT_CNT_INDEX, + EMC_MRS_WAIT_CNT_INDEX = 141, EMC_DLL_CFG_0_INDEX = 144, + EMC_PMACRO_AUTOCAL_CFG_COMMON_INDEX = 146, + EMC_CFG_INDEX = 148, EMC_DYN_SELF_REF_CONTROL_INDEX = 150, EMC_PMACRO_CMD_PAD_TX_CTRL_INDEX = 161, EMC_PMACRO_DATA_PAD_TX_CTRL_INDEX, EMC_PMACRO_COMMON_PAD_TX_CTRL_INDEX, EMC_PMACRO_BRICK_CTRL_RFU1_INDEX = 167, + EMC_PMACRO_BG_BIAS_CTRL_0_INDEX = 171, + EMC_MRW14_INDEX = 199, + EMC_MRW15_INDEX = 220, }; enum trim_regs_list { @@ -683,6 +808,36 @@ enum { DLL_ON, }; +enum { + ONE_RANK = 1, + TWO_RANK = 2, +}; + +enum { + T_RP, + T_FC_LPDDR4, + T_RFC, + T_PDEX, + RL, +}; + +enum { + AUTO_PD = 0, + MAN_SR = 2, +}; + +enum { + ASSEMBLY = 0, + ACTIVE, +}; + +enum { + DRAM_TYPE_DDR3, + DRAM_TYPE_LPDDR4, + DRAM_TYPE_LPDDR2, + DRAM_TYPE_DDR2, +}; + enum { REG_EMC, REG_EMC0, @@ -847,7 +1002,9 @@ void emc_writel_per_ch(struct tegra_emc *emc, u32 val, int type, unsigned long offset); u32 emc1_readl(struct tegra_emc *emc, unsigned long offset); +u32 __do_periodic_emc_compensation_r21021(struct tegra_emc *emc); void emc_do_clock_change(struct tegra_emc *emc, u32 clksrc); +void emc_set_clock_r21021(struct tegra_emc *emc, u32 clksrc); void emc_set_shadow_bypass(struct tegra_emc *emc, int set); void emc_timing_update(struct tegra_emc *emc, int dual_chan); u32 emc_get_dll_state(struct emc_table *next_timing); From patchwork Wed May 29 08:21:38 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Joseph Lo X-Patchwork-Id: 1106951 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=linux-tegra-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=none dis=none) header.from=nvidia.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=nvidia.com header.i=@nvidia.com header.b="Fi78obL0"; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 45DNx16Jmxz9s55 for ; Wed, 29 May 2019 18:22:13 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726005AbfE2IWL (ORCPT ); Wed, 29 May 2019 04:22:11 -0400 Received: from hqemgate15.nvidia.com ([216.228.121.64]:1337 "EHLO hqemgate15.nvidia.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726112AbfE2IWK (ORCPT ); Wed, 29 May 2019 04:22:10 -0400 Received: from hqpgpgate102.nvidia.com (Not Verified[216.228.121.13]) by hqemgate15.nvidia.com (using TLS: TLSv1.2, DES-CBC3-SHA) id ; Wed, 29 May 2019 01:22:00 -0700 Received: from hqmail.nvidia.com ([172.20.161.6]) by hqpgpgate102.nvidia.com (PGP Universal service); Wed, 29 May 2019 01:22:08 -0700 X-PGP-Universal: processed; by hqpgpgate102.nvidia.com on Wed, 29 May 2019 01:22:08 -0700 Received: from HQMAIL103.nvidia.com (172.20.187.11) by HQMAIL108.nvidia.com (172.18.146.13) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Wed, 29 May 2019 08:22:08 +0000 Received: from hqnvemgw02.nvidia.com (172.16.227.111) by HQMAIL103.nvidia.com (172.20.187.11) with Microsoft SMTP Server (TLS) id 15.0.1473.3 via Frontend Transport; Wed, 29 May 2019 08:22:08 +0000 Received: from josephl-linux.nvidia.com (Not Verified[10.19.108.132]) by hqnvemgw02.nvidia.com with Trustwave SEG (v7, 5, 8, 10121) id ; Wed, 29 May 2019 01:22:08 -0700 From: Joseph Lo To: Thierry Reding , Peter 
De Schrijver , Jonathan Hunter , Rob Herring , Stephen Boyd CC: , , , , Joseph Lo Subject: [PATCH V4 7/8] clk: tegra: Remove the old emc_mux clock for Tegra210 Date: Wed, 29 May 2019 16:21:38 +0800 Message-ID: <20190529082139.5581-8-josephl@nvidia.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190529082139.5581-1-josephl@nvidia.com> References: <20190529082139.5581-1-josephl@nvidia.com> MIME-Version: 1.0 X-NVConfidentiality: public DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1559118120; bh=EY5mlT8ZlvKOnTYMN0loWZYtEz2EsBsQclJSzJbdsGM=; h=X-PGP-Universal:From:To:CC:Subject:Date:Message-ID:X-Mailer: In-Reply-To:References:MIME-Version:X-NVConfidentiality: Content-Transfer-Encoding:Content-Type; b=Fi78obL0SREI5MJySSZlbMKTPqgfO7oInVOMlgCyEl4n+1rTMCPus94jnBCHwYsFi l2zDkyxzPI3TTyHbybKLSsG0lWrjQUOCBzodntVZxb+lTGHEmuaDY28UbwpCt9ms3s C+9gorkqKCM2Z/1zFMpdVaLHRbes7l9n0KZWaJotrfDrTHxtcBaQB2V7FwJYnYhZmB aRtK1LmlM458gFpA56PyxUQIwhBZsS+WCP2zGTKz8idUd0cCwhE1WF66pYNLX7kn52 Gl/FH1NVGZheNWjUNECyuLARjpy+G1Oep3qxb89S5EiwMieJbf1JLXaYMUoFJMVlR2 h3IKBRNM0wXdg== Sender: linux-tegra-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-tegra@vger.kernel.org Remove the old emc_mux clock and don't use the common EMC clock definition. This will be replaced by a new clock defined in the EMC driver. Signed-off-by: Joseph Lo --- v4: - make sure the behavior is compatible with case if the kernel still uses the older DTB which doesn't have EMC node. And the MC and EMC clock can still be registered successuflly. v3: - split to 3 patches from the previous version --- drivers/clk/tegra/clk-tegra210.c | 42 ++++++++++++++++++++------------ 1 file changed, 27 insertions(+), 15 deletions(-) diff --git a/drivers/clk/tegra/clk-tegra210.c b/drivers/clk/tegra/clk-tegra210.c index 1d52354820ca..8b209e8b5eaf 100644 --- a/drivers/clk/tegra/clk-tegra210.c +++ b/drivers/clk/tegra/clk-tegra210.c @@ -28,6 +28,7 @@ #include #include #include +#include #include #include "clk.h" @@ -324,12 +325,6 @@ static unsigned long tegra210_input_freq[] = { [8] = 12000000, }; -static const char *mux_pllmcp_clkm[] = { - "pll_m", "pll_c", "pll_p", "clk_m", "pll_m_ud", "pll_mb", "pll_mb", - "pll_p", -}; -#define mux_pllmcp_clkm_idx NULL - #define PLL_ENABLE (1 << 30) #define PLLCX_MISC1_IDDQ (1 << 27) @@ -2346,7 +2341,6 @@ static struct tegra_clk tegra210_clks[tegra_clk_max] __initdata = { [tegra_clk_i2c2] = { .dt_id = TEGRA210_CLK_I2C2, .present = true }, [tegra_clk_uartc_8] = { .dt_id = TEGRA210_CLK_UARTC, .present = true }, [tegra_clk_mipi_cal] = { .dt_id = TEGRA210_CLK_MIPI_CAL, .present = true }, - [tegra_clk_emc] = { .dt_id = TEGRA210_CLK_EMC, .present = true }, [tegra_clk_usb2] = { .dt_id = TEGRA210_CLK_USB2, .present = true }, [tegra_clk_bsev] = { .dt_id = TEGRA210_CLK_BSEV, .present = true }, [tegra_clk_uartd_8] = { .dt_id = TEGRA210_CLK_UARTD, .present = true }, @@ -2957,6 +2951,27 @@ static int tegra210_init_pllu(void) return 0; } +static const struct clk_div_table mc_div_table_tegra210[] = { + { .val = 0, .div = 2 }, + { .val = 1, .div = 4 }, + { .val = 2, .div = 1 }, + { .val = 3, .div = 2 }, + { .val = 0, .div = 0 }, +}; + +static void tegra210_clk_register_mc(const char *name, + const char *parent_name) +{ + struct clk *clk; + + clk = clk_register_divider_table(NULL, name, parent_name, + CLK_IS_CRITICAL, + clk_base + CLK_SOURCE_EMC, + 15, 2, CLK_DIVIDER_READ_ONLY, + mc_div_table_tegra210, &emc_lock); + clks[TEGRA210_CLK_MC] = clk; +} + static const char * const sor1_out_parents[] = { /* * 
Bit 0 of the mux selects sor1_pad_clkout, irrespective of bit 1, so @@ -3040,15 +3055,12 @@ static __init void tegra210_periph_clk_init(void __iomem *clk_base, CLK_SOURCE_LA, 0); clks[TEGRA210_CLK_LA] = clk; - /* emc mux */ - clk = clk_register_mux(NULL, "emc_mux", mux_pllmcp_clkm, - ARRAY_SIZE(mux_pllmcp_clkm), 0, - clk_base + CLK_SOURCE_EMC, - 29, 3, 0, &emc_lock); + /* emc */ + clk = tegra210_clk_register_emc(); + clks[TEGRA210_CLK_EMC] = clk; - clk = tegra_clk_register_mc("mc", "emc_mux", clk_base + CLK_SOURCE_EMC, - &emc_lock); - clks[TEGRA210_CLK_MC] = clk; + /* mc */ + tegra210_clk_register_mc("mc", "emc"); /* cml0 */ clk = clk_register_gate(NULL, "cml0", "pll_e", 0, clk_base + PLLE_AUX, From patchwork Wed May 29 08:21:39 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Joseph Lo X-Patchwork-Id: 1106953 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=linux-tegra-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=none dis=none) header.from=nvidia.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=nvidia.com header.i=@nvidia.com header.b="Xy2AqPWd"; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 45DNx62XLtz9s55 for ; Wed, 29 May 2019 18:22:18 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726673AbfE2IWM (ORCPT ); Wed, 29 May 2019 04:22:12 -0400 Received: from hqemgate14.nvidia.com ([216.228.121.143]:11412 "EHLO hqemgate14.nvidia.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726112AbfE2IWM (ORCPT ); Wed, 29 May 2019 04:22:12 -0400 Received: from hqpgpgate102.nvidia.com (Not Verified[216.228.121.13]) by hqemgate14.nvidia.com (using TLS: TLSv1.2, DES-CBC3-SHA) id ; Wed, 29 May 2019 01:22:11 -0700 Received: from hqmail.nvidia.com ([172.20.161.6]) by hqpgpgate102.nvidia.com (PGP Universal service); Wed, 29 May 2019 01:22:11 -0700 X-PGP-Universal: processed; by hqpgpgate102.nvidia.com on Wed, 29 May 2019 01:22:11 -0700 Received: from HQMAIL101.nvidia.com (172.20.187.10) by HQMAIL106.nvidia.com (172.18.146.12) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Wed, 29 May 2019 08:22:11 +0000 Received: from hqnvemgw02.nvidia.com (172.16.227.111) by HQMAIL101.nvidia.com (172.20.187.10) with Microsoft SMTP Server (TLS) id 15.0.1473.3 via Frontend Transport; Wed, 29 May 2019 08:22:11 +0000 Received: from josephl-linux.nvidia.com (Not Verified[10.19.108.132]) by hqnvemgw02.nvidia.com with Trustwave SEG (v7, 5, 8, 10121) id ; Wed, 29 May 2019 01:22:10 -0700 From: Joseph Lo To: Thierry Reding , Peter De Schrijver , Jonathan Hunter , Rob Herring , Stephen Boyd CC: , , , , Joseph Lo Subject: [PATCH V4 8/8] arm64: tegra: Add external memory controller node for Tegra210 Date: Wed, 29 May 2019 16:21:39 +0800 Message-ID: <20190529082139.5581-9-josephl@nvidia.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190529082139.5581-1-josephl@nvidia.com> References: <20190529082139.5581-1-josephl@nvidia.com> MIME-Version: 1.0 X-NVConfidentiality: public DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1559118131; bh=Abu9+xNYNnzs1jtOHYXGFgLqFOQAdhULT1UYXsUjjjs=; 
h=X-PGP-Universal:From:To:CC:Subject:Date:Message-ID:X-Mailer: In-Reply-To:References:MIME-Version:X-NVConfidentiality: Content-Transfer-Encoding:Content-Type; b=Xy2AqPWd8rCp5dCumRjNKedZ+IoAjLSAHLFfxxoI8Gynn+0HVM4EzurieQdBuYrrx J9ED9t3J3y0lQ1vhm72aK99foTCzWg7CWRFM5y68BRSgl6XZYHg0hDK+hmBloi/bkJ L4Q0onodSn/jp9xO2mrp0XtqZK/hokua9UDhuaVN0sWTTDuczMq3shDm6L0r1BAnDk GSBzl5OAXcfbTbY5sTOwUm+nvhwCkhtMLOzeB1QA+y5vHiZcZx8l+vOV9ifoUXELcY 16EjzwXpB0bNrcIjLYQC0s/Vzv5zPpey5jPuf+di2nNPHWjqLrFbk9U61rNXn0U8Yk BjZGYDgce0WGw== Sender: linux-tegra-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-tegra@vger.kernel.org Add external memory controller (EMC) node for Tegra210 Signed-off-by: Joseph Lo --- v4: - no change. v3: - apply memory-region for emc_table. And add reserved-memory node with it. --- arch/arm64/boot/dts/nvidia/tegra210.dtsi | 33 ++++++++++++++++++++++++ 1 file changed, 33 insertions(+) diff --git a/arch/arm64/boot/dts/nvidia/tegra210.dtsi b/arch/arm64/boot/dts/nvidia/tegra210.dtsi index bc71ef8f9a09..b9ccfee39ed2 100644 --- a/arch/arm64/boot/dts/nvidia/tegra210.dtsi +++ b/arch/arm64/boot/dts/nvidia/tegra210.dtsi @@ -872,6 +872,27 @@ #iommu-cells = <1>; }; + external-memory-controller@7001b000 { + compatible = "nvidia,tegra210-emc"; + reg = <0x0 0x7001b000 0x0 0x1000>, + <0x0 0x7001e000 0x0 0x1000>, + <0x0 0x7001f000 0x0 0x1000>; + clocks = <&tegra_car TEGRA210_CLK_EMC>, + <&tegra_car TEGRA210_CLK_PLL_M>, + <&tegra_car TEGRA210_CLK_PLL_C>, + <&tegra_car TEGRA210_CLK_PLL_P>, + <&tegra_car TEGRA210_CLK_CLK_M>, + <&tegra_car TEGRA210_CLK_PLL_M_UD>, + <&tegra_car TEGRA210_CLK_PLL_MB_UD>, + <&tegra_car TEGRA210_CLK_PLL_MB>, + <&tegra_car TEGRA210_CLK_PLL_P_UD>; + clock-names = "emc", "pll_m", "pll_c", "pll_p", "clk_m", + "pll_m_ud", "pll_mb_ud", "pll_mb", "pll_p_ud"; + interrupts = ; + memory-region = <&emc_table>; + nvidia,memory-controller = <&mc>; + }; + sata@70020000 { compatible = "nvidia,tegra210-ahci"; reg = <0x0 0x70027000 0x0 0x2000>, /* AHCI */ @@ -1431,6 +1452,18 @@ }; }; + reserved-memory { + #address-cells = <2>; + #size-cells = <2>; + ranges; + + emc_table: emc-table@8be00000 { + compatible = "nvidia,tegra210-emc-table"; + reg = <0x0 0x8be00000 0x0 0x10000>; + status = "disabled"; + }; + }; + timer { compatible = "arm,armv8-timer"; interrupts =