From patchwork Tue Mar 21 14:27:44 2017
From: Stefan Roese
To: u-boot@lists.denx.de
Cc: Nadav Haklai, Stefan Chulski, Thomas Petazzoni
Date: Tue, 21 Mar 2017 15:27:44 +0100
Message-Id: <20170321142802.24276-24-sr@denx.de>
In-Reply-To: <20170321142802.24276-1-sr@denx.de>
References: <20170321142802.24276-1-sr@denx.de>
Subject: [U-Boot] [PATCH v1 23/41] net: mvpp2: rework RXQ interrupt group initialization for PPv2.2

From: Thomas Petazzoni

This commit adjusts how the MVPP2_ISR_RXQ_GROUP_REG register is
configured, since it changed between PPv2.1 and PPv2.2.
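
For reference, the PPv2.2 sequence added by the hunks below replaces
PPv2.1's single per-port register write with a two-step, indexed access.
A minimal sketch of that sequence (the helper name and the "nrxqs"
parameter are illustrative only; the register macros and mvpp2_write()
are the driver's own):

	/*
	 * Illustrative only: first select the per-port interrupt group
	 * through the index register, then program the sub-group (start
	 * queue 0, "nrxqs" queues) through the sub-group config register.
	 */
	static void mvpp22_rxq_group_sketch(struct mvpp2 *priv, int port_id,
					    int nrxqs)
	{
		u32 val;

		/* Step 1: point the indexed window at this port's group */
		val = port_id << MVPP22_ISR_RXQ_GROUP_INDEX_GROUP_OFFSET;
		mvpp2_write(priv, MVPP22_ISR_RXQ_GROUP_INDEX_REG, val);

		/* Step 2: start queue is 0, group size is the RXQ count */
		val = nrxqs << MVPP22_ISR_RXQ_SUB_GROUP_SIZE_OFFSET;
		mvpp2_write(priv, MVPP22_ISR_RXQ_SUB_GROUP_CONFIG_REG, val);
	}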
Signed-off-by: Thomas Petazzoni
Signed-off-by: Stefan Roese
Acked-by: Joe Hershberger
---
 drivers/net/mvpp2.c | 49 ++++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 44 insertions(+), 5 deletions(-)

diff --git a/drivers/net/mvpp2.c b/drivers/net/mvpp2.c
index 0ad808b4a2..567324bed9 100644
--- a/drivers/net/mvpp2.c
+++ b/drivers/net/mvpp2.c
@@ -228,7 +228,21 @@ do { \
 
 /* Interrupt Cause and Mask registers */
 #define MVPP2_ISR_RX_THRESHOLD_REG(rxq)	(0x5200 + 4 * (rxq))
-#define MVPP2_ISR_RXQ_GROUP_REG(rxq)	(0x5400 + 4 * (rxq))
+#define MVPP21_ISR_RXQ_GROUP_REG(rxq)	(0x5400 + 4 * (rxq))
+
+#define MVPP22_ISR_RXQ_GROUP_INDEX_REG	0x5400
+#define MVPP22_ISR_RXQ_GROUP_INDEX_SUBGROUP_MASK 0xf
+#define MVPP22_ISR_RXQ_GROUP_INDEX_GROUP_MASK	0x380
+#define MVPP22_ISR_RXQ_GROUP_INDEX_GROUP_OFFSET	7
+
+#define MVPP22_ISR_RXQ_GROUP_INDEX_SUBGROUP_MASK 0xf
+#define MVPP22_ISR_RXQ_GROUP_INDEX_GROUP_MASK	0x380
+
+#define MVPP22_ISR_RXQ_SUB_GROUP_CONFIG_REG	0x5404
+#define MVPP22_ISR_RXQ_SUB_GROUP_STARTQ_MASK	0x1f
+#define MVPP22_ISR_RXQ_SUB_GROUP_SIZE_MASK	0xf00
+#define MVPP22_ISR_RXQ_SUB_GROUP_SIZE_OFFSET	8
+
 #define MVPP2_ISR_ENABLE_REG(port)	(0x5420 + 4 * (port))
 #define MVPP2_ISR_ENABLE_INTERRUPT(mask)	((mask) & 0xffff)
 #define MVPP2_ISR_DISABLE_INTERRUPT(mask)	(((mask) << 16) & 0xffff0000)
@@ -3743,7 +3757,19 @@ static int mvpp2_port_init(struct udevice *dev, struct mvpp2_port *port)
 	}
 
 	/* Configure Rx queue group interrupt for this port */
-	mvpp2_write(priv, MVPP2_ISR_RXQ_GROUP_REG(port->id), CONFIG_MV_ETH_RXQ);
+	if (priv->hw_version == MVPP21)
+		mvpp2_write(priv, MVPP21_ISR_RXQ_GROUP_REG(port->id),
+			    CONFIG_MV_ETH_RXQ);
+	else {
+		u32 val;
+
+		val = (port->id << MVPP22_ISR_RXQ_GROUP_INDEX_GROUP_OFFSET);
+		mvpp2_write(priv, MVPP22_ISR_RXQ_GROUP_INDEX_REG, val);
+
+		val = (CONFIG_MV_ETH_RXQ <<
+		       MVPP22_ISR_RXQ_SUB_GROUP_SIZE_OFFSET);
+		mvpp2_write(priv, MVPP22_ISR_RXQ_SUB_GROUP_CONFIG_REG, val);
+	}
 
 	/* Create Rx descriptor rings */
 	for (queue = 0; queue < rxq_number; queue++) {
@@ -4009,9 +4035,22 @@ static int mvpp2_init(struct udevice *dev, struct mvpp2 *priv)
 	mvpp2_rx_fifo_init(priv);
 
 	/* Reset Rx queue group interrupt configuration */
-	for (i = 0; i < MVPP2_MAX_PORTS; i++)
-		mvpp2_write(priv, MVPP2_ISR_RXQ_GROUP_REG(i),
-			    CONFIG_MV_ETH_RXQ);
+	for (i = 0; i < MVPP2_MAX_PORTS; i++) {
+		if (priv->hw_version == MVPP21)
+			mvpp2_write(priv, MVPP21_ISR_RXQ_GROUP_REG(i),
+				    CONFIG_MV_ETH_RXQ);
+		else {
+			u32 val;
+
+			val = (i << MVPP22_ISR_RXQ_GROUP_INDEX_GROUP_OFFSET);
+			mvpp2_write(priv, MVPP22_ISR_RXQ_GROUP_INDEX_REG, val);
+
+			val = (CONFIG_MV_ETH_RXQ <<
+			       MVPP22_ISR_RXQ_SUB_GROUP_SIZE_OFFSET);
+			mvpp2_write(priv,
+				    MVPP22_ISR_RXQ_SUB_GROUP_CONFIG_REG, val);
+		}
+	}
 
 	if (priv->hw_version == MVPP21)
 		writel(MVPP2_EXT_GLOBAL_CTRL_DEFAULT,