From patchwork Fri Oct 19 13:37:23 2018
X-Patchwork-Submitter: Sunil Kovvuri
X-Patchwork-Id: 986784
X-Patchwork-Delegate: davem@davemloft.net
From: sunil.kovvuri@gmail.com
To: netdev@vger.kernel.org, davem@davemloft.net
Cc: arnd@arndb.de, linux-soc@vger.kernel.org, Sunil Goutham
Subject: [PATCH 02/17] octeontx2-af: NIX Tx scheduler queue config support
Date: Fri, 19 Oct 2018 19:07:23 +0530
Message-Id: <1539956258-29377-3-git-send-email-sunil.kovvuri@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1539956258-29377-1-git-send-email-sunil.kovvuri@gmail.com>
References: <1539956258-29377-1-git-send-email-sunil.kovvuri@gmail.com>

From: Sunil Goutham

This patch adds support for a PF/VF driver to configure NIX transmit
scheduler queues via mbox. Since a PF/VF doesn't know the absolute HW
index of the NIXLF attached to it, the AF traps the register config and
overwrites it with the correct NIXLF index.

HW supports shaping, colouring and policing of packets with these
multilevel traffic scheduler queues. Instead of introducing a different
mbox message format for each configuration, which would make both the AF
and PF/VF driver implementations cumbersome, access to the scheduler
queues' CSRs is provided via mbox. The AF checks whether the sender PF/VF
has the corresponding queue allocated and then writes the config to HW.
Up to 20 registers can be configured with a single mbox message.

Signed-off-by: Sunil Goutham
---
 drivers/net/ethernet/marvell/octeontx2/af/Makefile |   3 +-
 drivers/net/ethernet/marvell/octeontx2/af/mbox.h   |  15 ++-
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h    |  11 +++
 .../net/ethernet/marvell/octeontx2/af/rvu_nix.c    | 104 ++++++++++++++++++++-
 .../net/ethernet/marvell/octeontx2/af/rvu_reg.c    |  71 ++++++++++++++
 5 files changed, 199 insertions(+), 5 deletions(-)
 create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.c

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/Makefile b/drivers/net/ethernet/marvell/octeontx2/af/Makefile
index 45b108f..264cbd7 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/Makefile
+++ b/drivers/net/ethernet/marvell/octeontx2/af/Makefile
@@ -7,4 +7,5 @@ obj-$(CONFIG_OCTEONTX2_MBOX) += octeontx2_mbox.o
 obj-$(CONFIG_OCTEONTX2_AF) += octeontx2_af.o
 
 octeontx2_mbox-y := mbox.o
-octeontx2_af-y := cgx.o rvu.o rvu_cgx.o rvu_npa.o rvu_nix.o
+octeontx2_af-y := cgx.o rvu.o rvu_cgx.o rvu_npa.o rvu_nix.o \
+		  rvu_reg.o
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index 282e556..f2e0743 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -154,7 +154,8 @@ M(NIX_LF_FREE, 0x8001, msg_req, msg_rsp) \
 M(NIX_AQ_ENQ, 0x8002, nix_aq_enq_req, nix_aq_enq_rsp) \
 M(NIX_HWCTX_DISABLE, 0x8003, hwctx_disable_req, msg_rsp) \
 M(NIX_TXSCH_ALLOC, 0x8004, nix_txsch_alloc_req, nix_txsch_alloc_rsp) \
-M(NIX_TXSCH_FREE, 0x8005, nix_txsch_free_req, msg_rsp)
+M(NIX_TXSCH_FREE, 0x8005, nix_txsch_free_req, msg_rsp) \
+M(NIX_TXSCHQ_CFG, 0x8006, nix_txschq_config, msg_rsp)
 
 /* Messages initiated by AF (range 0xC00 - 0xDFF) */
 #define MBOX_UP_CGX_MESSAGES \
@@ -448,4 +449,16 @@ struct nix_txsch_free_req {
 	u16 schq;
 };
 
+struct nix_txschq_config {
+	struct mbox_msghdr hdr;
+	u8 lvl; /* SMQ/MDQ/TL4/TL3/TL2/TL1 */
+#define TXSCHQ_IDX_SHIFT	16
+#define TXSCHQ_IDX_MASK		(BIT_ULL(10) - 1)
+#define TXSCHQ_IDX(reg, shift)	(((reg) >> (shift)) & TXSCHQ_IDX_MASK)
+	u8 num_regs;
+#define MAX_REGS_PER_MBOX_MSG	20
+	u64 reg[MAX_REGS_PER_MBOX_MSG];
+	u64 regval[MAX_REGS_PER_MBOX_MSG];
+};
+
 #endif /* MBOX_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index c402eba..4b15552 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -195,6 +195,14 @@ int rvu_lf_reset(struct rvu *rvu, struct rvu_block *block, int lf);
 int rvu_get_blkaddr(struct rvu *rvu, int blktype, u16 pcifunc);
 int rvu_poll_reg(struct rvu *rvu, u64 block, u64 offset, u64 mask, bool zero);
 
+/* RVU HW reg validation */
+enum regmap_block {
+	TXSCHQ_HWREGMAP = 0,
+	MAX_HWREGMAP,
+};
+
+bool rvu_check_valid_reg(int regmap, int regblk, u64 reg);
+
 /* NPA/NIX AQ APIs */
 int rvu_aq_alloc(struct rvu *rvu, struct admin_queue **ad_queue,
 		 int qsize, int inst_size, int res_size);
@@ -277,4 +285,7 @@ int rvu_mbox_handler_NIX_TXSCH_ALLOC(struct rvu *rvu,
 int rvu_mbox_handler_NIX_TXSCH_FREE(struct rvu *rvu,
 				    struct nix_txsch_free_req *req,
 				    struct msg_rsp *rsp);
+int rvu_mbox_handler_NIX_TXSCHQ_CFG(struct rvu *rvu,
+				    struct nix_txschq_config *req,
+				    struct msg_rsp *rsp);
 #endif /* RVU_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index e8374d9..56f242d 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -738,10 +738,10 @@ static void nix_reset_tx_linkcfg(struct rvu *rvu, int blkaddr,
 	if (lvl == NIX_TXSCH_LVL_TL4)
 		rvu_write64(rvu, blkaddr, NIX_AF_TL4X_SDP_LINK_CFG(schq), 0x00);
 
-	if (lvl != NIX_TXSCH_LVL_TL3)
+	if (lvl != NIX_TXSCH_LVL_TL2)
 		return;
 
-	/* Reset TL3's CGX or LBK link config */
+	/* Reset TL2's CGX or LBK link config */
 	for (link = 0; link < (hw->cgx_links + hw->lbk_links); link++)
 		rvu_write64(rvu, blkaddr,
 			    NIX_AF_TL3_TL2X_LINKX_CFG(schq, link), 0x00);
@@ -851,7 +851,7 @@ static int nix_txschq_free(struct rvu *rvu, u16 pcifunc)
 	/* Disable TL2/3 queue links before SMQ flush*/
 	spin_lock(&rvu->rsrc_lock);
 	for (lvl = NIX_TXSCH_LVL_TL4; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
-		if (lvl != NIX_TXSCH_LVL_TL3 && lvl != NIX_TXSCH_LVL_TL4)
+		if (lvl != NIX_TXSCH_LVL_TL2 && lvl != NIX_TXSCH_LVL_TL4)
 			continue;
 
 		txsch = &nix_hw->txsch[lvl];
@@ -909,6 +909,104 @@ int rvu_mbox_handler_NIX_TXSCH_FREE(struct rvu *rvu,
 	return nix_txschq_free(rvu, req->hdr.pcifunc);
 }
 
+static bool is_txschq_config_valid(struct rvu *rvu, u16 pcifunc, int blkaddr,
+				   int lvl, u64 reg, u64 regval)
+{
+	u64 regbase = reg & 0xFFFF;
+	u16 schq, parent;
+
+	if (!rvu_check_valid_reg(TXSCHQ_HWREGMAP, lvl, reg))
+		return false;
+
+	schq = TXSCHQ_IDX(reg, TXSCHQ_IDX_SHIFT);
+	/* Check if this schq belongs to this PF/VF or not */
+	if (!is_valid_txschq(rvu, blkaddr, lvl, pcifunc, schq))
+		return false;
+
+	parent = (regval >> 16) & 0x1FF;
+	/* Validate MDQ's TL4 parent */
+	if (regbase == NIX_AF_MDQX_PARENT(0) &&
+	    !is_valid_txschq(rvu, blkaddr, NIX_TXSCH_LVL_TL4, pcifunc, parent))
+		return false;
+
+	/* Validate TL4's TL3 parent */
+	if (regbase == NIX_AF_TL4X_PARENT(0) &&
+	    !is_valid_txschq(rvu, blkaddr, NIX_TXSCH_LVL_TL3, pcifunc, parent))
+		return false;
+
+	/* Validate TL3's TL2 parent */
+	if (regbase == NIX_AF_TL3X_PARENT(0) &&
+	    !is_valid_txschq(rvu, blkaddr, NIX_TXSCH_LVL_TL2, pcifunc, parent))
+		return false;
+
+	/* Validate TL2's TL1 parent */
+	if (regbase == NIX_AF_TL2X_PARENT(0) &&
+	    !is_valid_txschq(rvu, blkaddr, NIX_TXSCH_LVL_TL1, pcifunc, parent))
+		return false;
+
+	return true;
+}
+
+int rvu_mbox_handler_NIX_TXSCHQ_CFG(struct rvu *rvu,
+				    struct nix_txschq_config *req,
+				    struct msg_rsp *rsp)
+{
+	struct rvu_hwinfo *hw = rvu->hw;
+	u16 pcifunc = req->hdr.pcifunc;
+	u64 reg, regval, schq_regbase;
+	struct nix_txsch *txsch;
+	struct nix_hw *nix_hw;
+	int blkaddr, idx, err;
+	int nixlf;
+
+	if (req->lvl >= NIX_TXSCH_LVL_CNT ||
+	    req->num_regs > MAX_REGS_PER_MBOX_MSG)
+		return NIX_AF_INVAL_TXSCHQ_CFG;
+
+	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc);
+	if (blkaddr < 0)
+		return NIX_AF_ERR_AF_LF_INVALID;
+
+	nix_hw = get_nix_hw(rvu->hw, blkaddr);
+	if (!nix_hw)
+		return -EINVAL;
+
+	nixlf = rvu_get_lf(rvu, &hw->block[blkaddr], pcifunc, 0);
+	if (nixlf < 0)
+		return NIX_AF_ERR_AF_LF_INVALID;
+
+	txsch = &nix_hw->txsch[req->lvl];
+	for (idx = 0; idx < req->num_regs; idx++) {
+		reg = req->reg[idx];
+		regval = req->regval[idx];
+		schq_regbase = reg & 0xFFFF;
+
+		if (!is_txschq_config_valid(rvu, pcifunc, blkaddr,
+					    txsch->lvl, reg, regval))
+			return NIX_AF_INVAL_TXSCHQ_CFG;
+
+		/* Replace PF/VF visible NIXLF slot with HW NIXLF id */
+		if (schq_regbase == NIX_AF_SMQX_CFG(0)) {
+			nixlf = rvu_get_lf(rvu, &hw->block[blkaddr],
+					   pcifunc, 0);
+			regval &= ~(0x7FULL << 24);
+			regval |= ((u64)nixlf << 24);
+		}
+
+		rvu_write64(rvu, blkaddr, reg, regval);
+
+		/* Check for SMQ flush, if so, poll for its completion */
+		if (schq_regbase == NIX_AF_SMQX_CFG(0) &&
+		    (regval & BIT_ULL(49))) {
+			err = rvu_poll_reg(rvu, blkaddr,
+					   reg, BIT_ULL(49), true);
+			if (err)
+				return NIX_AF_SMQ_FLUSH_FAILED;
+		}
+	}
+	return 0;
+}
+
 static int nix_setup_txschq(struct rvu *rvu, struct nix_hw *nix_hw, int blkaddr)
 {
 	struct nix_txsch *txsch;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.c
new file mode 100644
index 0000000..9d7c135
--- /dev/null
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.c
@@ -0,0 +1,71 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Marvell OcteonTx2 RVU Admin Function driver
+ *
+ * Copyright (C) 2018 Marvell International Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/pci.h>
+
+#include "rvu_struct.h"
+#include "common.h"
+#include "mbox.h"
+#include "rvu.h"
+
+struct reg_range {
+	u64 start;
+	u64 end;
+};
+
+struct hw_reg_map {
+	u8 regblk;
+	u8 num_ranges;
+	u64 mask;
+#define MAX_REG_RANGES	8
+	struct reg_range range[MAX_REG_RANGES];
+};
+
+static struct hw_reg_map txsch_reg_map[NIX_TXSCH_LVL_CNT] = {
+	{NIX_TXSCH_LVL_SMQ, 2, 0xFFFF, {{0x0700, 0x0708}, {0x1400, 0x14C8} } },
+	{NIX_TXSCH_LVL_TL4, 3, 0xFFFF, {{0x0B00, 0x0B08}, {0x0B10, 0x0B18},
+					{0x1200, 0x12E0} } },
+	{NIX_TXSCH_LVL_TL3, 3, 0xFFFF, {{0x1000, 0x10E0}, {0x1600, 0x1608},
+					{0x1610, 0x1618} } },
+	{NIX_TXSCH_LVL_TL2, 2, 0xFFFF, {{0x0E00, 0x0EE0}, {0x1700, 0x1768} } },
+	{NIX_TXSCH_LVL_TL1, 1, 0xFFFF, {{0x0C00, 0x0D98} } },
+};
+
+bool rvu_check_valid_reg(int regmap, int regblk, u64 reg)
+{
+	int idx;
+	struct hw_reg_map *map;
+
+	/* Only 64bit offsets */
+	if (reg & 0x07)
+		return false;
+
+	if (regmap == TXSCHQ_HWREGMAP) {
+		if (regblk >= NIX_TXSCH_LVL_CNT)
+			return false;
+		map = &txsch_reg_map[regblk];
+	} else {
+		return false;
+	}
+
+	/* Should never happen */
+	if (map->regblk != regblk)
+		return false;
+
+	reg &= map->mask;
+
+	for (idx = 0; idx < map->num_ranges; idx++) {
+		if (reg >= map->range[idx].start &&
+		    reg < map->range[idx].end)
+			return true;
+	}
+	return false;
+}
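
Note for reviewers (not part of the patch): below is a minimal, self-contained
sketch of how a PF/VF driver might batch Tx scheduler CSR writes into one
NIX_TXSCHQ_CFG request. The struct only mirrors the reg/regval layout of
struct nix_txschq_config from mbox.h (the mbox header is omitted); the SMQ
register base 0x700 follows the reg map above, while the regval contents and
the send step are illustrative assumptions rather than the driver's actual
mbox API.

/*
 * Standalone userspace sketch: build one NIX_TXSCHQ_CFG-style request.
 * Hypothetical helper names; not driver code.
 */
#include <stdint.h>
#include <stdio.h>

#define MAX_REGS_PER_MBOX_MSG	20
#define TXSCHQ_IDX_SHIFT	16

struct txschq_cfg_sketch {
	uint8_t  lvl;				/* SMQ/MDQ/TL4/TL3/TL2/TL1 */
	uint8_t  num_regs;
	uint64_t reg[MAX_REGS_PER_MBOX_MSG];	/* CSR offsets; schq index in bits 16.. */
	uint64_t regval[MAX_REGS_PER_MBOX_MSG];	/* values the AF writes after validation */
};

/* Queue one register write into the request (offset = base | schq << 16) */
static void txschq_cfg_add(struct txschq_cfg_sketch *req,
			   uint64_t regbase, uint16_t schq, uint64_t val)
{
	if (req->num_regs >= MAX_REGS_PER_MBOX_MSG)
		return;
	req->reg[req->num_regs] = regbase | ((uint64_t)schq << TXSCHQ_IDX_SHIFT);
	req->regval[req->num_regs] = val;
	req->num_regs++;
}

int main(void)
{
	struct txschq_cfg_sketch req = { 0 };
	uint16_t smq = 0;	/* index previously returned by NIX_TXSCH_ALLOC */

	req.lvl = 0;		/* NIX_TXSCH_LVL_SMQ in the driver's enum */

	/* One SMQ config write; 0x2400 is a made-up value for the example */
	txschq_cfg_add(&req, 0x700, smq, 0x2400);

	/* A real driver would now hand 'req' to its mbox layer and wait for the AF. */
	printf("NIX_TXSCHQ_CFG: lvl=%u num_regs=%u reg[0]=0x%llx\n",
	       req.lvl, req.num_regs, (unsigned long long)req.reg[0]);
	return 0;
}

This is the point of the single message format: one generic reg/regval array
covers every scheduler level, so the AF only has to validate ownership of the
queue (and its parent) and then write the values to HW.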