From patchwork Tue Oct 16 11:27:14 2018
X-Patchwork-Submitter: Sunil Kovvuri
X-Patchwork-Id: 984710
X-Patchwork-Delegate: davem@davemloft.net
From: sunil.kovvuri@gmail.com
To: netdev@vger.kernel.org, davem@davemloft.net
Cc: arnd@arndb.de, linux-soc@vger.kernel.org, Geetha sowjanya,
    Stanislaw Kardach, Sunil Goutham
Subject: [PATCH 10/16] octeontx2-af: Support for disabling NPA Aura/Pool contexts
Date: Tue, 16 Oct 2018 16:57:14 +0530
Message-Id: <1539689240-11526-11-git-send-email-sunil.kovvuri@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1539689240-11526-1-git-send-email-sunil.kovvuri@gmail.com>
References: <1539689240-11526-1-git-send-email-sunil.kovvuri@gmail.com>

From: Geetha sowjanya

This patch adds support for an RVU PF/VF to disable all Aura/Pool
contexts of an NPA LF via mbox. This will be used by PF/VF drivers
upon teardown or while freeing up HW resources.

A HW context which is not INIT'ed cannot be modified, and an RVU PF/VF
driver may or may not INIT all of its Aura/Pool contexts. Hence a
bitmap is introduced to keep track of enabled NPA Aura/Pool contexts,
so that only enabled HW contexts are disabled upon LF teardown.

Signed-off-by: Geetha sowjanya
Signed-off-by: Stanislaw Kardach
Signed-off-by: Sunil Goutham
---
 drivers/net/ethernet/marvell/octeontx2/af/mbox.h   |  7 ++
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h    |  5 ++
 .../net/ethernet/marvell/octeontx2/af/rvu_npa.c    | 98 ++++++++++++++++++++++
 3 files changed, 110 insertions(+)
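The enable-state tracking added by this patch hinges on the masked
read-modify-write semantics of an AQ WRITE: only fields whose mask bits
are set are updated in hardware, so the software bitmap has to merge
the incoming 'ena' value with the previously tracked state the same
way. A minimal standalone sketch of that merge in plain C follows (it
is not part of the patch; merge_ena and its arguments are illustrative
names, not driver symbols):

#include <stdbool.h>
#include <stdio.h>

/* Merge a masked one-bit write with the previously tracked state:
 * the bit takes the new value only when the mask bit is set,
 * otherwise the old state is kept. This mirrors how the patch
 * computes 'ena' on NPA_AQ_INSTOP_WRITE before updating
 * aura_bmap/pool_bmap.
 */
static bool merge_ena(bool old_ena, bool new_ena, bool mask)
{
	return (new_ena && mask) || (old_ena && !mask);
}

int main(void)
{
	/* A WRITE with the mask bit set clears a tracked context */
	printf("%d\n", merge_ena(true, false, true));	/* prints 0 */
	/* A WRITE with the mask bit clear keeps the old state */
	printf("%d\n", merge_ena(true, false, false));	/* prints 1 */
	return 0;
}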
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index bf11058..4e87314 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -142,6 +142,7 @@ M(CGX_INTLBK_DISABLE, 0x20B, msg_req, msg_rsp) \
 M(NPA_LF_ALLOC,		0x400, npa_lf_alloc_req, npa_lf_alloc_rsp)	\
 M(NPA_LF_FREE,		0x401, msg_req, msg_rsp)			\
 M(NPA_AQ_ENQ,		0x402, npa_aq_enq_req, npa_aq_enq_rsp)		\
+M(NPA_HWCTX_DISABLE,	0x403, hwctx_disable_req, msg_rsp)		\
 /* SSO/SSOW mbox IDs (range 0x600 - 0x7FF) */				\
 /* TIM mbox IDs (range 0x800 - 0x9FF) */				\
 /* CPT mbox IDs (range 0xA00 - 0xBFF) */				\
@@ -325,4 +326,10 @@ struct npa_aq_enq_rsp {
 	};
 };
 
+/* Disable all contexts of type 'ctype' */
+struct hwctx_disable_req {
+	struct mbox_msghdr hdr;
+	u8 ctype;
+};
+
 #endif /* MBOX_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index a70c26b..bfc95c3 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -77,6 +77,8 @@ struct rvu_pfvf {
 	struct qmem	*aura_ctx;
 	struct qmem	*pool_ctx;
 	struct qmem	*npa_qints_ctx;
+	unsigned long	*aura_bmap;
+	unsigned long	*pool_bmap;
 };
 
 struct rvu_hwinfo {
@@ -216,6 +218,9 @@ void rvu_npa_freemem(struct rvu *rvu);
 int rvu_mbox_handler_NPA_AQ_ENQ(struct rvu *rvu,
 				struct npa_aq_enq_req *req,
 				struct npa_aq_enq_rsp *rsp);
+int rvu_mbox_handler_NPA_HWCTX_DISABLE(struct rvu *rvu,
+					struct hwctx_disable_req *req,
+					struct msg_rsp *rsp);
 int rvu_mbox_handler_NPA_LF_ALLOC(struct rvu *rvu,
 				  struct npa_lf_alloc_req *req,
 				  struct npa_lf_alloc_rsp *rsp);
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npa.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npa.c
index 4ff0e76..0e43a69 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npa.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npa.c
@@ -63,6 +63,7 @@ static int rvu_npa_aq_enq_inst(struct rvu *rvu, struct npa_aq_enq_req *req,
 	struct admin_queue *aq;
 	struct rvu_pfvf *pfvf;
 	void *ctx, *mask;
+	bool ena;
 
 	pfvf = rvu_get_pfvf(rvu, pcifunc);
 	if (!pfvf->aura_ctx || req->aura_id >= pfvf->aura_ctx->qsize)
@@ -149,6 +150,35 @@ static int rvu_npa_aq_enq_inst(struct rvu *rvu, struct npa_aq_enq_req *req,
 		return rc;
 	}
 
+	/* Set aura bitmap if aura hw context is enabled */
+	if (req->ctype == NPA_AQ_CTYPE_AURA) {
+		if (req->op == NPA_AQ_INSTOP_INIT && req->aura.ena)
+			__set_bit(req->aura_id, pfvf->aura_bmap);
+		if (req->op == NPA_AQ_INSTOP_WRITE) {
+			ena = (req->aura.ena & req->aura_mask.ena) |
+				(test_bit(req->aura_id, pfvf->aura_bmap) &
+				~req->aura_mask.ena);
+			if (ena)
+				__set_bit(req->aura_id, pfvf->aura_bmap);
+			else
+				__clear_bit(req->aura_id, pfvf->aura_bmap);
+		}
+	}
+
+	/* Set pool bitmap if pool hw context is enabled */
+	if (req->ctype == NPA_AQ_CTYPE_POOL) {
+		if (req->op == NPA_AQ_INSTOP_INIT && req->pool.ena)
+			__set_bit(req->aura_id, pfvf->pool_bmap);
+		if (req->op == NPA_AQ_INSTOP_WRITE) {
+			ena = (req->pool.ena & req->pool_mask.ena) |
+				(test_bit(req->aura_id, pfvf->pool_bmap) &
+				~req->pool_mask.ena);
+			if (ena)
+				__set_bit(req->aura_id, pfvf->pool_bmap);
+			else
+				__clear_bit(req->aura_id, pfvf->pool_bmap);
+		}
+	}
 	spin_unlock(&aq->lock);
 
 	if (rsp) {
@@ -166,6 +196,51 @@ static int rvu_npa_aq_enq_inst(struct rvu *rvu, struct npa_aq_enq_req *req,
 	return 0;
 }
 
+static int npa_lf_hwctx_disable(struct rvu *rvu, struct hwctx_disable_req *req)
+{
+	struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, req->hdr.pcifunc);
+	struct npa_aq_enq_req aq_req;
+	unsigned long *bmap;
+	int id, cnt = 0;
+	int err = 0, rc;
+
+	if (!pfvf->pool_ctx || !pfvf->aura_ctx)
+		return NPA_AF_ERR_AQ_ENQUEUE;
+
+	memset(&aq_req, 0, sizeof(struct npa_aq_enq_req));
+	aq_req.hdr.pcifunc = req->hdr.pcifunc;
+
+	if (req->ctype == NPA_AQ_CTYPE_POOL) {
+		aq_req.pool.ena = 0;
+		aq_req.pool_mask.ena = 1;
+		cnt = pfvf->pool_ctx->qsize;
+		bmap = pfvf->pool_bmap;
+	} else if (req->ctype == NPA_AQ_CTYPE_AURA) {
+		aq_req.aura.ena = 0;
+		aq_req.aura_mask.ena = 1;
+		cnt = pfvf->aura_ctx->qsize;
+		bmap = pfvf->aura_bmap;
+	}
+
+	aq_req.ctype = req->ctype;
+	aq_req.op = NPA_AQ_INSTOP_WRITE;
+
+	for (id = 0; id < cnt; id++) {
+		if (!test_bit(id, bmap))
+			continue;
+		aq_req.aura_id = id;
+		rc = rvu_npa_aq_enq_inst(rvu, &aq_req, NULL);
+		if (rc) {
+			err = rc;
+			dev_err(rvu->dev, "Failed to disable %s:%d context\n",
+				(req->ctype == NPA_AQ_CTYPE_AURA) ?
+				"Aura" : "Pool", id);
+		}
+	}
+
+	return err;
+}
+
 int rvu_mbox_handler_NPA_AQ_ENQ(struct rvu *rvu,
 				struct npa_aq_enq_req *req,
 				struct npa_aq_enq_rsp *rsp)
@@ -173,11 +248,24 @@ int rvu_mbox_handler_NPA_AQ_ENQ(struct rvu *rvu,
 	return rvu_npa_aq_enq_inst(rvu, req, rsp);
 }
 
+int rvu_mbox_handler_NPA_HWCTX_DISABLE(struct rvu *rvu,
+					struct hwctx_disable_req *req,
+					struct msg_rsp *rsp)
+{
+	return npa_lf_hwctx_disable(rvu, req);
+}
+
 static void npa_ctx_free(struct rvu *rvu, struct rvu_pfvf *pfvf)
 {
+	kfree(pfvf->aura_bmap);
+	pfvf->aura_bmap = NULL;
+
 	qmem_free(rvu->dev, pfvf->aura_ctx);
 	pfvf->aura_ctx = NULL;
 
+	kfree(pfvf->pool_bmap);
+	pfvf->pool_bmap = NULL;
+
 	qmem_free(rvu->dev, pfvf->pool_ctx);
 	pfvf->pool_ctx = NULL;
 
@@ -227,12 +315,22 @@ int rvu_mbox_handler_NPA_LF_ALLOC(struct rvu *rvu,
 	if (err)
 		goto free_mem;
 
+	pfvf->aura_bmap = kcalloc(NPA_AURA_COUNT(req->aura_sz), sizeof(long),
+				  GFP_KERNEL);
+	if (!pfvf->aura_bmap)
+		goto free_mem;
+
 	/* Alloc memory for pool HW contexts */
 	hwctx_size = 1UL << ((ctx_cfg >> 4) & 0xF);
 	err = qmem_alloc(rvu->dev, &pfvf->pool_ctx, req->nr_pools, hwctx_size);
 	if (err)
 		goto free_mem;
 
+	pfvf->pool_bmap = kcalloc(NPA_AURA_COUNT(req->aura_sz), sizeof(long),
+				  GFP_KERNEL);
+	if (!pfvf->pool_bmap)
+		goto free_mem;
+
 	/* Get no of queue interrupts supported */
 	cfg = rvu_read64(rvu, blkaddr, NPA_AF_CONST);
 	qints = (cfg >> 28) & 0xFFF;
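For completeness, the disable path introduced above boils down to
walking the tracked bitmap and issuing one masked AQ WRITE per enabled
context, recording per-context failures without stopping the loop. A
standalone model of that loop in plain C (the bitmap helper, the
context count and the disable_ctx stub are stand-ins for the kernel's
bitmap API, the context qsize and rvu_npa_aq_enq_inst; none of these
names come from the driver):

#include <stdio.h>

#define CTX_CNT		128
#define BITS_PER_ULONG	(8 * sizeof(unsigned long))

static unsigned long ctx_bmap[(CTX_CNT + BITS_PER_ULONG - 1) / BITS_PER_ULONG];

static int test_bit_ul(int nr, const unsigned long *bmap)
{
	return (bmap[nr / BITS_PER_ULONG] >> (nr % BITS_PER_ULONG)) & 1UL;
}

/* Stand-in for rvu_npa_aq_enq_inst(); pretend context 5 fails */
static int disable_ctx(int id)
{
	return (id == 5) ? -1 : 0;
}

int main(void)
{
	int id, rc, err = 0;

	ctx_bmap[0] = 0xA5;	/* contexts 0, 2, 5 and 7 were enabled */

	/* Like npa_lf_hwctx_disable(): skip contexts that were never
	 * enabled, remember the last error but keep iterating so every
	 * enabled context still gets a disable attempt.
	 */
	for (id = 0; id < CTX_CNT; id++) {
		if (!test_bit_ul(id, ctx_bmap))
			continue;
		rc = disable_ctx(id);
		if (rc) {
			err = rc;
			fprintf(stderr, "Failed to disable context %d\n", id);
		}
	}
	return err ? 1 : 0;
}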