From patchwork Wed Jun 5 21:11:31 2019
X-Patchwork-Submitter: Jakub Kicinski
X-Patchwork-Id: 1110764
X-Patchwork-Delegate: davem@davemloft.net
From: Jakub Kicinski
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, oss-drivers@netronome.com,
    alexei.starovoitov@gmail.com, Jakub Kicinski, Dirk van der Merwe
Subject: [PATCH net-next 01/13] nfp: count all failed TX attempts as errors
Date: Wed, 5 Jun 2019 14:11:31 -0700
Message-Id: <20190605211143.29689-2-jakub.kicinski@netronome.com>
In-Reply-To: <20190605211143.29689-1-jakub.kicinski@netronome.com>
References: <20190605211143.29689-1-jakub.kicinski@netronome.com>
List-ID: X-Mailing-List: netdev@vger.kernel.org

Currently, if we need to modify the head of the skb and the allocation
fails, we would free the skb and not
increment the error counter.
Make sure all errors are counted.

Signed-off-by: Jakub Kicinski
Reviewed-by: Dirk van der Merwe
---
 drivers/net/ethernet/netronome/nfp/nfp_net_common.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
index b82b684f52ce..0c163b086de5 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
@@ -873,17 +873,14 @@ static int nfp_net_tx(struct sk_buff *skb, struct net_device *netdev)
 	}
 
 	md_bytes = nfp_net_prep_port_id(skb);
-	if (unlikely(md_bytes < 0)) {
-		nfp_net_tx_xmit_more_flush(tx_ring);
-		dev_kfree_skb_any(skb);
-		return NETDEV_TX_OK;
-	}
+	if (unlikely(md_bytes < 0))
+		goto err_flush;
 
 	/* Start with the head skbuf */
 	dma_addr = dma_map_single(dp->dev, skb->data, skb_headlen(skb),
 				  DMA_TO_DEVICE);
 	if (dma_mapping_error(dp->dev, dma_addr))
-		goto err_free;
+		goto err_dma_err;
 
 	wr_idx = D_IDX(tx_ring, tx_ring->wr_p);
 
@@ -979,8 +976,9 @@ static int nfp_net_tx(struct sk_buff *skb, struct net_device *netdev)
 	tx_ring->txbufs[wr_idx].skb = NULL;
 	tx_ring->txbufs[wr_idx].dma_addr = 0;
 	tx_ring->txbufs[wr_idx].fidx = -2;
-err_free:
+err_dma_err:
 	nn_dp_warn(dp, "Failed to map DMA TX buffer\n");
+err_flush:
 	nfp_net_tx_xmit_more_flush(tx_ring);
 	u64_stats_update_begin(&r_vec->tx_sync);
 	r_vec->tx_errors++;

From patchwork Wed Jun 5 21:11:32 2019
X-Patchwork-Submitter: Jakub Kicinski
X-Patchwork-Id: 1110766
X-Patchwork-Delegate: davem@davemloft.net
From: Jakub Kicinski
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, oss-drivers@netronome.com,
    alexei.starovoitov@gmail.com, Jakub Kicinski, Dirk van der Merwe
Subject: [PATCH net-next 02/13] nfp: make bar_lock a semaphore
Date: Wed, 5 Jun 2019 14:11:32 -0700
Message-Id: <20190605211143.29689-3-jakub.kicinski@netronome.com>
In-Reply-To: <20190605211143.29689-1-jakub.kicinski@netronome.com>
References: <20190605211143.29689-1-jakub.kicinski@netronome.com>
List-ID: X-Mailing-List: netdev@vger.kernel.org

We will need to release the bar lock from a workqueue, so move from a
mutex to a semaphore. This lock should not be too hot. Unfortunately,
semaphores don't have lockdep support.
Signed-off-by: Jakub Kicinski
Reviewed-by: Dirk van der Merwe
---
 drivers/net/ethernet/netronome/nfp/nfp_net.h        | 7 ++++---
 drivers/net/ethernet/netronome/nfp/nfp_net_common.c | 8 +-------
 2 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net.h b/drivers/net/ethernet/netronome/nfp/nfp_net.h
index df9aff2684ed..e006b3abc9f6 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net.h
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net.h
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #include "nfp_net_ctrl.h"
@@ -620,7 +621,7 @@ struct nfp_net {
 	struct timer_list reconfig_timer;
 	u32 reconfig_in_progress_update;
 
-	struct mutex bar_lock;
+	struct semaphore bar_lock;
 
 	u32 rx_coalesce_usecs;
 	u32 rx_coalesce_max_frames;
@@ -848,12 +849,12 @@ static inline void nfp_ctrl_unlock(struct nfp_net *nn)
 
 static inline void nn_ctrl_bar_lock(struct nfp_net *nn)
 {
-	mutex_lock(&nn->bar_lock);
+	down(&nn->bar_lock);
 }
 
 static inline void nn_ctrl_bar_unlock(struct nfp_net *nn)
 {
-	mutex_unlock(&nn->bar_lock);
+	up(&nn->bar_lock);
 }
 
 /* Globals */
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
index 0c163b086de5..39d70936c741 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
@@ -23,7 +23,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -275,8 +274,6 @@ static int __nfp_net_reconfig(struct nfp_net *nn, u32 update)
 {
 	int ret;
 
-	lockdep_assert_held(&nn->bar_lock);
-
 	nfp_net_reconfig_sync_enter(nn);
 
 	nfp_net_reconfig_start(nn, update);
@@ -331,7 +328,6 @@ int nfp_net_mbox_reconfig(struct nfp_net *nn, u32 mbox_cmd)
 	u32 mbox = nn->tlv_caps.mbox_off;
 	int ret;
 
-	lockdep_assert_held(&nn->bar_lock);
 	nn_writeq(nn, mbox + NFP_NET_CFG_MBOX_SIMPLE_CMD, mbox_cmd);
 
 	ret = __nfp_net_reconfig(nn, NFP_NET_CFG_UPDATE_MBOX);
@@ -3702,7 +3698,7 @@ nfp_net_alloc(struct pci_dev *pdev, void __iomem *ctrl_bar, bool needs_netdev,
 	nn->dp.txd_cnt = NFP_NET_TX_DESCS_DEFAULT;
 	nn->dp.rxd_cnt = NFP_NET_RX_DESCS_DEFAULT;
 
-	mutex_init(&nn->bar_lock);
+	sema_init(&nn->bar_lock, 1);
 
 	spin_lock_init(&nn->reconfig_lock);
 	spin_lock_init(&nn->link_status_lock);
@@ -3732,8 +3728,6 @@ void nfp_net_free(struct nfp_net *nn)
 {
 	WARN_ON(timer_pending(&nn->reconfig_timer) || nn->reconfig_posted);
 
-	mutex_destroy(&nn->bar_lock);
-
 	if (nn->dp.netdev)
 		free_netdev(nn->dp.netdev);
 	else

From patchwork Wed Jun 5 21:11:33 2019
X-Patchwork-Submitter: Jakub Kicinski
X-Patchwork-Id: 1110767
X-Patchwork-Delegate: davem@davemloft.net
From: Jakub Kicinski
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, oss-drivers@netronome.com,
    alexei.starovoitov@gmail.com, Jakub Kicinski, Dirk van der Merwe
Subject: [PATCH net-next 03/13] nfp: parse the mailbox cmsg TLV
Date: Wed, 5 Jun 2019 14:11:33 -0700
Message-Id: <20190605211143.29689-4-jakub.kicinski@netronome.com>
In-Reply-To: <20190605211143.29689-1-jakub.kicinski@netronome.com>
References: <20190605211143.29689-1-jakub.kicinski@netronome.com>
List-ID: X-Mailing-List: netdev@vger.kernel.org

Parse the mailbox TLV. When the control message queue is not available,
we can fall back to passing control messages via the vNIC mailbox.

Signed-off-by: Jakub Kicinski
Reviewed-by: Dirk van der Merwe
---
 drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.c | 4 ++++
 drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h | 8 ++++++++
 2 files changed, 12 insertions(+)

diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.c
index 6d5213b5bcb0..6c207c5e9265 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.c
@@ -99,6 +99,10 @@ int nfp_net_tlv_caps_parse(struct device *dev, u8 __iomem *ctrl_mem,
 			caps->repr_cap = readl(data);
 			break;
+		case NFP_NET_CFG_TLV_TYPE_MBOX_CMSG_TYPES:
+			if (length >= 4)
+				caps->mbox_cmsg_types = readl(data);
+			break;
 		default:
 			if (!FIELD_GET(NFP_NET_CFG_TLV_HEADER_REQUIRED, hdr))
 				break;
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h
index 25919e338071..05a5c82ac8f6 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h
@@ -466,6 +466,11 @@
  * %NFP_NET_CFG_TLV_TYPE_REPR_CAP:
  * Single word, equivalent of %NFP_NET_CFG_CAP for representors, features which
  * can be used on representors.
+ *
+ * %NFP_NET_CFG_TLV_TYPE_MBOX_CMSG_TYPES:
+ * Variable, bitmap of control message types supported by the mailbox handler.
+ * Bit 0 corresponds to message type 0, bit 1 to 1, etc. Control messages are
+ * encapsulated into simple TLVs, with an end TLV, and written to the Mailbox.
  */
 #define NFP_NET_CFG_TLV_TYPE_UNKNOWN		0
 #define NFP_NET_CFG_TLV_TYPE_RESERVED		1
@@ -475,6 +480,7 @@
 #define NFP_NET_CFG_TLV_TYPE_EXPERIMENTAL0	5
 #define NFP_NET_CFG_TLV_TYPE_EXPERIMENTAL1	6
 #define NFP_NET_CFG_TLV_TYPE_REPR_CAP		7
+#define NFP_NET_CFG_TLV_TYPE_MBOX_CMSG_TYPES	10
 
 struct device;
 
@@ -484,12 +490,14 @@ struct device;
 * @mbox_off:		vNIC mailbox area offset
 * @mbox_len:		vNIC mailbox area length
 * @repr_cap:		capabilities for representors
+* @mbox_cmsg_types:	cmsgs which can be passed through the mailbox
 */
 struct nfp_net_tlv_caps {
 	u32 me_freq_mhz;
 	unsigned int mbox_off;
 	unsigned int mbox_len;
 	u32 repr_cap;
+	u32 mbox_cmsg_types;
 };
 
 int nfp_net_tlv_caps_parse(struct device *dev, u8 __iomem *ctrl_mem,

From patchwork Wed Jun 5 21:11:34 2019
X-Patchwork-Submitter: Jakub Kicinski
X-Patchwork-Id: 1110768
X-Patchwork-Delegate: davem@davemloft.net
From: Jakub Kicinski
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, oss-drivers@netronome.com,
    alexei.starovoitov@gmail.com, Jakub Kicinski, Dirk van der Merwe
Subject: [PATCH net-next 04/13] nfp: add support for sending control messages via mailbox
Date: Wed, 5 Jun 2019 14:11:34 -0700
Message-Id: <20190605211143.29689-5-jakub.kicinski@netronome.com>
In-Reply-To: <20190605211143.29689-1-jakub.kicinski@netronome.com>
References: <20190605211143.29689-1-jakub.kicinski@netronome.com>
List-ID: X-Mailing-List: netdev@vger.kernel.org

FW may prefer to handle some communication via a mailbox, or the vNIC
may simply not have a control queue (VFs). Add a way of exchanging
ccm-compatible messages via a mailbox.

Signed-off-by: Jakub Kicinski
Reviewed-by: Dirk van der Merwe
---
 drivers/net/ethernet/netronome/nfp/Makefile   |   1 +
 drivers/net/ethernet/netronome/nfp/ccm.c      |   3 -
 drivers/net/ethernet/netronome/nfp/ccm.h      |  44 +-
 drivers/net/ethernet/netronome/nfp/ccm_mbox.c | 591 ++++++++++++++++++
 drivers/net/ethernet/netronome/nfp/nfp_net.h  |  10 +
 .../ethernet/netronome/nfp/nfp_net_common.c   |   4 +
 .../net/ethernet/netronome/nfp/nfp_net_ctrl.h |   1 +
 7 files changed, 646 insertions(+), 8 deletions(-)
 create mode 100644 drivers/net/ethernet/netronome/nfp/ccm_mbox.c

diff --git a/drivers/net/ethernet/netronome/nfp/Makefile b/drivers/net/ethernet/netronome/nfp/Makefile
index 87bf784f8e8f..e40893692a8e 100644
--- a/drivers/net/ethernet/netronome/nfp/Makefile
+++ b/drivers/net/ethernet/netronome/nfp/Makefile
@@ -16,6 +16,7 @@ nfp-objs := \
 	    nfpcore/nfp_rtsym.o \
 	    nfpcore/nfp_target.o \
 	    ccm.o \
+	    ccm_mbox.o \
 	    nfp_asm.o \
 	    nfp_app.o \
 	    nfp_app_nic.o \
diff --git a/drivers/net/ethernet/netronome/nfp/ccm.c b/drivers/net/ethernet/netronome/nfp/ccm.c
index 94476e41e261..71afd111bae3 100644
---
a/drivers/net/ethernet/netronome/nfp/ccm.c
+++ b/drivers/net/ethernet/netronome/nfp/ccm.c
@@ -7,9 +7,6 @@
 #include "nfp_app.h"
 #include "nfp_net.h"
 
-#define NFP_CCM_TYPE_REPLY_BIT		7
-#define __NFP_CCM_REPLY(req)	(BIT(NFP_CCM_TYPE_REPLY_BIT) | (req))
-
 #define ccm_warn(app, msg...)	nn_dp_warn(&(app)->ctrl->dp, msg)
 
 #define NFP_CCM_TAG_ALLOC_SPAN	(U16_MAX / 4)
diff --git a/drivers/net/ethernet/netronome/nfp/ccm.h b/drivers/net/ethernet/netronome/nfp/ccm.h
index ac963b128203..c84be54abb4c 100644
--- a/drivers/net/ethernet/netronome/nfp/ccm.h
+++ b/drivers/net/ethernet/netronome/nfp/ccm.h
@@ -9,6 +9,7 @@
 #include
 
 struct nfp_app;
+struct nfp_net;
 
 /* Firmware ABI */
 
@@ -26,10 +27,18 @@ enum nfp_ccm_type {
 
 #define NFP_CCM_ABI_VERSION		1
 
+#define NFP_CCM_TYPE_REPLY_BIT		7
+#define __NFP_CCM_REPLY(req)	(BIT(NFP_CCM_TYPE_REPLY_BIT) | (req))
+
 struct nfp_ccm_hdr {
-	u8 type;
-	u8 ver;
-	__be16 tag;
+	union {
+		struct {
+			u8 type;
+			u8 ver;
+			__be16 tag;
+		};
+		__be32 raw;
+	};
 };
 
 static inline u8 nfp_ccm_get_type(struct sk_buff *skb)
@@ -41,15 +50,31 @@ static inline u8 nfp_ccm_get_type(struct sk_buff *skb)
 	return hdr->type;
 }
 
-static inline unsigned int nfp_ccm_get_tag(struct sk_buff *skb)
+static inline __be16 __nfp_ccm_get_tag(struct sk_buff *skb)
 {
 	struct nfp_ccm_hdr *hdr;
 
 	hdr = (struct nfp_ccm_hdr *)skb->data;
 
-	return be16_to_cpu(hdr->tag);
+	return hdr->tag;
+}
+
+static inline unsigned int nfp_ccm_get_tag(struct sk_buff *skb)
+{
+	return be16_to_cpu(__nfp_ccm_get_tag(skb));
 }
 
+#define NFP_NET_MBOX_TLV_TYPE		GENMASK(31, 16)
+#define NFP_NET_MBOX_TLV_LEN		GENMASK(15, 0)
+
+enum nfp_ccm_mbox_tlv_type {
+	NFP_NET_MBOX_TLV_TYPE_UNKNOWN	= 0,
+	NFP_NET_MBOX_TLV_TYPE_END	= 1,
+	NFP_NET_MBOX_TLV_TYPE_MSG	= 2,
+	NFP_NET_MBOX_TLV_TYPE_MSG_NOSUP	= 3,
+	NFP_NET_MBOX_TLV_TYPE_RESV	= 4,
+};
+
 /* Implementation */
 
 /**
@@ -80,4 +105,13 @@ void nfp_ccm_rx(struct nfp_ccm *ccm, struct sk_buff *skb);
 struct sk_buff *
 nfp_ccm_communicate(struct nfp_ccm *ccm, struct sk_buff *skb, enum
nfp_ccm_type type, unsigned int reply_size);
+
+bool nfp_ccm_mbox_fits(struct nfp_net *nn, unsigned int size);
+struct sk_buff *
+nfp_ccm_mbox_alloc(struct nfp_net *nn, unsigned int req_size,
+		   unsigned int reply_size, gfp_t flags);
+int nfp_ccm_mbox_communicate(struct nfp_net *nn, struct sk_buff *skb,
+			     enum nfp_ccm_type type,
+			     unsigned int reply_size,
+			     unsigned int max_reply_size);
 #endif
diff --git a/drivers/net/ethernet/netronome/nfp/ccm_mbox.c b/drivers/net/ethernet/netronome/nfp/ccm_mbox.c
new file mode 100644
index 000000000000..e5acd96c3335
--- /dev/null
+++ b/drivers/net/ethernet/netronome/nfp/ccm_mbox.c
@@ -0,0 +1,591 @@
+// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+/* Copyright (C) 2019 Netronome Systems, Inc. */
+
+#include
+#include
+#include
+
+#include "ccm.h"
+#include "nfp_net.h"
+
+/* CCM messages via the mailbox. CMSGs get wrapped into simple TLVs
+ * and copied into the mailbox. Multiple messages can be copied to
+ * form a batch. Threads come in with CMSG formed in an skb, then
+ * enqueue that skb onto the request queue. If a thread's skb is first
+ * in queue this thread will handle the mailbox operation. It copies
+ * up to 16 messages into the mailbox (making sure that both requests
+ * and replies will fit). After FW is done processing the batch it
+ * copies the data out and wakes waiting threads.
+ * If a thread is waiting it either gets its message completed
+ * (response is copied into the same skb as the request, overwriting
+ * it), or becomes the first in queue.
+ * Completions and next-to-run are signaled via the control buffer
+ * to limit potential cache line bounces.
+ */
+
+#define NFP_CCM_MBOX_BATCH_LIMIT	16
+#define NFP_CCM_TIMEOUT			(NFP_NET_POLL_TIMEOUT * 1000)
+#define NFP_CCM_MAX_QLEN		256
+
+enum nfp_net_mbox_cmsg_state {
+	NFP_NET_MBOX_CMSG_STATE_QUEUED,
+	NFP_NET_MBOX_CMSG_STATE_NEXT,
+	NFP_NET_MBOX_CMSG_STATE_BUSY,
+	NFP_NET_MBOX_CMSG_STATE_REPLY_FOUND,
+	NFP_NET_MBOX_CMSG_STATE_DONE,
+};
+
+/**
+ * struct nfp_ccm_mbox_cmsg_cb - CCM mailbox specific info
+ * @state:	processing state (/stage) of the message
+ * @err:	error encountered during processing if any
+ * @max_len:	max(request_len, reply_len)
+ * @exp_reply:	expected reply length (0 means don't validate)
+ */
+struct nfp_ccm_mbox_cmsg_cb {
+	enum nfp_net_mbox_cmsg_state state;
+	int err;
+	unsigned int max_len;
+	unsigned int exp_reply;
+};
+
+static u32 nfp_ccm_mbox_max_msg(struct nfp_net *nn)
+{
+	return round_down(nn->tlv_caps.mbox_len, 4) -
+		NFP_NET_CFG_MBOX_SIMPLE_VAL - /* common mbox command header */
+		4 * 2; /* Msg TLV plus End TLV headers */
+}
+
+static void
+nfp_ccm_mbox_msg_init(struct sk_buff *skb, unsigned int exp_reply, int max_len)
+{
+	struct nfp_ccm_mbox_cmsg_cb *cb = (void *)skb->cb;
+
+	cb->state = NFP_NET_MBOX_CMSG_STATE_QUEUED;
+	cb->err = 0;
+	cb->max_len = max_len;
+	cb->exp_reply = exp_reply;
+}
+
+static int nfp_ccm_mbox_maxlen(const struct sk_buff *skb)
+{
+	struct nfp_ccm_mbox_cmsg_cb *cb = (void *)skb->cb;
+
+	return cb->max_len;
+}
+
+static bool nfp_ccm_mbox_done(struct sk_buff *skb)
+{
+	struct nfp_ccm_mbox_cmsg_cb *cb = (void *)skb->cb;
+
+	return cb->state == NFP_NET_MBOX_CMSG_STATE_DONE;
+}
+
+static bool nfp_ccm_mbox_in_progress(struct sk_buff *skb)
+{
+	struct nfp_ccm_mbox_cmsg_cb *cb = (void *)skb->cb;
+
+	return cb->state != NFP_NET_MBOX_CMSG_STATE_QUEUED &&
+	       cb->state != NFP_NET_MBOX_CMSG_STATE_NEXT;
+}
+
+static void nfp_ccm_mbox_set_busy(struct sk_buff *skb)
+{
+	struct nfp_ccm_mbox_cmsg_cb *cb = (void *)skb->cb;
+
+	cb->state = NFP_NET_MBOX_CMSG_STATE_BUSY;
+}
+
+static bool nfp_ccm_mbox_is_first(struct nfp_net *nn,
				  struct sk_buff *skb)
+{
+	return skb_queue_is_first(&nn->mbox_cmsg.queue, skb);
+}
+
+static bool nfp_ccm_mbox_should_run(struct nfp_net *nn, struct sk_buff *skb)
+{
+	struct nfp_ccm_mbox_cmsg_cb *cb = (void *)skb->cb;
+
+	return cb->state == NFP_NET_MBOX_CMSG_STATE_NEXT;
+}
+
+static void nfp_ccm_mbox_mark_next_runner(struct nfp_net *nn)
+{
+	struct nfp_ccm_mbox_cmsg_cb *cb;
+	struct sk_buff *skb;
+
+	skb = skb_peek(&nn->mbox_cmsg.queue);
+	if (!skb)
+		return;
+
+	cb = (void *)skb->cb;
+	cb->state = NFP_NET_MBOX_CMSG_STATE_NEXT;
+}
+
+static void
+nfp_ccm_mbox_write_tlv(struct nfp_net *nn, u32 off, u32 type, u32 len)
+{
+	nn_writel(nn, off,
+		  FIELD_PREP(NFP_NET_MBOX_TLV_TYPE, type) |
+		  FIELD_PREP(NFP_NET_MBOX_TLV_LEN, len));
+}
+
+static void nfp_ccm_mbox_copy_in(struct nfp_net *nn, struct sk_buff *last)
+{
+	struct sk_buff *skb;
+	int reserve, i, cnt;
+	__be32 *data;
+	u32 off, len;
+
+	off = nn->tlv_caps.mbox_off + NFP_NET_CFG_MBOX_SIMPLE_VAL;
+	skb = __skb_peek(&nn->mbox_cmsg.queue);
+	while (true) {
+		nfp_ccm_mbox_write_tlv(nn, off, NFP_NET_MBOX_TLV_TYPE_MSG,
+				       skb->len);
+		off += 4;
+
+		/* Write data word by word, skb->data should be aligned */
+		data = (__be32 *)skb->data;
+		cnt = skb->len / 4;
+		for (i = 0 ; i < cnt; i++) {
+			nn_writel(nn, off, be32_to_cpu(data[i]));
+			off += 4;
+		}
+		if (skb->len & 3) {
+			__be32 tmp = 0;
+
+			memcpy(&tmp, &data[i], skb->len & 3);
+			nn_writel(nn, off, be32_to_cpu(tmp));
+			off += 4;
+		}
+
+		/* Reserve space if reply is bigger */
+		len = round_up(skb->len, 4);
+		reserve = nfp_ccm_mbox_maxlen(skb) - len;
+		if (reserve > 0) {
+			nfp_ccm_mbox_write_tlv(nn, off,
+					       NFP_NET_MBOX_TLV_TYPE_RESV,
+					       reserve);
+			off += 4 + reserve;
+		}
+
+		if (skb == last)
+			break;
+		skb = skb_queue_next(&nn->mbox_cmsg.queue, skb);
+	}
+
+	nfp_ccm_mbox_write_tlv(nn, off, NFP_NET_MBOX_TLV_TYPE_END, 0);
+}
+
+static struct sk_buff *
+nfp_ccm_mbox_find_req(struct nfp_net *nn, __be16 tag, struct sk_buff *last)
+{
+	struct sk_buff *skb;
+
+	skb =
__skb_peek(&nn->mbox_cmsg.queue);
+	while (true) {
+		if (__nfp_ccm_get_tag(skb) == tag)
+			return skb;
+
+		if (skb == last)
+			return NULL;
+		skb = skb_queue_next(&nn->mbox_cmsg.queue, skb);
+	}
+}
+
+static void nfp_ccm_mbox_copy_out(struct nfp_net *nn, struct sk_buff *last)
+{
+	struct nfp_ccm_mbox_cmsg_cb *cb;
+	u8 __iomem *data, *end;
+	struct sk_buff *skb;
+
+	data = nn->dp.ctrl_bar + nn->tlv_caps.mbox_off +
+		NFP_NET_CFG_MBOX_SIMPLE_VAL;
+	end = data + nn->tlv_caps.mbox_len;
+
+	while (true) {
+		unsigned int length, offset, type;
+		struct nfp_ccm_hdr hdr;
+		__be32 *skb_data;
+		u32 tlv_hdr;
+		int i, cnt;
+
+		tlv_hdr = readl(data);
+		type = FIELD_GET(NFP_NET_MBOX_TLV_TYPE, tlv_hdr);
+		length = FIELD_GET(NFP_NET_MBOX_TLV_LEN, tlv_hdr);
+		offset = data - nn->dp.ctrl_bar;
+
+		/* Advance past the header */
+		data += 4;
+
+		if (data + length > end) {
+			nn_dp_warn(&nn->dp, "mailbox oversized TLV type:%d offset:%u len:%u\n",
+				   type, offset, length);
+			break;
+		}
+
+		if (type == NFP_NET_MBOX_TLV_TYPE_END)
+			break;
+		if (type == NFP_NET_MBOX_TLV_TYPE_RESV)
+			goto next_tlv;
+		if (type != NFP_NET_MBOX_TLV_TYPE_MSG &&
+		    type != NFP_NET_MBOX_TLV_TYPE_MSG_NOSUP) {
+			nn_dp_warn(&nn->dp, "mailbox unknown TLV type:%d offset:%u len:%u\n",
+				   type, offset, length);
+			break;
+		}
+
+		if (length < 4) {
+			nn_dp_warn(&nn->dp, "mailbox msg too short to contain header TLV type:%d offset:%u len:%u\n",
+				   type, offset, length);
+			break;
+		}
+
+		hdr.raw = cpu_to_be32(readl(data));
+
+		skb = nfp_ccm_mbox_find_req(nn, hdr.tag, last);
+		if (!skb) {
+			nn_dp_warn(&nn->dp, "mailbox request not found:%u\n",
+				   be16_to_cpu(hdr.tag));
+			break;
+		}
+		cb = (void *)skb->cb;
+
+		if (type == NFP_NET_MBOX_TLV_TYPE_MSG_NOSUP) {
+			nn_dp_warn(&nn->dp,
+				   "mailbox msg not supported type:%d\n",
+				   nfp_ccm_get_type(skb));
+			cb->err = -EIO;
+			goto next_tlv;
+		}
+
+		if (hdr.type != __NFP_CCM_REPLY(nfp_ccm_get_type(skb))) {
+			nn_dp_warn(&nn->dp, "mailbox msg reply wrong type:%u expected:%lu\n",
+				   hdr.type,
__NFP_CCM_REPLY(nfp_ccm_get_type(skb))); + cb->err = -EIO; + goto next_tlv; + } + if (cb->exp_reply && length != cb->exp_reply) { + nn_dp_warn(&nn->dp, "mailbox msg reply wrong size type:%u expected:%u have:%u\n", + hdr.type, length, cb->exp_reply); + cb->err = -EIO; + goto next_tlv; + } + if (length > cb->max_len) { + nn_dp_warn(&nn->dp, "mailbox msg oversized reply type:%u max:%u have:%u\n", + hdr.type, cb->max_len, length); + cb->err = -EIO; + goto next_tlv; + } + + if (length <= skb->len) + __skb_trim(skb, length); + else + skb_put(skb, length - skb->len); + + /* We overcopy here slightly, but that's okay, the skb is large + * enough, and the garbage will be ignored (beyond skb->len). + */ + skb_data = (__be32 *)skb->data; + memcpy(skb_data, &hdr, 4); + + cnt = DIV_ROUND_UP(length, 4); + for (i = 1 ; i < cnt; i++) + skb_data[i] = cpu_to_be32(readl(data + i * 4)); + + cb->state = NFP_NET_MBOX_CMSG_STATE_REPLY_FOUND; +next_tlv: + data += round_up(length, 4); + if (data + 4 > end) { + nn_dp_warn(&nn->dp, + "reached end of MBOX without END TLV\n"); + break; + } + } + + smp_wmb(); /* order the skb->data vs. cb->state */ + spin_lock_bh(&nn->mbox_cmsg.queue.lock); + do { + skb = __skb_dequeue(&nn->mbox_cmsg.queue); + cb = (void *)skb->cb; + + if (cb->state != NFP_NET_MBOX_CMSG_STATE_REPLY_FOUND) { + cb->err = -ENOENT; + smp_wmb(); /* order the cb->err vs. cb->state */ + } + cb->state = NFP_NET_MBOX_CMSG_STATE_DONE; + } while (skb != last); + + nfp_ccm_mbox_mark_next_runner(nn); + spin_unlock_bh(&nn->mbox_cmsg.queue.lock); +} + +static void +nfp_ccm_mbox_mark_all_err(struct nfp_net *nn, struct sk_buff *last, int err) +{ + struct nfp_ccm_mbox_cmsg_cb *cb; + struct sk_buff *skb; + + spin_lock_bh(&nn->mbox_cmsg.queue.lock); + do { + skb = __skb_dequeue(&nn->mbox_cmsg.queue); + cb = (void *)skb->cb; + + cb->err = err; + smp_wmb(); /* order the cb->err vs. 
cb->state */ + cb->state = NFP_NET_MBOX_CMSG_STATE_DONE; + } while (skb != last); + + nfp_ccm_mbox_mark_next_runner(nn); + spin_unlock_bh(&nn->mbox_cmsg.queue.lock); +} + +static void nfp_ccm_mbox_run_queue_unlock(struct nfp_net *nn) + __releases(&nn->mbox_cmsg.queue.lock) +{ + int space = nn->tlv_caps.mbox_len - NFP_NET_CFG_MBOX_SIMPLE_VAL; + struct sk_buff *skb, *last; + int cnt, err; + + space -= 4; /* for End TLV */ + + /* First skb must fit, because it's ours and we checked it fits */ + cnt = 1; + last = skb = __skb_peek(&nn->mbox_cmsg.queue); + space -= 4 + nfp_ccm_mbox_maxlen(skb); + + while (!skb_queue_is_last(&nn->mbox_cmsg.queue, last)) { + skb = skb_queue_next(&nn->mbox_cmsg.queue, last); + space -= 4 + nfp_ccm_mbox_maxlen(skb); + if (space < 0) + break; + last = skb; + nfp_ccm_mbox_set_busy(skb); + cnt++; + if (cnt == NFP_CCM_MBOX_BATCH_LIMIT) + break; + } + spin_unlock_bh(&nn->mbox_cmsg.queue.lock); + + /* Now we own all skb's marked in progress, new requests may arrive + * at the end of the queue. + */ + + nn_ctrl_bar_lock(nn); + + nfp_ccm_mbox_copy_in(nn, last); + + err = nfp_net_mbox_reconfig(nn, NFP_NET_CFG_MBOX_CMD_TLV_CMSG); + if (!err) + nfp_ccm_mbox_copy_out(nn, last); + else + nfp_ccm_mbox_mark_all_err(nn, last, -EIO); + + nn_ctrl_bar_unlock(nn); + + wake_up_all(&nn->mbox_cmsg.wq); +} + +static int nfp_ccm_mbox_skb_return(struct sk_buff *skb) +{ + struct nfp_ccm_mbox_cmsg_cb *cb = (void *)skb->cb; + + if (cb->err) + dev_kfree_skb_any(skb); + return cb->err; +} + +/* If wait timed out but the command is already in progress we have + * to wait until it finishes. Runners have ownership of the skbs marked + * as busy. 
+ */ +static int +nfp_ccm_mbox_unlink_unlock(struct nfp_net *nn, struct sk_buff *skb, + enum nfp_ccm_type type) + __releases(&nn->mbox_cmsg.queue.lock) +{ + bool was_first; + + if (nfp_ccm_mbox_in_progress(skb)) { + spin_unlock_bh(&nn->mbox_cmsg.queue.lock); + + wait_event(nn->mbox_cmsg.wq, nfp_ccm_mbox_done(skb)); + smp_rmb(); /* pairs with smp_wmb() after data is written */ + return nfp_ccm_mbox_skb_return(skb); + } + + was_first = nfp_ccm_mbox_should_run(nn, skb); + __skb_unlink(skb, &nn->mbox_cmsg.queue); + if (was_first) + nfp_ccm_mbox_mark_next_runner(nn); + + spin_unlock_bh(&nn->mbox_cmsg.queue.lock); + + if (was_first) + wake_up_all(&nn->mbox_cmsg.wq); + + nn_dp_warn(&nn->dp, "time out waiting for mbox response to 0x%02x\n", + type); + return -ETIMEDOUT; +} + +static int +nfp_ccm_mbox_msg_prepare(struct nfp_net *nn, struct sk_buff *skb, + enum nfp_ccm_type type, + unsigned int reply_size, unsigned int max_reply_size, + gfp_t flags) +{ + const unsigned int mbox_max = nfp_ccm_mbox_max_msg(nn); + unsigned int max_len; + ssize_t undersize; + int err; + + if (unlikely(!(nn->tlv_caps.mbox_cmsg_types & BIT(type)))) { + nn_dp_warn(&nn->dp, + "message type %d not supported by mailbox\n", type); + return -EINVAL; + } + + /* If the reply size is unknown assume it will take the entire + * mailbox, the callers should do their best for this to never + * happen. + */ + if (!max_reply_size) + max_reply_size = mbox_max; + max_reply_size = round_up(max_reply_size, 4); + + /* Make sure we can fit the entire reply into the skb, + * and that we don't have to slow down the mbox handler + * with allocations. 
+ */ + undersize = max_reply_size - (skb_end_pointer(skb) - skb->data); + if (undersize > 0) { + err = pskb_expand_head(skb, 0, undersize, flags); + if (err) { + nn_dp_warn(&nn->dp, + "can't allocate reply buffer for mailbox\n"); + return err; + } + } + + /* Make sure that request and response both fit into the mailbox */ + max_len = max(max_reply_size, round_up(skb->len, 4)); + if (max_len > mbox_max) { + nn_dp_warn(&nn->dp, + "message too big for the mailbox: %u/%u vs %u\n", + skb->len, max_reply_size, mbox_max); + return -EMSGSIZE; + } + + nfp_ccm_mbox_msg_init(skb, reply_size, max_len); + + return 0; +} + +static int +nfp_ccm_mbox_msg_enqueue(struct nfp_net *nn, struct sk_buff *skb, + enum nfp_ccm_type type) +{ + struct nfp_ccm_hdr *hdr; + + assert_spin_locked(&nn->mbox_cmsg.queue.lock); + + if (nn->mbox_cmsg.queue.qlen >= NFP_CCM_MAX_QLEN) { + nn_dp_warn(&nn->dp, "mailbox request queue too long\n"); + return -EBUSY; + } + + hdr = (void *)skb->data; + hdr->ver = NFP_CCM_ABI_VERSION; + hdr->type = type; + hdr->tag = cpu_to_be16(nn->mbox_cmsg.tag++); + + __skb_queue_tail(&nn->mbox_cmsg.queue, skb); + + return 0; +} + +int nfp_ccm_mbox_communicate(struct nfp_net *nn, struct sk_buff *skb, + enum nfp_ccm_type type, + unsigned int reply_size, + unsigned int max_reply_size) +{ + int err; + + err = nfp_ccm_mbox_msg_prepare(nn, skb, type, reply_size, + max_reply_size, GFP_KERNEL); + if (err) + goto err_free_skb; + + spin_lock_bh(&nn->mbox_cmsg.queue.lock); + + err = nfp_ccm_mbox_msg_enqueue(nn, skb, type); + if (err) + goto err_unlock; + + /* First in queue takes the mailbox lock and processes the batch */ + if (!nfp_ccm_mbox_is_first(nn, skb)) { + bool to; + + spin_unlock_bh(&nn->mbox_cmsg.queue.lock); + + to = !wait_event_timeout(nn->mbox_cmsg.wq, + nfp_ccm_mbox_done(skb) || + nfp_ccm_mbox_should_run(nn, skb), + msecs_to_jiffies(NFP_CCM_TIMEOUT)); + + /* fast path for those completed by another thread */ + if (nfp_ccm_mbox_done(skb)) { + smp_rmb(); /* pairs with wmb 
after data is written */ + return nfp_ccm_mbox_skb_return(skb); + } + + spin_lock_bh(&nn->mbox_cmsg.queue.lock); + + if (!nfp_ccm_mbox_is_first(nn, skb)) { + WARN_ON(!to); + + err = nfp_ccm_mbox_unlink_unlock(nn, skb, type); + if (err) + goto err_free_skb; + return 0; + } + } + + /* run queue expects the lock held */ + nfp_ccm_mbox_run_queue_unlock(nn); + return nfp_ccm_mbox_skb_return(skb); + +err_unlock: + spin_unlock_bh(&nn->mbox_cmsg.queue.lock); +err_free_skb: + dev_kfree_skb_any(skb); + return err; +} + +struct sk_buff * +nfp_ccm_mbox_alloc(struct nfp_net *nn, unsigned int req_size, + unsigned int reply_size, gfp_t flags) +{ + unsigned int max_size; + struct sk_buff *skb; + + if (!reply_size) + max_size = nfp_ccm_mbox_max_msg(nn); + else + max_size = max(req_size, reply_size); + max_size = round_up(max_size, 4); + + skb = alloc_skb(max_size, flags); + if (!skb) + return NULL; + + skb_put(skb, req_size); + + return skb; +} + +bool nfp_ccm_mbox_fits(struct nfp_net *nn, unsigned int size) +{ + return nfp_ccm_mbox_max_msg(nn) >= size; +} diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net.h b/drivers/net/ethernet/netronome/nfp/nfp_net.h index e006b3abc9f6..134d2709cd70 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_net.h @@ -563,6 +563,10 @@ struct nfp_net_dp { * @tx_bar: Pointer to mapped TX queues * @rx_bar: Pointer to mapped FL/RX queues * @tlv_caps: Parsed TLV capabilities + * @mbox_cmsg: Common Control Message via vNIC mailbox state + * @mbox_cmsg.queue: CCM mbox queue of pending messages + * @mbox_cmsg.wq: CCM mbox wait queue of waiting processes + * @mbox_cmsg.tag: CCM mbox message tag allocator * @debugfs_dir: Device directory in debugfs * @vnic_list: Entry on device vNIC list * @pdev: Backpointer to PCI device @@ -638,6 +642,12 @@ struct nfp_net { struct nfp_net_tlv_caps tlv_caps; + struct { + struct sk_buff_head queue; + wait_queue_head_t wq; + u16 tag; + } mbox_cmsg; + struct dentry 
*debugfs_dir; struct list_head vnic_list; diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c index 39d70936c741..0ccc5206340b 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c @@ -3705,6 +3705,9 @@ nfp_net_alloc(struct pci_dev *pdev, void __iomem *ctrl_bar, bool needs_netdev, timer_setup(&nn->reconfig_timer, nfp_net_reconfig_timer, 0); + skb_queue_head_init(&nn->mbox_cmsg.queue); + init_waitqueue_head(&nn->mbox_cmsg.wq); + err = nfp_net_tlv_caps_parse(&nn->pdev->dev, nn->dp.ctrl_bar, &nn->tlv_caps); if (err) @@ -3727,6 +3730,7 @@ nfp_net_alloc(struct pci_dev *pdev, void __iomem *ctrl_bar, bool needs_netdev, void nfp_net_free(struct nfp_net *nn) { WARN_ON(timer_pending(&nn->reconfig_timer) || nn->reconfig_posted); + WARN_ON(!skb_queue_empty(&nn->mbox_cmsg.queue)); if (nn->dp.netdev) free_netdev(nn->dp.netdev); diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h index 05a5c82ac8f6..b94db7fb691d 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h @@ -394,6 +394,7 @@ #define NFP_NET_CFG_MBOX_CMD_CTAG_FILTER_KILL 2 #define NFP_NET_CFG_MBOX_CMD_PCI_DSCP_PRIOMAP_SET 5 +#define NFP_NET_CFG_MBOX_CMD_TLV_CMSG 6 /** * VLAN filtering using general use mailbox

From patchwork Wed Jun 5 21:11:35 2019
From: Jakub Kicinski
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, oss-drivers@netronome.com, alexei.starovoitov@gmail.com, Jakub Kicinski, Dirk van der Merwe
Subject: [PATCH net-next 05/13] nfp: parse crypto opcode TLV
Date: Wed, 5 Jun 2019 14:11:35 -0700
Message-Id: <20190605211143.29689-6-jakub.kicinski@netronome.com>

Parse TLV containing a bitmask of supported crypto operations. The TLV contains a capability bitmask (supported operations) and enabled bitmask. Each operation describes the crypto protocol quite exhaustively (protocol, AEAD, direction). 
Signed-off-by: Jakub Kicinski Reviewed-by: Dirk van der Merwe --- drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.c | 11 +++++++++++ drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h | 10 ++++++++++ 2 files changed, 21 insertions(+) diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.c index 6c207c5e9265..d835c14b7257 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.c @@ -103,6 +103,17 @@ int nfp_net_tlv_caps_parse(struct device *dev, u8 __iomem *ctrl_mem, if (length >= 4) caps->mbox_cmsg_types = readl(data); break; + case NFP_NET_CFG_TLV_TYPE_CRYPTO_OPS: + if (length < 32) { + dev_err(dev, + "CRYPTO OPS TLV should be at least 32B, is %dB offset:%u\n", + length, offset); + return -EINVAL; + } + + caps->crypto_ops = readl(data); + caps->crypto_enable_off = data - ctrl_mem + 16; + break; default: if (!FIELD_GET(NFP_NET_CFG_TLV_HEADER_REQUIRED, hdr)) break; diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h index b94db7fb691d..bd4e2194dda5 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h @@ -472,6 +472,11 @@ * Variable, bitmap of control message types supported by the mailbox handler. * Bit 0 corresponds to message type 0, bit 1 to 1, etc. Control messages are * encapsulated into simple TLVs, with an end TLV and written to the Mailbox. + * + * %NFP_NET_CFG_TLV_TYPE_CRYPTO_OPS: + * 8 words, bitmaps of supported and enabled crypto operations. + * First 16B (4 words) contains a bitmap of supported crypto operations, + * and next 16B contain the enabled operations. 
*/ #define NFP_NET_CFG_TLV_TYPE_UNKNOWN 0 #define NFP_NET_CFG_TLV_TYPE_RESERVED 1 @@ -482,6 +487,7 @@ #define NFP_NET_CFG_TLV_TYPE_EXPERIMENTAL1 6 #define NFP_NET_CFG_TLV_TYPE_REPR_CAP 7 #define NFP_NET_CFG_TLV_TYPE_MBOX_CMSG_TYPES 10 +#define NFP_NET_CFG_TLV_TYPE_CRYPTO_OPS 11 /* see crypto/fw.h */ struct device; @@ -492,6 +498,8 @@ struct device; * @mbox_len: vNIC mailbox area length * @repr_cap: capabilities for representors * @mbox_cmsg_types: cmsgs which can be passed through the mailbox + * @crypto_ops: supported crypto operations + * @crypto_enable_off: offset of crypto ops enable region */ struct nfp_net_tlv_caps { u32 me_freq_mhz; @@ -499,6 +507,8 @@ struct nfp_net_tlv_caps { unsigned int mbox_len; u32 repr_cap; u32 mbox_cmsg_types; + u32 crypto_ops; + unsigned int crypto_enable_off; }; int nfp_net_tlv_caps_parse(struct device *dev, u8 __iomem *ctrl_mem,

From patchwork Wed Jun 5 21:11:36 2019
From: Jakub Kicinski
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, oss-drivers@netronome.com, alexei.starovoitov@gmail.com, Jakub Kicinski, Dirk van der Merwe
Subject: [PATCH net-next 06/13] nfp: add tls init code
Date: Wed, 5 Jun 2019 14:11:36 -0700
Message-Id: <20190605211143.29689-7-jakub.kicinski@netronome.com>

Add FW ABI defines and code for basic init of TLS offload. Signed-off-by: Jakub Kicinski Reviewed-by: Dirk van der Merwe --- drivers/net/ethernet/netronome/Kconfig | 1 + drivers/net/ethernet/netronome/nfp/Makefile | 5 + drivers/net/ethernet/netronome/nfp/ccm.h | 4 + .../ethernet/netronome/nfp/crypto/crypto.h | 16 +++ .../net/ethernet/netronome/nfp/crypto/fw.h | 82 +++++++++++ .../net/ethernet/netronome/nfp/crypto/tls.c | 127 ++++++++++++++++++ drivers/net/ethernet/netronome/nfp/nfp_net.h | 1 + .../ethernet/netronome/nfp/nfp_net_common.c | 10 +- .../net/ethernet/netronome/nfp/nfp_net_ctrl.h | 1 + 9 files changed, 245 insertions(+), 2 deletions(-) create mode 100644 drivers/net/ethernet/netronome/nfp/crypto/crypto.h create mode 100644 drivers/net/ethernet/netronome/nfp/crypto/fw.h create mode 100644 drivers/net/ethernet/netronome/nfp/crypto/tls.c diff --git a/drivers/net/ethernet/netronome/Kconfig b/drivers/net/ethernet/netronome/Kconfig index 4ad5109059e0..bac5be4d4f43 100644 --- a/drivers/net/ethernet/netronome/Kconfig +++ b/drivers/net/ethernet/netronome/Kconfig @@ -20,6 +20,7 @@ config NFP tristate "Netronome(R) NFP4000/NFP6000 NIC driver" depends on PCI && PCI_MSI depends on VXLAN || VXLAN=n + depends on TLS && TLS_DEVICE || 
TLS_DEVICE=n select NET_DEVLINK ---help--- This driver supports the Netronome(R) NFP4000/NFP6000 based diff --git a/drivers/net/ethernet/netronome/nfp/Makefile b/drivers/net/ethernet/netronome/nfp/Makefile index e40893692a8e..2805641965f3 100644 --- a/drivers/net/ethernet/netronome/nfp/Makefile +++ b/drivers/net/ethernet/netronome/nfp/Makefile @@ -35,6 +35,11 @@ nfp-objs := \ nfp_shared_buf.o \ nic/main.o +ifeq ($(CONFIG_TLS_DEVICE),y) +nfp-objs += \ + crypto/tls.o +endif + ifeq ($(CONFIG_NFP_APP_FLOWER),y) nfp-objs += \ flower/action.o \ diff --git a/drivers/net/ethernet/netronome/nfp/ccm.h b/drivers/net/ethernet/netronome/nfp/ccm.h index c84be54abb4c..01efa779ab31 100644 --- a/drivers/net/ethernet/netronome/nfp/ccm.h +++ b/drivers/net/ethernet/netronome/nfp/ccm.h @@ -22,6 +22,10 @@ enum nfp_ccm_type { NFP_CCM_TYPE_BPF_MAP_GETNEXT = 6, NFP_CCM_TYPE_BPF_MAP_GETFIRST = 7, NFP_CCM_TYPE_BPF_BPF_EVENT = 8, + NFP_CCM_TYPE_CRYPTO_RESET = 9, + NFP_CCM_TYPE_CRYPTO_ADD = 10, + NFP_CCM_TYPE_CRYPTO_DEL = 11, + NFP_CCM_TYPE_CRYPTO_UPDATE = 12, __NFP_CCM_TYPE_MAX, }; diff --git a/drivers/net/ethernet/netronome/nfp/crypto/crypto.h b/drivers/net/ethernet/netronome/nfp/crypto/crypto.h new file mode 100644 index 000000000000..43aed51a8769 --- /dev/null +++ b/drivers/net/ethernet/netronome/nfp/crypto/crypto.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */ +/* Copyright (C) 2019 Netronome Systems, Inc. 
*/ + +#ifndef NFP_CRYPTO_H +#define NFP_CRYPTO_H 1 + +#ifdef CONFIG_TLS_DEVICE +int nfp_net_tls_init(struct nfp_net *nn); +#else +static inline int nfp_net_tls_init(struct nfp_net *nn) +{ + return 0; +} +#endif + +#endif diff --git a/drivers/net/ethernet/netronome/nfp/crypto/fw.h b/drivers/net/ethernet/netronome/nfp/crypto/fw.h new file mode 100644 index 000000000000..192ba907d91b --- /dev/null +++ b/drivers/net/ethernet/netronome/nfp/crypto/fw.h @@ -0,0 +1,82 @@ +/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */ +/* Copyright (C) 2019 Netronome Systems, Inc. */ + +#ifndef NFP_CRYPTO_FW_H +#define NFP_CRYPTO_FW_H 1 + +#include "../ccm.h" + +#define NFP_NET_CRYPTO_OP_TLS_1_2_AES_GCM_128_ENC 0 +#define NFP_NET_CRYPTO_OP_TLS_1_2_AES_GCM_128_DEC 1 + +struct nfp_crypto_reply_simple { + struct nfp_ccm_hdr hdr; + __be32 error; +}; + +struct nfp_crypto_req_reset { + struct nfp_ccm_hdr hdr; + __be32 ep_id; +}; + +#define NFP_NET_TLS_IPVER GENMASK(15, 12) +#define NFP_NET_TLS_VLAN GENMASK(11, 0) +#define NFP_NET_TLS_VLAN_UNUSED 4095 + +struct nfp_crypto_req_add_front { + struct nfp_ccm_hdr hdr; + __be32 ep_id; + u8 resv[3]; + u8 opcode; + u8 key_len; + __be16 ipver_vlan __packed; + u8 l4_proto; +}; + +struct nfp_crypto_req_add_back { + __be16 src_port; + __be16 dst_port; + __be32 key[8]; + __be32 salt; + __be32 iv[2]; + __be32 counter; + __be32 rec_no[2]; + __be32 tcp_seq; +}; + +struct nfp_crypto_req_add_v4 { + struct nfp_crypto_req_add_front front; + __be32 src_ip; + __be32 dst_ip; + struct nfp_crypto_req_add_back back; +}; + +struct nfp_crypto_req_add_v6 { + struct nfp_crypto_req_add_front front; + __be32 src_ip[4]; + __be32 dst_ip[4]; + struct nfp_crypto_req_add_back back; +}; + +struct nfp_crypto_reply_add { + struct nfp_ccm_hdr hdr; + __be32 error; + __be32 handle[2]; +}; + +struct nfp_crypto_req_del { + struct nfp_ccm_hdr hdr; + __be32 ep_id; + __be32 handle[2]; +}; + +struct nfp_crypto_req_update { + struct nfp_ccm_hdr hdr; + __be32 ep_id; + u8 resv[3]; + 
u8 opcode; + __be32 handle[2]; + __be32 rec_no[2]; + __be32 tcp_seq; +}; +#endif diff --git a/drivers/net/ethernet/netronome/nfp/crypto/tls.c b/drivers/net/ethernet/netronome/nfp/crypto/tls.c new file mode 100644 index 000000000000..c5909f069ee8 --- /dev/null +++ b/drivers/net/ethernet/netronome/nfp/crypto/tls.c @@ -0,0 +1,127 @@ +// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +/* Copyright (C) 2019 Netronome Systems, Inc. */ + +#include +#include + +#include "../ccm.h" +#include "../nfp_net.h" +#include "crypto.h" +#include "fw.h" + +#define NFP_NET_TLS_CCM_MBOX_OPS_MASK \ + (BIT(NFP_CCM_TYPE_CRYPTO_RESET) | \ + BIT(NFP_CCM_TYPE_CRYPTO_ADD) | \ + BIT(NFP_CCM_TYPE_CRYPTO_DEL) | \ + BIT(NFP_CCM_TYPE_CRYPTO_UPDATE)) + +#define NFP_NET_TLS_OPCODE_MASK_RX \ + BIT(NFP_NET_CRYPTO_OP_TLS_1_2_AES_GCM_128_DEC) + +#define NFP_NET_TLS_OPCODE_MASK_TX \ + BIT(NFP_NET_CRYPTO_OP_TLS_1_2_AES_GCM_128_ENC) + +#define NFP_NET_TLS_OPCODE_MASK \ + (NFP_NET_TLS_OPCODE_MASK_RX | NFP_NET_TLS_OPCODE_MASK_TX) + +static struct sk_buff * +nfp_net_tls_alloc_simple(struct nfp_net *nn, size_t req_sz, gfp_t flags) +{ + return nfp_ccm_mbox_alloc(nn, req_sz, + sizeof(struct nfp_crypto_reply_simple), + flags); +} + +static int +nfp_net_tls_communicate_simple(struct nfp_net *nn, struct sk_buff *skb, + const char *name, enum nfp_ccm_type type) +{ + struct nfp_crypto_reply_simple *reply; + int err; + + err = nfp_ccm_mbox_communicate(nn, skb, type, + sizeof(*reply), sizeof(*reply)); + if (err) { + nn_dp_warn(&nn->dp, "failed to %s TLS: %d\n", name, err); + return err; + } + + reply = (void *)skb->data; + err = -be32_to_cpu(reply->error); + if (err) + nn_dp_warn(&nn->dp, "failed to %s TLS, fw replied: %d\n", + name, err); + dev_consume_skb_any(skb); + + return err; +} + +static int +nfp_net_tls_add(struct net_device *netdev, struct sock *sk, + enum tls_offload_ctx_dir direction, + struct tls_crypto_info *crypto_info, + u32 start_offload_tcp_sn) +{ + return -EOPNOTSUPP; +} + +static void 
+nfp_net_tls_del(struct net_device *netdev, struct tls_context *tls_ctx, + enum tls_offload_ctx_dir direction) +{ +} + +static const struct tlsdev_ops nfp_net_tls_ops = { + .tls_dev_add = nfp_net_tls_add, + .tls_dev_del = nfp_net_tls_del, +}; + +static int nfp_net_tls_reset(struct nfp_net *nn) +{ + struct nfp_crypto_req_reset *req; + struct sk_buff *skb; + + skb = nfp_net_tls_alloc_simple(nn, sizeof(*req), GFP_KERNEL); + if (!skb) + return -ENOMEM; + + req = (void *)skb->data; + req->ep_id = 0; + + return nfp_net_tls_communicate_simple(nn, skb, "reset", + NFP_CCM_TYPE_CRYPTO_RESET); +} + +int nfp_net_tls_init(struct nfp_net *nn) +{ + struct net_device *netdev = nn->dp.netdev; + int err; + + if (!(nn->tlv_caps.crypto_ops & NFP_NET_TLS_OPCODE_MASK)) + return 0; + + if ((nn->tlv_caps.mbox_cmsg_types & NFP_NET_TLS_CCM_MBOX_OPS_MASK) != + NFP_NET_TLS_CCM_MBOX_OPS_MASK) + return 0; + + if (!nfp_ccm_mbox_fits(nn, sizeof(struct nfp_crypto_req_add_v6))) { + nn_warn(nn, "disabling TLS offload - mbox too small: %d\n", + nn->tlv_caps.mbox_len); + return 0; + } + + err = nfp_net_tls_reset(nn); + if (err) + return err; + + nn_ctrl_bar_lock(nn); + nn_writel(nn, nn->tlv_caps.crypto_enable_off, 0); + err = __nfp_net_reconfig(nn, NFP_NET_CFG_UPDATE_CRYPTO); + nn_ctrl_bar_unlock(nn); + if (err) + return err; + + netdev->tlsdev_ops = &nfp_net_tls_ops; + + return 0; +} diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net.h b/drivers/net/ethernet/netronome/nfp/nfp_net.h index 134d2709cd70..7010c9f1e676 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_net.h @@ -894,6 +894,7 @@ void nfp_ctrl_close(struct nfp_net *nn); void nfp_net_set_ethtool_ops(struct net_device *netdev); void nfp_net_info(struct nfp_net *nn); +int __nfp_net_reconfig(struct nfp_net *nn, u32 update); int nfp_net_reconfig(struct nfp_net *nn, u32 update); unsigned int nfp_net_rss_key_sz(struct nfp_net *nn); void nfp_net_rss_write_itbl(struct nfp_net *nn); diff --git 
a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c index 0ccc5206340b..ac6ea6d3557b 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c @@ -44,6 +44,7 @@ #include "nfp_net.h" #include "nfp_net_sriov.h" #include "nfp_port.h" +#include "crypto/crypto.h" /** * nfp_net_get_fw_version() - Read and parse the FW version @@ -270,7 +271,7 @@ static void nfp_net_reconfig_wait_posted(struct nfp_net *nn) * * Return: Negative errno on error, 0 on success */ -static int __nfp_net_reconfig(struct nfp_net *nn, u32 update) +int __nfp_net_reconfig(struct nfp_net *nn, u32 update) { int ret; @@ -4005,9 +4006,14 @@ int nfp_net_init(struct nfp_net *nn) if (err) return err; - if (nn->dp.netdev) + if (nn->dp.netdev) { nfp_net_netdev_init(nn); + err = nfp_net_tls_init(nn); + if (err) + return err; + } + nfp_net_vecs_init(nn); if (!nn->dp.netdev) diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h index bd4e2194dda5..b570c90fa96c 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h @@ -135,6 +135,7 @@ #define NFP_NET_CFG_UPDATE_MACADDR (0x1 << 11) /* MAC address change */ #define NFP_NET_CFG_UPDATE_MBOX (0x1 << 12) /* Mailbox update */ #define NFP_NET_CFG_UPDATE_VF (0x1 << 13) /* VF settings change */ +#define NFP_NET_CFG_UPDATE_CRYPTO (0x1 << 14) /* Crypto on/off */ #define NFP_NET_CFG_UPDATE_ERR (0x1 << 31) /* A error occurred */ #define NFP_NET_CFG_TXRS_ENABLE 0x0008 #define NFP_NET_CFG_RXRS_ENABLE 0x0010 From patchwork Wed Jun 5 21:11:37 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jakub Kicinski X-Patchwork-Id: 1110771 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: 
patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=netronome.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=netronome-com.20150623.gappssmtp.com header.i=@netronome-com.20150623.gappssmtp.com header.b="SpyhmVvw"; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 45K1hH3DDfz9s3l for ; Thu, 6 Jun 2019 07:12:15 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726826AbfFEVMO (ORCPT ); Wed, 5 Jun 2019 17:12:14 -0400 Received: from mail-qt1-f193.google.com ([209.85.160.193]:36230 "EHLO mail-qt1-f193.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726636AbfFEVMN (ORCPT ); Wed, 5 Jun 2019 17:12:13 -0400 Received: by mail-qt1-f193.google.com with SMTP id u12so260609qth.3 for ; Wed, 05 Jun 2019 14:12:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=netronome-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=dLPGcpdSfCRyLS7Y0eLFF/7oaj5BqRJXgjbceBIYUm4=; b=SpyhmVvwml9onZ0Njos468DQURqk7c/5lTuIfyYgbgpDLS6n/BOLyRIYh7SL9Yog0D drYfBCfO5FwtR4pke+vjQfo3xaDKxFhvcfc1nyvwmDEEmAhmf6WS4xQdENgBZdplr1is uiWH5vN89p7R0fc5Y2/e5s6VvdUdHNNm0tqsvP6vS0iaoAha+S9XxN+hNynUiaUV9tP2 GrKQai+C0ayE6qYyvQRzIdDGkzrDsp68XUsQoFqFx7LCBqpOLcrDa4Am8IMV0A3rzb1f pEfu+3p9ItMn93UknNriwze1ZOdSqKU7WfvXKitX397aPDzUZIhuIKy14Kai16xnkawY iH6g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; 
From: Jakub Kicinski
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, oss-drivers@netronome.com, alexei.starovoitov@gmail.com, Jakub Kicinski, Dirk van der Merwe
Subject: [PATCH net-next 07/13] nfp: prepare for more TX metadata prepend
Date: Wed, 5 Jun 2019 14:11:37 -0700
Message-Id: <20190605211143.29689-8-jakub.kicinski@netronome.com>
In-Reply-To: <20190605211143.29689-1-jakub.kicinski@netronome.com>

Subsequent patches will add support for more TX metadata fields. Prepare for this by handling an additional double word: the firmware handle, carried as metadata type 7.
Signed-off-by: Dirk van der Merwe
Signed-off-by: Jakub Kicinski
---
 .../ethernet/netronome/nfp/nfp_net_common.c   | 44 ++++++++++++++-----
 .../net/ethernet/netronome/nfp/nfp_net_ctrl.h |  1 +
 2 files changed, 35 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
index ac6ea6d3557b..df21effec320 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
@@ -808,24 +808,47 @@ static void nfp_net_tx_xmit_more_flush(struct nfp_net_tx_ring *tx_ring)
 	tx_ring->wr_ptr_add = 0;
 }
 
-static int nfp_net_prep_port_id(struct sk_buff *skb)
+static int nfp_net_prep_tx_meta(struct sk_buff *skb, u64 tls_handle)
 {
 	struct metadata_dst *md_dst = skb_metadata_dst(skb);
 	unsigned char *data;
+	u32 meta_id = 0;
+	int md_bytes;
 
-	if (likely(!md_dst))
-		return 0;
-	if (unlikely(md_dst->type != METADATA_HW_PORT_MUX))
+	if (likely(!md_dst && !tls_handle))
 		return 0;
+	if (unlikely(md_dst && md_dst->type != METADATA_HW_PORT_MUX)) {
+		if (!tls_handle)
+			return 0;
+		md_dst = NULL;
+	}
 
-	if (unlikely(skb_cow_head(skb, 8)))
+	md_bytes = 4 + !!md_dst * 4 + !!tls_handle * 8;
+
+	if (unlikely(skb_cow_head(skb, md_bytes)))
 		return -ENOMEM;
 
-	data = skb_push(skb, 8);
-	put_unaligned_be32(NFP_NET_META_PORTID, data);
-	put_unaligned_be32(md_dst->u.port_info.port_id, data + 4);
+	meta_id = 0;
+	data = skb_push(skb, md_bytes) + md_bytes;
+	if (md_dst) {
+		data -= 4;
+		put_unaligned_be32(md_dst->u.port_info.port_id, data);
+		meta_id = NFP_NET_META_PORTID;
+	}
+	if (tls_handle) {
+		/* conn handle is opaque, we just use u64 to be able to quickly
+		 * compare it to zero
+		 */
+		data -= 8;
+		memcpy(data, &tls_handle, sizeof(tls_handle));
+		meta_id <<= NFP_NET_META_FIELD_SIZE;
+		meta_id |= NFP_NET_META_CONN_HANDLE;
+	}
+
+	data -= 4;
+	put_unaligned_be32(meta_id, data);
 
-	return 8;
+	return md_bytes;
 }
 
 /**
@@ -848,6 +871,7 @@ static int nfp_net_tx(struct sk_buff *skb, struct net_device *netdev)
 	struct nfp_net_dp *dp;
 	dma_addr_t dma_addr;
 	unsigned int fsize;
+	u64 tls_handle = 0;
 	u16 qidx;
 
 	dp = &nn->dp;
@@ -869,7 +893,7 @@ static int nfp_net_tx(struct sk_buff *skb, struct net_device *netdev)
 		return NETDEV_TX_BUSY;
 	}
 
-	md_bytes = nfp_net_prep_port_id(skb);
+	md_bytes = nfp_net_prep_tx_meta(skb, tls_handle);
 	if (unlikely(md_bytes < 0))
 		goto err_flush;
 
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h
index b570c90fa96c..ee6b24e4eacd 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h
@@ -44,6 +44,7 @@
 #define NFP_NET_META_MARK		2
 #define NFP_NET_META_PORTID		5
 #define NFP_NET_META_CSUM		6 /* checksum complete type */
+#define NFP_NET_META_CONN_HANDLE	7
 
 #define NFP_META_PORT_ID_CTRL		~0U
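The layout logic of nfp_net_prep_tx_meta() can be checked in isolation with a small userspace sketch. The constants are mirrored from nfp_net_ctrl.h; the helper names (nfp_meta_bytes, nfp_meta_id) are illustrative, not part of the driver:

```c
#include <assert.h>
#include <stdint.h>

/* Constants mirrored from nfp_net_ctrl.h for illustration */
#define NFP_NET_META_FIELD_SIZE   4
#define NFP_NET_META_PORTID       5
#define NFP_NET_META_CONN_HANDLE  7

/* Size of the prepend when at least one field is present:
 * 4 bytes for the leading meta_id word, plus 4 bytes for a port id
 * and 8 bytes for a TLS connection handle when present.
 * (The driver returns 0 early when neither field is needed.) */
static int nfp_meta_bytes(int has_port_id, int has_tls_handle)
{
	return 4 + !!has_port_id * 4 + !!has_tls_handle * 8;
}

/* Pack the 4-bit type tags the way the patch does: each additional
 * field shifts the accumulated id left by NFP_NET_META_FIELD_SIZE. */
static uint32_t nfp_meta_id(int has_port_id, int has_tls_handle)
{
	uint32_t meta_id = 0;

	if (has_port_id)
		meta_id = NFP_NET_META_PORTID;
	if (has_tls_handle) {
		meta_id <<= NFP_NET_META_FIELD_SIZE;
		meta_id |= NFP_NET_META_CONN_HANDLE;
	}
	return meta_id;
}
```

With both fields present this gives a 16-byte prepend and a meta_id of 0x57, i.e. port id (type 5) in the upper nibble and connection handle (type 7) in the lower one.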
From: Jakub Kicinski
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, oss-drivers@netronome.com, alexei.starovoitov@gmail.com, Jakub Kicinski, Dirk van der Merwe
Subject: [PATCH net-next 08/13] net/tls: split the TLS_DRIVER_STATE_SIZE and bump TX to 16 bytes
Date: Wed, 5 Jun 2019 14:11:38 -0700
Message-Id: <20190605211143.29689-9-jakub.kicinski@netronome.com>

8 bytes of driver state have been enough so far, but for drivers which have to store an 8-byte handle it is no longer practical to store the state directly in the context. Drivers generally don't need much extra state on the RX side, while the TX side has to track TCP sequence numbers. Split the maximum driver state size into separate RX and TX limits. struct tls_offload_context_tx currently stands at 616 bytes and struct tls_offload_context_rx at 368 bytes. Upcoming work will consume an extra 8 bytes in both for kernel-driven resync. This means we can bump the TX side to 16 bytes and still fit into the same number of cache lines, but on the RX side we would be 8 bytes over.
Signed-off-by: Jakub Kicinski
Reviewed-by: Dirk van der Merwe
---
 include/net/tls.h | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/include/net/tls.h b/include/net/tls.h
index 0a0072636009..3094db5398a9 100644
--- a/include/net/tls.h
+++ b/include/net/tls.h
@@ -202,12 +202,12 @@ struct tls_offload_context_tx {
 	 * Currently the belief is that there is not enough
 	 * driver specific state to justify another layer of indirection
 	 */
-#define TLS_DRIVER_STATE_SIZE (max_t(size_t, 8, sizeof(void *)))
+#define TLS_DRIVER_STATE_SIZE_TX	16
 };
 
 #define TLS_OFFLOAD_CONTEXT_SIZE_TX					\
 	(ALIGN(sizeof(struct tls_offload_context_tx), sizeof(void *)) + \
-	 TLS_DRIVER_STATE_SIZE)
+	 TLS_DRIVER_STATE_SIZE_TX)
 
 struct cipher_context {
 	char *iv;
@@ -307,11 +307,12 @@ struct tls_offload_context_rx {
 	 * Currently the belief is that there is not enough
 	 * driver specific state to justify another layer of indirection
 	 */
+#define TLS_DRIVER_STATE_SIZE_RX	8
 };
 
 #define TLS_OFFLOAD_CONTEXT_SIZE_RX					\
 	(ALIGN(sizeof(struct tls_offload_context_rx), sizeof(void *)) + \
-	 TLS_DRIVER_STATE_SIZE)
+	 TLS_DRIVER_STATE_SIZE_RX)
 
 int wait_on_pending_writer(struct sock *sk, long *timeo);
 int tls_sk_query(struct sock *sk, int optname, char __user *optval,
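The cache-line arithmetic in the commit message can be verified with a short sketch. The 616/368-byte sizes are the figures quoted in the message (presumably x86_64 at the time), ALIGN_UP mirrors the kernel's ALIGN macro, and ctx_size models how TLS_OFFLOAD_CONTEXT_SIZE_TX/RX are computed:

```c
#include <assert.h>
#include <stddef.h>

/* Userspace stand-in for the kernel's ALIGN() */
#define ALIGN_UP(x, a)  (((x) + (a) - 1) & ~((size_t)(a) - 1))

#define TLS_DRIVER_STATE_SIZE_TX 16
#define TLS_DRIVER_STATE_SIZE_RX 8

/* Total context size: base struct aligned to pointer size, plus the
 * flat driver-state area reserved after it. */
static size_t ctx_size(size_t base, size_t driver_state)
{
	return ALIGN_UP(base, sizeof(void *)) + driver_state;
}

/* Number of 64-byte cache lines a context of the given size spans. */
static size_t cache_lines(size_t bytes)
{
	return (bytes + 63) / 64;
}
```

Adding the 8 bytes of upcoming resync state, the TX context with a 16-byte driver area still spans the same number of 64-byte lines as it would with 8 bytes, while the RX context would spill into an extra line at 16 bytes, matching the reasoning above.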
From: Jakub Kicinski
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, oss-drivers@netronome.com, alexei.starovoitov@gmail.com, Jakub Kicinski, Dirk van der Merwe
Subject: [PATCH net-next 09/13] net/tls: simplify driver context retrieval
Date: Wed, 5 Jun 2019 14:11:39 -0700
Message-Id: <20190605211143.29689-10-jakub.kicinski@netronome.com>

Currently drivers have to ensure the alignment of their TLS state structure, which leads to unnecessary layers of getters and encapsulated structures in each driver. Simplify all this by marking the driver state as aligned (driver_state members are currently aligned, so no hole is added; besides, the ALIGN in TLS_OFFLOAD_CONTEXT_SIZE_RX/TX would have reserved this extra space anyway). With that we can add a common accessor to the core.
Signed-off-by: Jakub Kicinski
Reviewed-by: Dirk van der Merwe
---
 include/net/tls.h | 28 ++++++++++++++++++++++------
 1 file changed, 22 insertions(+), 6 deletions(-)

diff --git a/include/net/tls.h b/include/net/tls.h
index 3094db5398a9..3da0d941e729 100644
--- a/include/net/tls.h
+++ b/include/net/tls.h
@@ -40,6 +40,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -197,7 +198,7 @@ struct tls_offload_context_tx {
 	struct scatterlist sg_tx_data[MAX_SKB_FRAGS];
 	void (*sk_destruct)(struct sock *sk);
-	u8 driver_state[];
+	u8 driver_state[] __aligned(8);
 	/* The TLS layer reserves room for driver specific state
 	 * Currently the belief is that there is not enough
 	 * driver specific state to justify another layer of indirection
@@ -206,8 +207,7 @@ struct tls_offload_context_tx {
 };
 
 #define TLS_OFFLOAD_CONTEXT_SIZE_TX					\
-	(ALIGN(sizeof(struct tls_offload_context_tx), sizeof(void *)) + \
-	 TLS_DRIVER_STATE_SIZE_TX)
+	(sizeof(struct tls_offload_context_tx) + TLS_DRIVER_STATE_SIZE_TX)
 
 struct cipher_context {
 	char *iv;
@@ -302,7 +302,7 @@ struct tls_offload_context_rx {
 	/* sw must be the first member of tls_offload_context_rx */
 	struct tls_sw_context_rx sw;
 	atomic64_t resync_req;
-	u8 driver_state[];
+	u8 driver_state[] __aligned(8);
 	/* The TLS layer reserves room for driver specific state
 	 * Currently the belief is that there is not enough
 	 * driver specific state to justify another layer of indirection
@@ -311,8 +311,7 @@ struct tls_offload_context_rx {
 };
 
 #define TLS_OFFLOAD_CONTEXT_SIZE_RX					\
-	(ALIGN(sizeof(struct tls_offload_context_rx), sizeof(void *)) + \
-	 TLS_DRIVER_STATE_SIZE_RX)
+	(sizeof(struct tls_offload_context_rx) + TLS_DRIVER_STATE_SIZE_RX)
 
 int wait_on_pending_writer(struct sock *sk, long *timeo);
 int tls_sk_query(struct sock *sk, int optname, char __user *optval,
@@ -557,6 +556,23 @@ tls_offload_ctx_rx(const struct tls_context *tls_ctx)
 	return (struct tls_offload_context_rx *)tls_ctx->priv_ctx_rx;
 }
 
+#if IS_ENABLED(CONFIG_TLS_DEVICE)
+static inline void *__tls_driver_ctx(struct tls_context *tls_ctx,
+				     enum tls_offload_ctx_dir direction)
+{
+	if (direction == TLS_OFFLOAD_CTX_DIR_TX)
+		return tls_offload_ctx_tx(tls_ctx)->driver_state;
+	else
+		return tls_offload_ctx_rx(tls_ctx)->driver_state;
+}
+
+static inline void *
+tls_driver_ctx(const struct sock *sk, enum tls_offload_ctx_dir direction)
+{
+	return __tls_driver_ctx(tls_get_ctx(sk), direction);
+}
+#endif
+
 /* The TLS context is valid until sk_destruct is called */
 static inline void tls_offload_rx_resync_request(struct sock *sk, __be32 seq)
 {
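The effect of marking the flexible driver_state[] member as 8-byte aligned can be demonstrated with a toy struct. The type below is illustrative, not the real tls_offload_context_tx, and `__attribute__((aligned(8)))` is the userspace spelling of the kernel's `__aligned(8)`:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy stand-in for tls_offload_context_tx. Once the flexible array
 * member is declared 8-byte aligned, the compiler aligns its offset
 * to 8 and pads sizeof(struct ...) to a multiple of 8 as well, so the
 * total context size can be computed as a flat sizeof() + state size,
 * with no explicit ALIGN() needed. */
struct toy_offload_ctx {
	void (*sk_destruct)(void);
	uint32_t expected_seq;
	uint8_t driver_state[] __attribute__((aligned(8)));
};

#define TOY_DRIVER_STATE_SIZE 16
#define TOY_CTX_SIZE (sizeof(struct toy_offload_ctx) + TOY_DRIVER_STATE_SIZE)
```

Because driver_state starts on an 8-byte boundary, drivers can cast the returned pointer to their own state struct directly, which is what the new tls_driver_ctx() accessor relies on.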
From: Jakub Kicinski
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, oss-drivers@netronome.com, alexei.starovoitov@gmail.com, Dirk van der Merwe, Jakub Kicinski
Subject: [PATCH net-next 10/13] net/tls: export TLS per skb encryption
Date: Wed, 5 Jun 2019 14:11:40 -0700
Message-Id: <20190605211143.29689-11-jakub.kicinski@netronome.com>

From: Dirk van der Merwe

While offloading TLS connections, drivers need to handle the case where out-of-order packets need to be transmitted. Some drivers obtain the entire TLS record for the specific skb to provide as context to the hardware for encryption. However, other designs may want to keep the hardware state intact and perform the out-of-order encryption entirely on the host. To achieve this, export the already existing software encryption fallback path so drivers can access it.

Signed-off-by: Dirk van der Merwe
Reviewed-by: Jakub Kicinski
---
 include/net/tls.h             | 1 +
 net/tls/tls_device_fallback.c | 6 ++++++
 2 files changed, 7 insertions(+)

diff --git a/include/net/tls.h b/include/net/tls.h
index 3da0d941e729..d1a4f365d6be 100644
--- a/include/net/tls.h
+++ b/include/net/tls.h
@@ -590,6 +590,7 @@ void tls_unregister_device(struct tls_device *device);
 int tls_device_decrypted(struct sock *sk, struct sk_buff *skb);
 int decrypt_skb(struct sock *sk, struct sk_buff *skb,
 		struct scatterlist *sgout);
+struct sk_buff *tls_encrypt_skb(struct sk_buff *skb);
 
 struct sk_buff *tls_validate_xmit_skb(struct sock *sk,
 				      struct net_device *dev,
diff --git a/net/tls/tls_device_fallback.c b/net/tls/tls_device_fallback.c
index 5a087e1981c3..1d2d804ac633 100644
--- a/net/tls/tls_device_fallback.c
+++ b/net/tls/tls_device_fallback.c
@@ -426,6 +426,12 @@ struct sk_buff *tls_validate_xmit_skb(struct sock *sk,
 }
 EXPORT_SYMBOL_GPL(tls_validate_xmit_skb);
 
+struct sk_buff *tls_encrypt_skb(struct sk_buff *skb)
+{
+	return tls_sw_fallback(skb->sk, skb);
+}
+EXPORT_SYMBOL_GPL(tls_encrypt_skb);
+
 int tls_sw_fallback_init(struct sock *sk,
			 struct tls_offload_context_tx *offload_ctx,
			 struct tls_crypto_info *crypto_info)
From: Jakub Kicinski
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, oss-drivers@netronome.com, alexei.starovoitov@gmail.com, Dirk van der Merwe, Jakub Kicinski
Subject: [PATCH net-next 11/13] nfp: tls: add datapath support for TLS TX
Date: Wed, 5 Jun 2019 14:11:41 -0700
Message-Id: <20190605211143.29689-12-jakub.kicinski@netronome.com>
From: Dirk van der Merwe

Prepend the connection handle to each transmitted TLS packet. For each connection, the driver tracks the next sequence number expected. If an out-of-order packet is observed, the driver calls into the TLS kernel code to re-encrypt that particular skb.

Signed-off-by: Dirk van der Merwe
Signed-off-by: Jakub Kicinski
---
 .../ethernet/netronome/nfp/crypto/crypto.h    |  7 +++
 drivers/net/ethernet/netronome/nfp/nfp_net.h  |  2 +
 .../ethernet/netronome/nfp/nfp_net_common.c   | 56 +++++++++++++++++++
 3 files changed, 65 insertions(+)

diff --git a/drivers/net/ethernet/netronome/nfp/crypto/crypto.h b/drivers/net/ethernet/netronome/nfp/crypto/crypto.h
index 43aed51a8769..1f97fb443134 100644
--- a/drivers/net/ethernet/netronome/nfp/crypto/crypto.h
+++ b/drivers/net/ethernet/netronome/nfp/crypto/crypto.h
@@ -4,6 +4,13 @@
 #ifndef NFP_CRYPTO_H
 #define NFP_CRYPTO_H 1
 
+struct nfp_net_tls_offload_ctx {
+	__be32 fw_handle[2];
+
+	u32 next_seq;
+	bool out_of_sync;
+};
+
 #ifdef CONFIG_TLS_DEVICE
 int nfp_net_tls_init(struct nfp_net *nn);
 #else
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net.h b/drivers/net/ethernet/netronome/nfp/nfp_net.h
index 7010c9f1e676..689e9e1938c8 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net.h
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net.h
@@ -459,6 +459,7 @@ struct nfp_stat_pair {
  * @netdev:		Backpointer to net_device structure
  * @is_vf:		Is the driver attached to a VF?
  * @chained_metadata_format:  Firemware will use new metadata format
+ * @ktls_tx:		Is kTLS TX enabled?
  * @rx_dma_dir:		Mapping direction for RX buffers
  * @rx_dma_off:		Offset at which DMA packets (for XDP headroom)
  * @rx_offset:		Offset in the RX buffers where packet data starts
@@ -483,6 +484,7 @@ struct nfp_net_dp {
 	u8 is_vf:1;
 	u8 chained_metadata_format:1;
+	u8 ktls_tx:1;
 
 	u8 rx_dma_dir;
 	u8 rx_offset;
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
index df21effec320..52f20f191eed 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
@@ -36,6 +36,7 @@
 #include
 #include
+#include
 #include
 
 #include "nfpcore/nfp_nsp.h"
@@ -801,6 +802,55 @@ static void nfp_net_tx_csum(struct nfp_net_dp *dp,
 	u64_stats_update_end(&r_vec->tx_sync);
 }
 
+#ifdef CONFIG_TLS_DEVICE
+static struct sk_buff *
+nfp_net_tls_tx(struct nfp_net_dp *dp, struct sk_buff *skb, u64 *tls_handle,
+	       int *nr_frags)
+{
+	struct nfp_net_tls_offload_ctx *ntls;
+	struct sk_buff *nskb;
+	u32 datalen, seq;
+
+	if (likely(!dp->ktls_tx))
+		return skb;
+	if (!skb->sk || !tls_is_sk_tx_device_offloaded(skb->sk))
+		return skb;
+
+	datalen = skb->len - (skb_transport_offset(skb) + tcp_hdrlen(skb));
+	seq = ntohl(tcp_hdr(skb)->seq);
+	ntls = tls_driver_ctx(skb->sk, TLS_OFFLOAD_CTX_DIR_TX);
+	if (unlikely(ntls->next_seq != seq || ntls->out_of_sync)) {
+		/* Pure ACK out of order already */
+		if (!datalen)
+			return skb;
+
+		nskb = tls_encrypt_skb(skb);
+		if (!nskb)
+			return NULL;
+		/* encryption wasn't necessary */
+		if (nskb == skb)
+			return skb;
+		/* we don't re-check ring space */
+		if (unlikely(skb_is_nonlinear(nskb))) {
+			nn_dp_warn(dp, "tls_encrypt_skb() produced fragmented frame\n");
+			dev_kfree_skb_any(nskb);
+			return NULL;
+		}
+
+		/* jump forward, a TX may have gotten lost, need to sync TX */
+		if (!ntls->out_of_sync && seq - ntls->next_seq < U32_MAX / 4)
+			ntls->out_of_sync = true;
+
+		*nr_frags = 0;
+		return nskb;
+	}
+
+	memcpy(tls_handle, ntls->fw_handle, sizeof(ntls->fw_handle));
+	ntls->next_seq += datalen;
+	return skb;
+}
+#endif
+
 static void nfp_net_tx_xmit_more_flush(struct nfp_net_tx_ring *tx_ring)
 {
 	wmb();
@@ -893,6 +943,12 @@ static int nfp_net_tx(struct sk_buff *skb, struct net_device *netdev)
 		return NETDEV_TX_BUSY;
 	}
 
+#ifdef CONFIG_TLS_DEVICE
+	skb = nfp_net_tls_tx(dp, skb, &tls_handle, &nr_frags);
+	if (unlikely(!skb))
+		goto err_flush;
+#endif
+
 	md_bytes = nfp_net_prep_tx_meta(skb, tls_handle);
 	if (unlikely(md_bytes < 0))
 		goto err_flush;
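The `seq - ntls->next_seq < U32_MAX / 4` test in nfp_net_tls_tx() is a wraparound-safe way of distinguishing a forward jump (a lost TX that needs resync) from a retransmission. A standalone sketch of that check (the helper name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* A segment counts as "jumped forward" only when its sequence number
 * is ahead of the expected one by less than a quarter of the 32-bit
 * space. A retransmission (seq behind next_seq) wraps around to a huge
 * unsigned difference and is excluded. Because the subtraction is done
 * in modular u32 arithmetic, the test stays correct even when the TCP
 * sequence number wraps past 2^32. */
static int seq_jumped_forward(uint32_t seq, uint32_t next_seq)
{
	return seq - next_seq < UINT32_MAX / 4;
}
```

Only the forward-jump case flips out_of_sync in the driver; retransmissions are simply re-encrypted in software via tls_encrypt_skb() without disturbing the firmware's TX state.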
c=relaxed/relaxed; d=netronome-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Wtjn18KGDSkMBtHgyLJiZkRqhJk2ccyxlrQk1FDA35Q=; b=eNcBOACoVXX2/8dFVE44W62umPs9shBSJ320mSSlVZ1r52Kth7OXa+EI6DUmOdv0Y6 FuTdPt6bZ2XUdlWw0WschDhkDhrR3ATluiLfbr8hLM5G/4pRnb4CQQy+GYmeU5mkMoeJ EXyDu3hu11P8uMKrDhletoko8YmNEI8DbacFVqGkra95umAr7Ufvl0wyNhgSfLoZ7cZZ 32I+T78x2JQJv45Yd0se4NlKgeWGk3hVPKjse2x4Yqz+PyjVu/qp6P/ZffbwfuAedII4 m6+8puKXyh0p8fE1O6Nlh7+xzq8yc5VUnAn+2lFPO2gv+mik2McG6BrXs8xQe7IdK9mC b/9g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Wtjn18KGDSkMBtHgyLJiZkRqhJk2ccyxlrQk1FDA35Q=; b=p2OzU9xXRVHjIZoM2P/otdp2gxWxeJKQVMc1cpOjH+KlI7JHxvM/bvQO+x2BSEziYD 3JEJuaLzW7/SkzSDS2lia5szBZnqnHeHKTdRsK6IZGIHXpjKDXbLyJ/BbNggf1EUNDjH 6V/4ra1sDcnSTESsodYXrbFYJu+/gSISZXTb/8XQ5v6UwIIGw43aWC7wDbEaSexe9ejN /vY+eroya7ZJnkM2t/qh3JZV/wiejAr9L2kBgmh/2Q2YPii+3BLad7GGmeLeiwHMpTKD Q+XQy/SgCFwvwkWE962hj1i1uxredRU/0v/5APSOC+zlDCUoKjuXmLzhPApaZmtIBaO5 Unww== X-Gm-Message-State: APjAAAU72f1Q3ipsysFU71QYhREBSTupoHq77HA0NkOeVoJX7GOxVmJZ 3oexCqEZ1NcKFKmN8G4YY2r+UA== X-Google-Smtp-Source: APXvYqySR35F8SaKCQm2ggOP/xOQHJ8kszZH5WNVe9qX2Tf+diLOfP5R+jJwWAhYVLkRAtla+Vu8Ig== X-Received: by 2002:ac8:2734:: with SMTP id g49mr9597099qtg.228.1559769140215; Wed, 05 Jun 2019 14:12:20 -0700 (PDT) Received: from jkicinski-Precision-T1700.netronome.com ([66.60.152.14]) by smtp.gmail.com with ESMTPSA id t20sm2933807qtr.7.2019.06.05.14.12.18 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 05 Jun 2019 14:12:19 -0700 (PDT) From: Jakub Kicinski To: davem@davemloft.net Cc: netdev@vger.kernel.org, oss-drivers@netronome.com, alexei.starovoitov@gmail.com, Dirk van der Merwe , Jakub Kicinski Subject: [PATCH net-next 12/13] nfp: tls: add/delete TLS TX connections Date: 
Wed, 5 Jun 2019 14:11:42 -0700 Message-Id: <20190605211143.29689-13-jakub.kicinski@netronome.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190605211143.29689-1-jakub.kicinski@netronome.com> References: <20190605211143.29689-1-jakub.kicinski@netronome.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Dirk van der Merwe This patch adds the functionality to add and delete TLS connections on the NFP, received from the kernel TLS callbacks. Make use of the common control message (CCM) infrastructure to propagate the kernel state to firmware. Signed-off-by: Dirk van der Merwe Signed-off-by: Jakub Kicinski --- .../net/ethernet/netronome/nfp/crypto/tls.c | 300 +++++++++++++++++- drivers/net/ethernet/netronome/nfp/nfp_net.h | 5 +- 2 files changed, 303 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/netronome/nfp/crypto/tls.c b/drivers/net/ethernet/netronome/nfp/crypto/tls.c index c5909f069ee8..3e079c8469a2 100644 --- a/drivers/net/ethernet/netronome/nfp/crypto/tls.c +++ b/drivers/net/ethernet/netronome/nfp/crypto/tls.c @@ -1,6 +1,8 @@ // SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) /* Copyright (C) 2019 Netronome Systems, Inc. 
  */
 
+#include
+#include
 #include
 #include
@@ -24,6 +26,71 @@
 #define NFP_NET_TLS_OPCODE_MASK \
 	(NFP_NET_TLS_OPCODE_MASK_RX | NFP_NET_TLS_OPCODE_MASK_TX)
 
+static void nfp_net_crypto_set_op(struct nfp_net *nn, u8 opcode, bool on)
+{
+	u32 off, val;
+
+	off = nn->tlv_caps.crypto_enable_off + round_down(opcode / 8, 4);
+
+	val = nn_readl(nn, off);
+	if (on)
+		val |= BIT(opcode & 31);
+	else
+		val &= ~BIT(opcode & 31);
+	nn_writel(nn, off, val);
+}
+
+static bool
+__nfp_net_tls_conn_cnt_changed(struct nfp_net *nn, int add,
+			       enum tls_offload_ctx_dir direction)
+{
+	u8 opcode;
+	int cnt;
+
+	opcode = NFP_NET_CRYPTO_OP_TLS_1_2_AES_GCM_128_ENC;
+	nn->ktls_tx_conn_cnt += add;
+	cnt = nn->ktls_tx_conn_cnt;
+	nn->dp.ktls_tx = !!nn->ktls_tx_conn_cnt;
+
+	/* Care only about 0 -> 1 and 1 -> 0 transitions */
+	if (cnt > 1)
+		return false;
+
+	nfp_net_crypto_set_op(nn, opcode, cnt);
+	return true;
+}
+
+static int
+nfp_net_tls_conn_cnt_changed(struct nfp_net *nn, int add,
+			     enum tls_offload_ctx_dir direction)
+{
+	int ret = 0;
+
+	/* Use the BAR lock to protect the connection counts */
+	nn_ctrl_bar_lock(nn);
+	if (__nfp_net_tls_conn_cnt_changed(nn, add, direction)) {
+		ret = __nfp_net_reconfig(nn, NFP_NET_CFG_UPDATE_CRYPTO);
+		/* Undo the cnt adjustment if failed */
+		if (ret)
+			__nfp_net_tls_conn_cnt_changed(nn, -add, direction);
+	}
+	nn_ctrl_bar_unlock(nn);
+
+	return ret;
+}
+
+static int
+nfp_net_tls_conn_add(struct nfp_net *nn, enum tls_offload_ctx_dir direction)
+{
+	return nfp_net_tls_conn_cnt_changed(nn, 1, direction);
+}
+
+static int
+nfp_net_tls_conn_remove(struct nfp_net *nn, enum tls_offload_ctx_dir direction)
+{
+	return nfp_net_tls_conn_cnt_changed(nn, -1, direction);
+}
+
 static struct sk_buff *
 nfp_net_tls_alloc_simple(struct nfp_net *nn, size_t req_sz, gfp_t flags)
 {
@@ -56,19 +123,245 @@ nfp_net_tls_communicate_simple(struct nfp_net *nn, struct sk_buff *skb,
 	return err;
 }
 
+static void nfp_net_tls_del_fw(struct nfp_net *nn, __be32 *fw_handle)
+{
+	struct nfp_crypto_req_del *req;
+	struct sk_buff *skb;
+
+	skb = nfp_net_tls_alloc_simple(nn, sizeof(*req), GFP_KERNEL);
+	if (!skb)
+		return;
+
+	req = (void *)skb->data;
+	req->ep_id = 0;
+	memcpy(req->handle, fw_handle, sizeof(req->handle));
+
+	nfp_net_tls_communicate_simple(nn, skb, "delete",
+				       NFP_CCM_TYPE_CRYPTO_DEL);
+}
+
+static struct nfp_crypto_req_add_back *
+nfp_net_tls_set_ipv4(struct nfp_crypto_req_add_v4 *req, struct sock *sk,
+		     int direction)
+{
+	struct inet_sock *inet = inet_sk(sk);
+
+	req->front.key_len += sizeof(__be32) * 2;
+	req->front.ipver_vlan = cpu_to_be16(FIELD_PREP(NFP_NET_TLS_IPVER, 4) |
+					    FIELD_PREP(NFP_NET_TLS_VLAN,
+						       NFP_NET_TLS_VLAN_UNUSED));
+
+	if (direction == TLS_OFFLOAD_CTX_DIR_TX) {
+		req->src_ip = inet->inet_saddr;
+		req->dst_ip = inet->inet_daddr;
+	} else {
+		req->src_ip = inet->inet_daddr;
+		req->dst_ip = inet->inet_saddr;
+	}
+
+	return &req->back;
+}
+
+static struct nfp_crypto_req_add_back *
+nfp_net_tls_set_ipv6(struct nfp_crypto_req_add_v6 *req, struct sock *sk,
+		     int direction)
+{
+#if IS_ENABLED(CONFIG_IPV6)
+	struct ipv6_pinfo *np = inet6_sk(sk);
+
+	req->front.key_len += sizeof(struct in6_addr) * 2;
+	req->front.ipver_vlan = cpu_to_be16(FIELD_PREP(NFP_NET_TLS_IPVER, 6) |
+					    FIELD_PREP(NFP_NET_TLS_VLAN,
+						       NFP_NET_TLS_VLAN_UNUSED));
+
+	if (direction == TLS_OFFLOAD_CTX_DIR_TX) {
+		memcpy(req->src_ip, &np->saddr, sizeof(req->src_ip));
+		memcpy(req->dst_ip, &sk->sk_v6_daddr, sizeof(req->dst_ip));
+	} else {
+		memcpy(req->src_ip, &sk->sk_v6_daddr, sizeof(req->src_ip));
+		memcpy(req->dst_ip, &np->saddr, sizeof(req->dst_ip));
+	}
+
+#endif
+	return &req->back;
+}
+
+static void
+nfp_net_tls_set_l4(struct nfp_crypto_req_add_front *front,
+		   struct nfp_crypto_req_add_back *back, struct sock *sk,
+		   int direction)
+{
+	struct inet_sock *inet = inet_sk(sk);
+
+	front->l4_proto = IPPROTO_TCP;
+
+	if (direction == TLS_OFFLOAD_CTX_DIR_TX) {
+		back->src_port = inet->inet_sport;
+		back->dst_port = inet->inet_dport;
+	} else {
+		back->src_port = inet->inet_dport;
+		back->dst_port = inet->inet_sport;
+	}
+}
+
+static u8 nfp_tls_1_2_dir_to_opcode(enum tls_offload_ctx_dir direction)
+{
+	switch (direction) {
+	case TLS_OFFLOAD_CTX_DIR_TX:
+		return NFP_NET_CRYPTO_OP_TLS_1_2_AES_GCM_128_ENC;
+	case TLS_OFFLOAD_CTX_DIR_RX:
+		return NFP_NET_CRYPTO_OP_TLS_1_2_AES_GCM_128_DEC;
+	default:
+		WARN_ON_ONCE(1);
+		return 0;
+	}
+}
+
+static bool
+nfp_net_cipher_supported(struct nfp_net *nn, u16 cipher_type,
+			 enum tls_offload_ctx_dir direction)
+{
+	u8 bit;
+
+	switch (cipher_type) {
+	case TLS_CIPHER_AES_GCM_128:
+		if (direction == TLS_OFFLOAD_CTX_DIR_TX)
+			bit = NFP_NET_CRYPTO_OP_TLS_1_2_AES_GCM_128_ENC;
+		else
+			return false;
+		break;
+	default:
+		return false;
+	}
+
+	return nn->tlv_caps.crypto_ops & BIT(bit);
+}
+
 static int
 nfp_net_tls_add(struct net_device *netdev, struct sock *sk,
 		enum tls_offload_ctx_dir direction,
 		struct tls_crypto_info *crypto_info,
 		u32 start_offload_tcp_sn)
 {
-	return -EOPNOTSUPP;
+	struct tls12_crypto_info_aes_gcm_128 *tls_ci;
+	struct nfp_net *nn = netdev_priv(netdev);
+	struct nfp_crypto_req_add_front *front;
+	struct nfp_net_tls_offload_ctx *ntls;
+	struct nfp_crypto_req_add_back *back;
+	struct nfp_crypto_reply_add *reply;
+	struct sk_buff *skb;
+	size_t req_sz;
+	bool ipv6;
+	int err;
+
+	BUILD_BUG_ON(sizeof(struct nfp_net_tls_offload_ctx) >
+		     TLS_DRIVER_STATE_SIZE_TX);
+
+	if (!nfp_net_cipher_supported(nn, crypto_info->cipher_type, direction))
+		return -EOPNOTSUPP;
+
+	switch (sk->sk_family) {
+#if IS_ENABLED(CONFIG_IPV6)
+	case AF_INET6:
+		if (sk->sk_ipv6only ||
+		    ipv6_addr_type(&sk->sk_v6_daddr) != IPV6_ADDR_MAPPED) {
+			req_sz = sizeof(struct nfp_crypto_req_add_v6);
+			ipv6 = true;
+			break;
+		}
+#endif
+		/* fall through */
+	case AF_INET:
+		req_sz = sizeof(struct nfp_crypto_req_add_v4);
+		ipv6 = false;
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	err = nfp_net_tls_conn_add(nn, direction);
+	if (err)
+		return err;
+
+	skb = nfp_ccm_mbox_alloc(nn, req_sz, sizeof(*reply), GFP_KERNEL);
+	if (!skb) {
+		err = -ENOMEM;
+		goto err_conn_remove;
+	}
+
+	front = (void *)skb->data;
+	front->ep_id = 0;
+	front->key_len = 8;
+	front->opcode = nfp_tls_1_2_dir_to_opcode(direction);
+	memset(front->resv, 0, sizeof(front->resv));
+
+	if (ipv6)
+		back = nfp_net_tls_set_ipv6((void *)skb->data, sk, direction);
+	else
+		back = nfp_net_tls_set_ipv4((void *)skb->data, sk, direction);
+
+	nfp_net_tls_set_l4(front, back, sk, direction);
+
+	back->counter = 0;
+	back->tcp_seq = cpu_to_be32(start_offload_tcp_sn);
+
+	tls_ci = (struct tls12_crypto_info_aes_gcm_128 *)crypto_info;
+	memcpy(back->key, tls_ci->key, TLS_CIPHER_AES_GCM_128_KEY_SIZE);
+	memset(&back->key[TLS_CIPHER_AES_GCM_128_KEY_SIZE / 4], 0,
+	       sizeof(back->key) - TLS_CIPHER_AES_GCM_128_KEY_SIZE);
+	memcpy(back->iv, tls_ci->iv, TLS_CIPHER_AES_GCM_128_IV_SIZE);
+	memcpy(&back->salt, tls_ci->salt, TLS_CIPHER_AES_GCM_128_SALT_SIZE);
+	memcpy(back->rec_no, tls_ci->rec_seq, sizeof(tls_ci->rec_seq));
+
+	err = nfp_ccm_mbox_communicate(nn, skb, NFP_CCM_TYPE_CRYPTO_ADD,
+				       sizeof(*reply), sizeof(*reply));
+	if (err) {
+		nn_dp_warn(&nn->dp, "failed to add TLS: %d\n", err);
+		/* communicate frees skb on error */
+		goto err_conn_remove;
+	}
+
+	reply = (void *)skb->data;
+	err = -be32_to_cpu(reply->error);
+	if (err) {
+		if (err != -ENOSPC)
+			nn_dp_warn(&nn->dp,
+				   "failed to add TLS, FW replied: %d\n", err);
+		goto err_free_skb;
+	}
+
+	if (!reply->handle[0] && !reply->handle[1]) {
+		nn_dp_warn(&nn->dp, "FW returned NULL handle\n");
+		goto err_fw_remove;
+	}
+
+	ntls = tls_driver_ctx(sk, direction);
+	memcpy(ntls->fw_handle, reply->handle, sizeof(ntls->fw_handle));
+	ntls->next_seq = start_offload_tcp_sn;
+	dev_consume_skb_any(skb);
+
+	return 0;
+
+err_fw_remove:
+	nfp_net_tls_del_fw(nn, reply->handle);
+err_free_skb:
+	dev_consume_skb_any(skb);
+err_conn_remove:
+	nfp_net_tls_conn_remove(nn, direction);
+	return err;
 }
 
 static void
 nfp_net_tls_del(struct net_device *netdev, struct tls_context *tls_ctx,
 		enum tls_offload_ctx_dir direction)
 {
+	struct nfp_net *nn = netdev_priv(netdev);
+	struct nfp_net_tls_offload_ctx *ntls;
+
+	nfp_net_tls_conn_remove(nn, direction);
+
+	ntls = __tls_driver_ctx(tls_ctx, direction);
+	nfp_net_tls_del_fw(nn, ntls->fw_handle);
 }
 
 static const struct tlsdev_ops nfp_net_tls_ops = {
@@ -121,6 +414,11 @@ int nfp_net_tls_init(struct nfp_net *nn)
 	if (err)
 		return err;
 
+	if (nn->tlv_caps.crypto_ops & NFP_NET_TLS_OPCODE_MASK_TX) {
+		netdev->hw_features |= NETIF_F_HW_TLS_TX;
+		netdev->features |= NETIF_F_HW_TLS_TX;
+	}
+
 	netdev->tlsdev_ops = &nfp_net_tls_ops;
 
 	return 0;
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net.h b/drivers/net/ethernet/netronome/nfp/nfp_net.h
index 689e9e1938c8..8c1639a83fd4 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net.h
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net.h
@@ -552,7 +552,7 @@ struct nfp_net_dp {
  * @reconfig_timer:	Timer for async reading of reconfig results
  * @reconfig_in_progress_update:	Update FW is processing now (debug only)
  * @bar_lock:		vNIC config BAR access lock, protects: update,
- *			mailbox area
+ *			mailbox area, crypto TLV
  * @link_up:            Is the link up?
  * @link_status_lock:	Protects @link_* and ensures atomicity with BAR reading
  * @rx_coalesce_usecs:      RX interrupt moderation usecs delay parameter
@@ -565,6 +565,7 @@ struct nfp_net_dp {
  * @tx_bar:             Pointer to mapped TX queues
  * @rx_bar:             Pointer to mapped FL/RX queues
  * @tlv_caps:		Parsed TLV capabilities
+ * @ktls_tx_conn_cnt:	Number of offloaded kTLS TX connections
  * @mbox_cmsg:		Common Control Message via vNIC mailbox state
  * @mbox_cmsg.queue:	CCM mbox queue of pending messages
  * @mbox_cmsg.wq:	CCM mbox wait queue of waiting processes
@@ -644,6 +645,8 @@ struct nfp_net {
 
 	struct nfp_net_tlv_caps tlv_caps;
 
+	unsigned int ktls_tx_conn_cnt;
+
 	struct {
 		struct sk_buff_head queue;
 		wait_queue_head_t wq;

From patchwork Wed Jun 5 21:11:43 2019
X-Patchwork-Submitter: Jakub Kicinski
X-Patchwork-Id: 1110777
From: Jakub Kicinski
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, oss-drivers@netronome.com, alexei.starovoitov@gmail.com, Jakub Kicinski, Dirk van der Merwe
Subject: [PATCH net-next 13/13] nfp: tls: add basic statistics
Date: Wed, 5 Jun 2019 14:11:43 -0700
Message-Id: <20190605211143.29689-14-jakub.kicinski@netronome.com>
In-Reply-To: <20190605211143.29689-1-jakub.kicinski@netronome.com>

Count TX TLS packets: successfully offloaded, out of order (encrypted
by the fallback), and dropped because the fallback could not provide
the record info. Make sure the RX and TX-completion statistics don't
share cache lines with the TX-path ones as much as possible; with the
TLS stats added they are no longer reasonably aligned.

Signed-off-by: Dirk van der Merwe
Signed-off-by: Jakub Kicinski
---
 .../net/ethernet/netronome/nfp/crypto/tls.c  |  6 +++-
 drivers/net/ethernet/netronome/nfp/nfp_net.h | 23 ++++++++++++--
 .../ethernet/netronome/nfp/nfp_net_common.c  | 31 +++++++++++++++----
 .../ethernet/netronome/nfp/nfp_net_ethtool.c | 16 ++++++++--
 4 files changed, 64 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/crypto/tls.c b/drivers/net/ethernet/netronome/nfp/crypto/tls.c
index 3e079c8469a2..c638223e9f60 100644
--- a/drivers/net/ethernet/netronome/nfp/crypto/tls.c
+++ b/drivers/net/ethernet/netronome/nfp/crypto/tls.c
@@ -324,9 +324,13 @@ nfp_net_tls_add(struct net_device *netdev, struct sock *sk,
 	reply = (void *)skb->data;
 	err = -be32_to_cpu(reply->error);
 	if (err) {
-		if (err != -ENOSPC)
+		if (err == -ENOSPC) {
+			if (!atomic_fetch_inc(&nn->ktls_no_space))
+				nn_info(nn, "HW TLS table full\n");
+		} else {
 			nn_dp_warn(&nn->dp,
 				   "failed to add TLS, FW replied: %d\n", err);
+		}
 		goto err_free_skb;
 	}
 
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net.h b/drivers/net/ethernet/netronome/nfp/nfp_net.h
index 8c1639a83fd4..661fa5941b91 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net.h
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net.h
@@ -12,6 +12,7 @@
 #ifndef _NFP_NET_H_
 #define _NFP_NET_H_
 
+#include
 #include
 #include
 #include
@@ -373,6 +374,11 @@ struct nfp_net_rx_ring {
  * @hw_csum_tx_inner:	Counter of inner TX checksum offload requests
  * @tx_gather:		Counter of packets with Gather DMA
  * @tx_lso:		Counter of LSO packets sent
+ * @hw_tls_tx:	Counter of TLS packets sent with crypto offloaded to HW
+ * @tls_tx_fallback:	Counter of TLS packets sent which had to be encrypted
+ *			by the fallback path because packets came out of order
+ * @tls_tx_no_fallback:	Counter of TLS packets not sent because the fallback
+ *			path could not encrypt them
  * @tx_errors:		How many TX errors were encountered
  * @tx_busy:            How often was TX busy (no space)?
  * @rx_replace_buf_alloc_fail:	Counter of RX buffer allocation failures
@@ -410,21 +416,28 @@ struct nfp_net_r_vector {
 	u64 hw_csum_rx_inner_ok;
 	u64 hw_csum_rx_complete;
 
+	u64 hw_csum_rx_error;
+	u64 rx_replace_buf_alloc_fail;
+
 	struct nfp_net_tx_ring *xdp_ring;
 
 	struct u64_stats_sync tx_sync;
 	u64 tx_pkts;
 	u64 tx_bytes;
-	u64 hw_csum_tx;
+
+	u64 ____cacheline_aligned_in_smp hw_csum_tx;
 	u64 hw_csum_tx_inner;
 	u64 tx_gather;
 	u64 tx_lso;
+	u64 hw_tls_tx;
 
-	u64 hw_csum_rx_error;
-	u64 rx_replace_buf_alloc_fail;
+	u64 tls_tx_fallback;
+	u64 tls_tx_no_fallback;
 	u64 tx_errors;
 	u64 tx_busy;
 
+	/* Cold data follows */
+
 	u32 irq_vector;
 	irq_handler_t handler;
 	char name[IFNAMSIZ + 8];
@@ -566,6 +579,8 @@ struct nfp_net_dp {
  * @rx_bar:             Pointer to mapped FL/RX queues
  * @tlv_caps:		Parsed TLV capabilities
  * @ktls_tx_conn_cnt:	Number of offloaded kTLS TX connections
+ * @ktls_no_space:	Counter of firmware rejecting kTLS connection due to
+ *			lack of space
  * @mbox_cmsg:		Common Control Message via vNIC mailbox state
  * @mbox_cmsg.queue:	CCM mbox queue of pending messages
  * @mbox_cmsg.wq:	CCM mbox wait queue of waiting processes
@@ -647,6 +662,8 @@ struct nfp_net {
 
 	unsigned int ktls_tx_conn_cnt;
 
+	atomic_t ktls_no_space;
+
 	struct {
 		struct sk_buff_head queue;
 		wait_queue_head_t wq;
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
index 52f20f191eed..e221847d9a3e 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
@@ -804,8 +804,8 @@ static void nfp_net_tx_csum(struct nfp_net_dp *dp,
 
 #ifdef CONFIG_TLS_DEVICE
 static struct sk_buff *
-nfp_net_tls_tx(struct nfp_net_dp *dp, struct sk_buff *skb, u64 *tls_handle,
-	       int *nr_frags)
+nfp_net_tls_tx(struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec,
+	       struct sk_buff *skb, u64 *tls_handle, int *nr_frags)
 {
 	struct nfp_net_tls_offload_ctx *ntls;
 	struct sk_buff *nskb;
@@ -824,15 +824,26 @@ nfp_net_tls_tx(struct nfp_net_dp *dp, struct sk_buff *skb, u64 *tls_handle,
 	if (!datalen)
 		return skb;
 
+	u64_stats_update_begin(&r_vec->tx_sync);
+	r_vec->tls_tx_fallback++;
+	u64_stats_update_end(&r_vec->tx_sync);
+
 	nskb = tls_encrypt_skb(skb);
-	if (!nskb)
+	if (!nskb) {
+		u64_stats_update_begin(&r_vec->tx_sync);
+		r_vec->tls_tx_no_fallback++;
+		u64_stats_update_end(&r_vec->tx_sync);
 		return NULL;
+	}
 	/* encryption wasn't necessary */
 	if (nskb == skb)
 		return skb;
 	/* we don't re-check ring space */
 	if (unlikely(skb_is_nonlinear(nskb))) {
 		nn_dp_warn(dp, "tls_encrypt_skb() produced fragmented frame\n");
+		u64_stats_update_begin(&r_vec->tx_sync);
+		r_vec->tx_errors++;
+		u64_stats_update_end(&r_vec->tx_sync);
 		dev_kfree_skb_any(nskb);
 		return NULL;
 	}
@@ -845,6 +856,12 @@ nfp_net_tls_tx(struct nfp_net_dp *dp, struct sk_buff *skb, u64 *tls_handle,
 		return nskb;
 	}
 
+	if (datalen) {
+		u64_stats_update_begin(&r_vec->tx_sync);
+		r_vec->hw_tls_tx++;
+		u64_stats_update_end(&r_vec->tx_sync);
+	}
+
 	memcpy(tls_handle, ntls->fw_handle, sizeof(ntls->fw_handle));
 	ntls->next_seq += datalen;
 	return skb;
@@ -944,9 +961,11 @@ static int nfp_net_tx(struct sk_buff *skb, struct net_device *netdev)
 	}
 
 #ifdef CONFIG_TLS_DEVICE
-	skb = nfp_net_tls_tx(dp, skb, &tls_handle, &nr_frags);
-	if (unlikely(!skb))
-		goto err_flush;
+	skb = nfp_net_tls_tx(dp, r_vec, skb, &tls_handle, &nr_frags);
+	if (unlikely(!skb)) {
+		nfp_net_tx_xmit_more_flush(tx_ring);
+		return NETDEV_TX_OK;
+	}
 #endif
 
 	md_bytes = nfp_net_prep_tx_meta(skb, tls_handle);
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
index 851e31e0ba8e..3a8e1af7042d 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
@@ -150,8 +150,9 @@ static const struct nfp_et_stat nfp_mac_et_stats[] = {
 
 #define NN_ET_GLOBAL_STATS_LEN ARRAY_SIZE(nfp_net_et_stats)
 #define NN_ET_SWITCH_STATS_LEN 9
-#define NN_RVEC_GATHER_STATS 9
+#define NN_RVEC_GATHER_STATS 12
 #define NN_RVEC_PER_Q_STATS 3
+#define NN_CTRL_PATH_STATS 1
 
 #define SFP_SFF_REV_COMPLIANCE 1
 
@@ -423,7 +424,8 @@ static unsigned int nfp_vnic_get_sw_stats_count(struct net_device *netdev)
 {
 	struct nfp_net *nn = netdev_priv(netdev);
 
-	return NN_RVEC_GATHER_STATS + nn->max_r_vecs * NN_RVEC_PER_Q_STATS;
+	return NN_RVEC_GATHER_STATS + nn->max_r_vecs * NN_RVEC_PER_Q_STATS +
+		NN_CTRL_PATH_STATS;
 }
 
 static u8 *nfp_vnic_get_sw_stats_strings(struct net_device *netdev, u8 *data)
@@ -446,6 +448,11 @@ static u8 *nfp_vnic_get_sw_stats_strings(struct net_device *netdev, u8 *data)
 	data = nfp_pr_et(data, "hw_tx_inner_csum");
 	data = nfp_pr_et(data, "tx_gather");
 	data = nfp_pr_et(data, "tx_lso");
+	data = nfp_pr_et(data, "tx_tls_encrypted");
+	data = nfp_pr_et(data, "tx_tls_ooo");
+	data = nfp_pr_et(data, "tx_tls_drop_no_sync_data");
+
+	data = nfp_pr_et(data, "hw_tls_no_space");
 
 	return data;
 }
@@ -478,6 +485,9 @@ static u64 *nfp_vnic_get_sw_stats(struct net_device *netdev, u64 *data)
 			tmp[6] = nn->r_vecs[i].hw_csum_tx_inner;
 			tmp[7] = nn->r_vecs[i].tx_gather;
 			tmp[8] = nn->r_vecs[i].tx_lso;
+			tmp[9] = nn->r_vecs[i].hw_tls_tx;
+			tmp[10] = nn->r_vecs[i].tls_tx_fallback;
+			tmp[11] = nn->r_vecs[i].tls_tx_no_fallback;
 		} while (u64_stats_fetch_retry(&nn->r_vecs[i].tx_sync, start));
 
 		data += NN_RVEC_PER_Q_STATS;
@@ -489,6 +499,8 @@ static u64 *nfp_vnic_get_sw_stats(struct net_device *netdev, u64 *data)
 	for (j = 0; j < NN_RVEC_GATHER_STATS; j++)
 		*data++ = gathered_stats[j];
 
+	*data++ = atomic_read(&nn->ktls_no_space);
+
 	return data;
 }