From patchwork Thu Feb 22 17:50:45 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atul Gupta X-Patchwork-Id: 876781 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 3znMNG2gmTz9ryG for ; Fri, 23 Feb 2018 04:51:10 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S933640AbeBVRvI (ORCPT ); Thu, 22 Feb 2018 12:51:08 -0500 Received: from stargate.chelsio.com ([12.32.117.8]:42911 "EHLO stargate.chelsio.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S933495AbeBVRvA (ORCPT ); Thu, 22 Feb 2018 12:51:00 -0500 Received: from chumthang.asicdesigners.com (indra-ipmi.blr.asicdesigners.com [10.193.186.96] (may be forged)) by stargate.chelsio.com (8.13.8/8.13.8) with ESMTP id w1MHolEc014660; Thu, 22 Feb 2018 09:50:48 -0800 From: Atul Gupta To: davejwatson@fb.com, davem@davemloft.net, herbert@gondor.apana.org.au Cc: sd@queasysnail.net, linux-crypto@vger.kernel.org, netdev@vger.kernel.org, ganeshgr@chelsio.com Subject: [Crypto v7 01/12] tls: tls_device struct to register TLS drivers Date: Thu, 22 Feb 2018 23:20:45 +0530 Message-Id: <1519321845-22815-1-git-send-email-atul.gupta@chelsio.com> X-Mailer: git-send-email 1.8.3.1 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org tls_device structure to register Inline TLS drivers with net/tls Signed-off-by: Atul Gupta --- include/net/tls.h | 23 +++++++++++++++++++++++ 1 file changed, 23 insertions(+) diff --git a/include/net/tls.h b/include/net/tls.h index 4913430..e315bf9 100644 --- a/include/net/tls.h +++ b/include/net/tls.h @@ -55,6 +55,27 @@ #define TLS_RECORD_TYPE_DATA 0x17 #define TLS_AAD_SPACE_SIZE 13 +#define TLS_DEVICE_NAME_MAX 32 + +enum { + TLS_BASE_TX, + TLS_SW_TX, + TLS_FULL_HW, /* TLS record processed Inline */ + TLS_NUM_CONFIG, +}; +extern struct proto tls_prots[TLS_NUM_CONFIG]; + +struct tls_device { + char name[TLS_DEVICE_NAME_MAX]; + struct list_head dev_list; + + /* netdev present in registered inline tls driver */ + int (*netdev)(struct tls_device *device, + struct net_device *netdev); + int (*feature)(struct tls_device *device); + int (*hash)(struct tls_device *device, struct sock *sk); + void (*unhash)(struct tls_device *device, struct sock *sk); +}; struct tls_sw_context { struct crypto_aead *aead_send; @@ -256,5 +277,7 @@ static inline struct tls_offload_context *tls_offload_ctx( int tls_proccess_cmsg(struct sock *sk, struct msghdr *msg, unsigned char *record_type); +void tls_register_device(struct tls_device *device); +void tls_unregister_device(struct tls_device *device); #endif /* _TLS_OFFLOAD_H */ From patchwork Thu Feb 22 17:51:08 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atul Gupta X-Patchwork-Id: 876782 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; 
envelope-from=netdev-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 3znMNX03Rfz9ryG for ; Fri, 23 Feb 2018 04:51:24 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S933647AbeBVRvV (ORCPT ); Thu, 22 Feb 2018 12:51:21 -0500 Received: from stargate.chelsio.com ([12.32.117.8]:3920 "EHLO stargate.chelsio.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S933495AbeBVRvU (ORCPT ); Thu, 22 Feb 2018 12:51:20 -0500 Received: from chumthang.asicdesigners.com (indra-ipmi.blr.asicdesigners.com [10.193.186.96] (may be forged)) by stargate.chelsio.com (8.13.8/8.13.8) with ESMTP id w1MHp9pq014663; Thu, 22 Feb 2018 09:51:10 -0800 From: Atul Gupta To: davejwatson@fb.com, davem@davemloft.net, herbert@gondor.apana.org.au Cc: sd@queasysnail.net, linux-crypto@vger.kernel.org, netdev@vger.kernel.org, ganeshgr@chelsio.com Subject: [Crypto v7 02/12] ethtool: enable Inline TLS in HW Date: Thu, 22 Feb 2018 23:21:08 +0530 Message-Id: <1519321868-22863-1-git-send-email-atul.gupta@chelsio.com> X-Mailer: git-send-email 1.8.3.1 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Signed-off-by: Atul Gupta --- include/linux/netdev_features.h | 2 ++ net/core/ethtool.c | 1 + 2 files changed, 3 insertions(+) diff --git a/include/linux/netdev_features.h b/include/linux/netdev_features.h index db84c51..aacabe2 100644 --- a/include/linux/netdev_features.h +++ b/include/linux/netdev_features.h @@ -79,6 +79,7 @@ enum { NETIF_F_RX_UDP_TUNNEL_PORT_BIT, /* Offload of RX port for UDP tunnels */ NETIF_F_GRO_HW_BIT, /* Hardware Generic receive offload */ + NETIF_F_HW_TLS_INLINE_BIT, /* Offload TLS record */ /* * Add your fresh new feature above and remember to update @@ -145,6 +146,7 @@ enum { #define NETIF_F_HW_ESP __NETIF_F(HW_ESP) #define NETIF_F_HW_ESP_TX_CSUM __NETIF_F(HW_ESP_TX_CSUM) #define NETIF_F_RX_UDP_TUNNEL_PORT __NETIF_F(RX_UDP_TUNNEL_PORT) +#define NETIF_F_HW_TLS_INLINE __NETIF_F(HW_TLS_INLINE) #define for_each_netdev_feature(mask_addr, bit) \ for_each_set_bit(bit, (unsigned long *)mask_addr, NETDEV_FEATURE_COUNT) diff --git a/net/core/ethtool.c b/net/core/ethtool.c index 494e6a5..ab16781 100644 --- a/net/core/ethtool.c +++ b/net/core/ethtool.c @@ -107,6 +107,7 @@ int ethtool_op_get_ts_info(struct net_device *dev, struct ethtool_ts_info *info) [NETIF_F_HW_ESP_BIT] = "esp-hw-offload", [NETIF_F_HW_ESP_TX_CSUM_BIT] = "esp-tx-csum-hw-offload", [NETIF_F_RX_UDP_TUNNEL_PORT_BIT] = "rx-udp_tunnel-port-offload", + [NETIF_F_HW_TLS_INLINE_BIT] = "tls-inline", }; static const char From patchwork Thu Feb 22 17:51:30 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atul Gupta X-Patchwork-Id: 876783 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 3znMP34szBz9ryG for ; Fri, 23 Feb 2018 04:51:51 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S933603AbeBVRvt (ORCPT ); Thu, 22 Feb 2018 12:51:49 -0500 Received: from stargate.chelsio.com ([12.32.117.8]:30069 
"EHLO stargate.chelsio.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S933460AbeBVRvs (ORCPT ); Thu, 22 Feb 2018 12:51:48 -0500 Received: from chumthang.asicdesigners.com (indra-ipmi.blr.asicdesigners.com [10.193.186.96] (may be forged)) by stargate.chelsio.com (8.13.8/8.13.8) with ESMTP id w1MHpWdG014675; Thu, 22 Feb 2018 09:51:32 -0800 From: Atul Gupta To: davejwatson@fb.com, davem@davemloft.net, herbert@gondor.apana.org.au Cc: sd@queasysnail.net, linux-crypto@vger.kernel.org, netdev@vger.kernel.org, ganeshgr@chelsio.com Subject: [Crypto v7 03/12] tls: support for inline tls Date: Thu, 22 Feb 2018 23:21:30 +0530 Message-Id: <1519321890-22919-1-git-send-email-atul.gupta@chelsio.com> X-Mailer: git-send-email 1.8.3.1 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Facility to register Inline TLS drivers to net/tls. Setup TLS_FULL_HW prot to listen on offload device. Cases handled 1. Inline TLS device exists, setup prot for TLS_FULL_HW 2. Atleast one Inline TLS exists, sets TLS_FULL_HW. If non-inline capable device establish connection, move to TLS_SW_TX 3. default mode TLS_SW_TX continues Signed-off-by: Atul Gupta --- net/tls/tls_main.c | 123 ++++++++++++++++++++++++++++++++++++++++++++++++++--- 1 file changed, 116 insertions(+), 7 deletions(-) diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c index b0d5fce..34f8781 100644 --- a/net/tls/tls_main.c +++ b/net/tls/tls_main.c @@ -38,6 +38,7 @@ #include #include #include +#include #include @@ -45,13 +46,9 @@ MODULE_DESCRIPTION("Transport Layer Security Support"); MODULE_LICENSE("Dual BSD/GPL"); -enum { - TLS_BASE_TX, - TLS_SW_TX, - TLS_NUM_CONFIG, -}; - -static struct proto tls_prots[TLS_NUM_CONFIG]; +static LIST_HEAD(device_list); +static DEFINE_MUTEX(device_mutex); +struct proto tls_prots[TLS_NUM_CONFIG]; static inline void update_sk_prot(struct sock *sk, struct tls_context *ctx) { @@ -260,6 +257,37 @@ static void tls_sk_proto_close(struct sock *sk, long timeout) sk_proto_close(sk, timeout); } +static struct net_device *get_netdev(struct sock *sk) +{ + struct inet_sock *inet = inet_sk(sk); + struct net_device *netdev; + + netdev = dev_get_by_index(sock_net(sk), inet->cork.fl.flowi_oif); + return netdev; +} + +static int tls_offload_dev_absent(struct sock *sk) +{ + struct net_device *netdev; + struct tls_device *dev; + int rc = 0; + + netdev = get_netdev(sk); + if (!netdev) + return -EINVAL; + + mutex_lock(&device_mutex); + list_for_each_entry(dev, &device_list, dev_list) { + if (dev->netdev && dev->netdev(dev, netdev)) { + rc = -EEXIST; + break; + } + } + mutex_unlock(&device_mutex); + dev_put(netdev); + return rc; +} + static int do_tls_getsockopt_tx(struct sock *sk, char __user *optval, int __user *optlen) { @@ -403,6 +431,15 @@ static int do_tls_setsockopt_tx(struct sock *sk, char __user *optval, goto err_crypto_info; } + rc = tls_offload_dev_absent(sk); + if (rc == -EINVAL) { + goto out; + } else if (rc == -EEXIST) { + /* Retain HW unhash for cleanup and move to SW Tx */ + sk->sk_prot[TLS_BASE_TX].unhash = + sk->sk_prot[TLS_FULL_HW].unhash; + } + /* currently SW is default, we will have ethtool in future */ rc = tls_set_sw_offload(sk, ctx); tx_conf = TLS_SW_TX; @@ -450,6 +487,54 @@ static int tls_setsockopt(struct sock *sk, int level, int optname, return do_tls_setsockopt(sk, optname, optval, optlen); } +static int tls_hw_prot(struct sock *sk) +{ + struct tls_context *ctx = tls_get_ctx(sk); + struct tls_device *dev; + + mutex_lock(&device_mutex); + 
list_for_each_entry(dev, &device_list, dev_list) { + if (dev->feature && dev->feature(dev)) { + ctx->tx_conf = TLS_FULL_HW; + update_sk_prot(sk, ctx); + break; + } + } + mutex_unlock(&device_mutex); + return ctx->tx_conf; +} + +static void tls_hw_unhash(struct sock *sk) +{ + struct tls_device *dev; + + mutex_lock(&device_mutex); + list_for_each_entry(dev, &device_list, dev_list) { + if (dev->unhash) + dev->unhash(dev, sk); + } + mutex_unlock(&device_mutex); + sk->sk_prot->unhash(sk); +} + +static int tls_hw_hash(struct sock *sk) +{ + struct tls_device *dev; + int err; + + err = sk->sk_prot->hash(sk); + mutex_lock(&device_mutex); + list_for_each_entry(dev, &device_list, dev_list) { + if (dev->hash) + err |= dev->hash(dev, sk); + } + mutex_unlock(&device_mutex); + + if (err) + tls_hw_unhash(sk); + return err; +} + static int tls_init(struct sock *sk) { struct inet_connection_sock *icsk = inet_csk(sk); @@ -477,6 +562,9 @@ static int tls_init(struct sock *sk) ctx->sk_proto_close = sk->sk_prot->close; ctx->tx_conf = TLS_BASE_TX; + if (tls_hw_prot(sk) == TLS_FULL_HW) + goto out; + update_sk_prot(sk, ctx); out: return rc; @@ -500,7 +588,27 @@ static void build_protos(struct proto *prot, struct proto *base) prot[TLS_SW_TX] = prot[TLS_BASE_TX]; prot[TLS_SW_TX].sendmsg = tls_sw_sendmsg; prot[TLS_SW_TX].sendpage = tls_sw_sendpage; + + prot[TLS_FULL_HW] = prot[TLS_BASE_TX]; + prot[TLS_FULL_HW].hash = tls_hw_hash; + prot[TLS_FULL_HW].unhash = tls_hw_unhash; +} + +void tls_register_device(struct tls_device *device) +{ + mutex_lock(&device_mutex); + list_add_tail(&device->dev_list, &device_list); + mutex_unlock(&device_mutex); +} +EXPORT_SYMBOL(tls_register_device); + +void tls_unregister_device(struct tls_device *device) +{ + mutex_lock(&device_mutex); + list_del(&device->dev_list); + mutex_unlock(&device_mutex); } +EXPORT_SYMBOL(tls_unregister_device); static int __init tls_register(void) { @@ -510,6 +618,7 @@ static int __init tls_register(void) return 0; } +EXPORT_SYMBOL(tls_prots); static void __exit tls_unregister(void) { From patchwork Thu Feb 22 17:52:00 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atul Gupta X-Patchwork-Id: 876784 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 3znMPp1DKKz9sW8 for ; Fri, 23 Feb 2018 04:52:30 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S933560AbeBVRwY (ORCPT ); Thu, 22 Feb 2018 12:52:24 -0500 Received: from stargate.chelsio.com ([12.32.117.8]:55573 "EHLO stargate.chelsio.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S933460AbeBVRwV (ORCPT ); Thu, 22 Feb 2018 12:52:21 -0500 Received: from chumthang.asicdesigners.com (indra-ipmi.blr.asicdesigners.com [10.193.186.96] (may be forged)) by stargate.chelsio.com (8.13.8/8.13.8) with ESMTP id w1MHqA2x014680; Thu, 22 Feb 2018 09:52:11 -0800 From: Atul Gupta To: davejwatson@fb.com, davem@davemloft.net, herbert@gondor.apana.org.au Cc: sd@queasysnail.net, linux-crypto@vger.kernel.org, netdev@vger.kernel.org, ganeshgr@chelsio.com Subject: [Crypto v7 04/12] chtls: structure and macro definiton Date: Thu, 22 Feb 
2018 23:22:00 +0530 Message-Id: <1519321928-22995-1-git-send-email-atul.gupta@chelsio.com> X-Mailer: git-send-email 1.8.3.1 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Inline TLS state, connection management. Supporting macros definition. Signed-off-by: Atul Gupta --- drivers/crypto/chelsio/chtls/chtls.h | 487 ++++++++++++++++++++++++++++++++ drivers/crypto/chelsio/chtls/chtls_cm.h | 202 +++++++++++++ 2 files changed, 689 insertions(+) create mode 100644 drivers/crypto/chelsio/chtls/chtls.h create mode 100644 drivers/crypto/chelsio/chtls/chtls_cm.h diff --git a/drivers/crypto/chelsio/chtls/chtls.h b/drivers/crypto/chelsio/chtls/chtls.h new file mode 100644 index 0000000..3ae7145 --- /dev/null +++ b/drivers/crypto/chelsio/chtls/chtls.h @@ -0,0 +1,487 @@ +/* + * Copyright (c) 2016 Chelsio Communications, Inc. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ + +#ifndef __CHTLS_H__ +#define __CHTLS_H__ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "t4fw_api.h" +#include "t4_msg.h" +#include "cxgb4.h" +#include "cxgb4_uld.h" +#include "l2t.h" +#include "chcr_algo.h" +#include "chcr_core.h" +#include "chcr_crypto.h" + +#define CIPHER_BLOCK_SIZE 16 +#define MAX_IVS_PAGE 256 +#define TLS_KEY_CONTEXT_SZ 64 +#define TLS_HEADER_LENGTH 5 +#define SCMD_CIPH_MODE_AES_GCM 2 +#define GCM_TAG_SIZE 16 +#define AEAD_EXPLICIT_DATA_SIZE 8 +/* Any MFS size should work and come from openssl */ +#define TLS_MFS 16384 + +#define SOCK_INLINE (31) +#define RSS_HDR sizeof(struct rss_header) + +enum { + CHTLS_KEY_CONTEXT_DSGL, + CHTLS_KEY_CONTEXT_IMM, + CHTLS_KEY_CONTEXT_DDR, +}; + +enum { + CHTLS_LISTEN_START, + CHTLS_LISTEN_STOP, +}; + +/* Flags for return value of CPL message handlers */ +enum { + CPL_RET_BUF_DONE = 1, /* buffer processing done */ + CPL_RET_BAD_MSG = 2, /* bad CPL message */ + CPL_RET_UNKNOWN_TID = 4 /* unexpected unknown TID */ +}; + +#define TLS_RCV_ST_READ_HEADER 0xF0 +#define TLS_RCV_ST_READ_BODY 0xF1 +#define TLS_RCV_ST_READ_DONE 0xF2 +#define TLS_RCV_ST_READ_NB 0xF3 + +#define RSPQ_HASH_BITS 5 +#define LISTEN_INFO_HASH_SIZE 32 +struct listen_info { + struct listen_info *next; /* Link to next entry */ + struct sock *sk; /* The listening socket */ + unsigned int stid; /* The server TID */ +}; + +enum { + T4_LISTEN_START_PENDING, + T4_LISTEN_STARTED +}; + +enum csk_flags { + CSK_CALLBACKS_CHKD, /* socket callbacks have been sanitized */ + CSK_ABORT_REQ_RCVD, /* received one ABORT_REQ_RSS message */ + CSK_TX_MORE_DATA, /* sending ULP data; don't set SHOVE bit */ + CSK_TX_WAIT_IDLE, /* suspend Tx until in-flight data is ACKed */ + CSK_ABORT_SHUTDOWN, /* shouldn't send more abort requests */ + CSK_ABORT_RPL_PENDING, /* expecting an abort reply */ + CSK_CLOSE_CON_REQUESTED,/* we've sent a close_conn_req */ + CSK_TX_DATA_SENT, /* sent a TX_DATA WR on this connection */ + CSK_TX_FAILOVER, /* Tx traffic failing over */ + CSK_UPDATE_RCV_WND, /* Need to update rcv window */ + CSK_RST_ABORTED, /* outgoing RST was aborted */ + CSK_TLS_HANDSHK, /* TLS Handshake */ +}; + +struct listen_ctx { + struct sock *lsk; + struct chtls_dev *cdev; + u32 state; +}; + +struct key_map { + unsigned long *addr; + unsigned int start; + unsigned int available; + unsigned int size; + spinlock_t lock; /* 
lock for key id request from map */ +} __packed; + +struct tls_scmd { + u32 seqno_numivs; + u32 ivgen_hdrlen; +}; + +struct chtls_dev { + struct tls_device tlsdev; + struct list_head list; + struct cxgb4_lld_info *lldi; + struct pci_dev *pdev; + struct listen_info *listen_hash_tab[LISTEN_INFO_HASH_SIZE]; + spinlock_t listen_lock; /* lock for listen list */ + struct net_device **ports; + struct tid_info *tids; + unsigned int pfvf; + const unsigned short *mtus; + + spinlock_t aidr_lock ____cacheline_aligned_in_smp; + struct idr aidr; /* ATID id space */ + struct idr hwtid_idr; + struct idr stid_idr; + + spinlock_t idr_lock ____cacheline_aligned_in_smp; + + struct net_device *egr_dev[NCHAN * 2]; + struct sk_buff *rspq_skb_cache[1 << RSPQ_HASH_BITS]; + struct sk_buff *askb; + + struct sk_buff_head deferq; + struct work_struct deferq_task; + + struct list_head list_node; + struct list_head rcu_node; + struct list_head na_node; + unsigned int send_page_order; + struct key_map kmap; +}; + +struct chtls_hws { + struct sk_buff_head sk_recv_queue; + u8 txqid; + u8 ofld; + u16 type; + u16 rstate; + u16 keyrpl; + u16 pldlen; + u16 rcvpld; + u16 compute; + u16 expansion; + u16 keylen; + u16 pdus; + u16 adjustlen; + u16 ivsize; + u16 txleft; + u32 mfs; + s32 txkey; + s32 rxkey; + u32 fcplenmax; + u32 copied_seq; + u64 tx_seq_no; + struct tls_scmd scmd; + struct tls12_crypto_info_aes_gcm_128 crypto_info; +}; + +struct chtls_sock { + struct sock *sk; + struct chtls_dev *cdev; + struct l2t_entry *l2t_entry; /* pointer to the L2T entry */ + struct net_device *egress_dev; /* TX_CHAN for act open retry */ + + struct sk_buff_head txq; + struct sk_buff_head ooo_queue; + struct sk_buff *wr_skb_head; + struct sk_buff *wr_skb_tail; + struct sk_buff *ctrl_skb_cache; + struct sk_buff *txdata_skb_cache; /* abort path messages */ + struct kref kref; + unsigned long flags; + u32 opt2; + u32 wr_credits; + u32 wr_unacked; + u32 wr_max_credits; + u32 wr_nondata; + u32 hwtid; /* TCP Control Block ID */ + u32 txq_idx; + u32 rss_qid; + u32 tid; + u32 idr; + u32 mss; + u32 ulp_mode; + u32 tx_chan; + u32 rx_chan; + u32 sndbuf; + u32 txplen_max; + u32 mtu_idx; /* MTU table index */ + u32 smac_idx; + u8 port_id; + u8 tos; + u16 resv2; + u32 delack_mode; + u32 delack_seq; + + void *passive_reap_next; /* placeholder for passive */ + struct chtls_hws tlshws; +}; + +struct tls_hdr { + u8 type; + u16 version; + u16 length; +} __packed; + +struct tlsrx_cmp_hdr { + u8 type; + u16 version; + u16 length; + + u64 tls_seq; + u16 reserved1; + u8 res_to_mac_error; +} __packed; + +/* res_to_mac_error fields */ +#define TLSRX_HDR_PKT_INT_ERROR_S 4 +#define TLSRX_HDR_PKT_INT_ERROR_M 0x1 +#define TLSRX_HDR_PKT_INT_ERROR_V(x) \ + ((x) << TLSRX_HDR_PKT_INT_ERROR_S) +#define TLSRX_HDR_PKT_INT_ERROR_G(x) \ + (((x) >> TLSRX_HDR_PKT_INT_ERROR_S) & TLSRX_HDR_PKT_INT_ERROR_M) +#define TLSRX_HDR_PKT_INT_ERROR_F TLSRX_HDR_PKT_INT_ERROR_V(1U) + +#define TLSRX_HDR_PKT_SPP_ERROR_S 3 +#define TLSRX_HDR_PKT_SPP_ERROR_M 0x1 +#define TLSRX_HDR_PKT_SPP_ERROR_V(x) ((x) << TLSRX_HDR_PKT_SPP_ERROR) +#define TLSRX_HDR_PKT_SPP_ERROR_G(x) \ + (((x) >> TLSRX_HDR_PKT_SPP_ERROR_S) & TLSRX_HDR_PKT_SPP_ERROR_M) +#define TLSRX_HDR_PKT_SPP_ERROR_F TLSRX_HDR_PKT_SPP_ERROR_V(1U) + +#define TLSRX_HDR_PKT_CCDX_ERROR_S 2 +#define TLSRX_HDR_PKT_CCDX_ERROR_M 0x1 +#define TLSRX_HDR_PKT_CCDX_ERROR_V(x) ((x) << TLSRX_HDR_PKT_CCDX_ERROR_S) +#define TLSRX_HDR_PKT_CCDX_ERROR_G(x) \ + (((x) >> TLSRX_HDR_PKT_CCDX_ERROR_S) & TLSRX_HDR_PKT_CCDX_ERROR_M) +#define TLSRX_HDR_PKT_CCDX_ERROR_F 
TLSRX_HDR_PKT_CCDX_ERROR_V(1U) + +#define TLSRX_HDR_PKT_PAD_ERROR_S 1 +#define TLSRX_HDR_PKT_PAD_ERROR_M 0x1 +#define TLSRX_HDR_PKT_PAD_ERROR_V(x) ((x) << TLSRX_HDR_PKT_PAD_ERROR_S) +#define TLSRX_HDR_PKT_PAD_ERROR_G(x) \ + (((x) >> TLSRX_HDR_PKT_PAD_ERROR_S) & TLSRX_HDR_PKT_PAD_ERROR_M) +#define TLSRX_HDR_PKT_PAD_ERROR_F TLSRX_HDR_PKT_PAD_ERROR_V(1U) + +#define TLSRX_HDR_PKT_MAC_ERROR_S 0 +#define TLSRX_HDR_PKT_MAC_ERROR_M 0x1 +#define TLSRX_HDR_PKT_MAC_ERROR_V(x) ((x) << TLSRX_HDR_PKT_MAC_ERROR) +#define TLSRX_HDR_PKT_MAC_ERROR_G(x) \ + (((x) >> S_TLSRX_HDR_PKT_MAC_ERROR_S) & TLSRX_HDR_PKT_MAC_ERROR_M) +#define TLSRX_HDR_PKT_MAC_ERROR_F TLSRX_HDR_PKT_MAC_ERROR_V(1U) + +#define TLSRX_HDR_PKT_ERROR_M 0x1F + +struct ulp_mem_rw { + __be32 cmd; + __be32 len16; /* command length */ + __be32 dlen; /* data length in 32-byte units */ + __be32 lock_addr; +}; + +struct tls_key_wr { + __be32 op_to_compl; + __be32 flowid_len16; + __be32 ftid; + __u8 reneg_to_write_rx; + __u8 protocol; + __be16 mfs; +}; + +struct tls_key_req { + struct tls_key_wr wr; + struct ulp_mem_rw req; + struct ulptx_idata sc_imm; +}; + +/* + * This lives in skb->cb and is used to chain WRs in a linked list. + */ +struct wr_skb_cb { + struct l2t_skb_cb l2t; /* reserve space for l2t CB */ + struct sk_buff *next_wr; /* next write request */ +}; + +/* Per-skb backlog handler. Run when a socket's backlog is processed. */ +struct blog_skb_cb { + void (*backlog_rcv)(struct sock *sk, struct sk_buff *skb); + struct chtls_dev *cdev; +}; + +/* + * Similar to tcp_skb_cb but with ULP elements added to support TLS, + * etc. + */ +struct ulp_skb_cb { + struct wr_skb_cb wr; /* reserve space for write request */ + u16 flags; /* TCP-like flags */ + u8 psh; + u8 ulp_mode; /* ULP mode/submode of sk_buff */ + u32 seq; /* TCP sequence number */ + union { /* ULP-specific fields */ + struct { + u8 type; + u8 ofld; + u8 iv; + } tls; + } ulp; +}; + +#define ULP_SKB_CB(skb) ((struct ulp_skb_cb *)&((skb)->cb[0])) +#define BLOG_SKB_CB(skb) ((struct blog_skb_cb *)(skb)->cb) + +/* + * Flags for ulp_skb_cb.flags. 
+ */ +enum { + ULPCB_FLAG_NEED_HDR = 1 << 0, /* packet needs a TX_DATA_WR header */ + ULPCB_FLAG_NO_APPEND = 1 << 1, /* don't grow this skb */ + ULPCB_FLAG_BARRIER = 1 << 2, /* set TX_WAIT_IDLE after sending */ + ULPCB_FLAG_HOLD = 1 << 3, /* skb not ready for Tx yet */ + ULPCB_FLAG_COMPL = 1 << 4, /* request WR completion */ + ULPCB_FLAG_URG = 1 << 5, /* urgent data */ + ULPCB_FLAG_TLS_ND = 1 << 6, /* payload of zero length */ + ULPCB_FLAG_NO_HDR = 1 << 7, /* not a ofld wr */ +}; + +/* The ULP mode/submode of an skbuff */ +#define skb_ulp_mode(skb) (ULP_SKB_CB(skb)->ulp_mode) +#define TCP_PAGE(sk) (sk->sk_frag.page) +#define TCP_OFF(sk) (sk->sk_frag.offset) + +static inline struct chtls_dev *to_chtls_dev(struct tls_device *tlsdev) +{ + return container_of(tlsdev, struct chtls_dev, tlsdev); +} + +static inline void csk_set_flag(struct chtls_sock *csk, + enum csk_flags flag) +{ + __set_bit(flag, &csk->flags); +} + +static inline void csk_reset_flag(struct chtls_sock *csk, + enum csk_flags flag) +{ + __clear_bit(flag, &csk->flags); +} + +static inline int csk_flag(const struct sock *sk, enum csk_flags flag) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + + if (!sock_flag(sk, SOCK_INLINE)) + return 0; + return test_bit(flag, &csk->flags); +} + +static inline int csk_flag_nochk(const struct chtls_sock *csk, + enum csk_flags flag) +{ + return test_bit(flag, &csk->flags); +} + +static inline void *cplhdr(struct sk_buff *skb) +{ + return skb->data; +} + +static inline void set_queue(struct sk_buff *skb, + unsigned int queue, const struct sock *sk) +{ + skb->queue_mapping = queue; +} + +static inline int is_neg_adv(unsigned int status) +{ + return status == CPL_ERR_RTX_NEG_ADVICE || + status == CPL_ERR_KEEPALV_NEG_ADVICE || + status == CPL_ERR_PERSIST_NEG_ADVICE; +} + +static inline void process_cpl_msg(void (*fn)(struct sock *, struct sk_buff *), + struct sock *sk, + struct sk_buff *skb) +{ + skb_reset_mac_header(skb); + skb_reset_network_header(skb); + skb_reset_transport_header(skb); + + bh_lock_sock(sk); + if (unlikely(sock_owned_by_user(sk))) { + BLOG_SKB_CB(skb)->backlog_rcv = fn; + __sk_add_backlog(sk, skb); + } else { + fn(sk, skb); + } + bh_unlock_sock(sk); +} + +static inline void chtls_sock_free(struct kref *ref) +{ + struct chtls_sock *csk = container_of(ref, struct chtls_sock, + kref); + kfree(csk); +} + +static inline void __chtls_sock_put(const char *fn, struct chtls_sock *csk) +{ + kref_put(&csk->kref, chtls_sock_free); +} + +static inline void __chtls_sock_get(const char *fn, + struct chtls_sock *csk) +{ + kref_get(&csk->kref); +} + +static inline void remove_handle(struct chtls_dev *cdev, int tid) +{ + spin_lock(&cdev->aidr_lock); + idr_remove(&cdev->aidr, tid); + spin_unlock(&cdev->aidr_lock); +} + +static inline void send_or_defer(struct sock *sk, struct tcp_sock *tp, + struct sk_buff *skb, int through_l2t) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + + if (unlikely(sk->sk_state == TCP_SYN_SENT)) { + /* defer */ + __skb_queue_tail(&csk->ooo_queue, skb); + } else if (through_l2t) { + /* send through L2T */ + cxgb4_l2t_send(csk->egress_dev, skb, csk->l2t_entry); + } else { + /* send directly */ + cxgb4_ofld_send(csk->egress_dev, skb); + } +} + +typedef int (*chtls_handler_func)(struct chtls_dev *, struct sk_buff *); +extern chtls_handler_func chtls_handlers[NUM_CPL_CMDS]; +void chtls_install_cpl_ops(struct sock *sk); +int chtls_init_kmap(struct chtls_dev *cdev, struct cxgb4_lld_info *lldi); +void chtls_free_kmap(struct chtls_dev *cdev); +void 
chtls_listen_stop(struct chtls_dev *cdev, struct sock *sk); +int chtls_listen_start(struct chtls_dev *cdev, struct sock *sk); +void chtls_close(struct sock *sk, long timeout); +int chtls_disconnect(struct sock *sk, int flags); +void chtls_shutdown(struct sock *sk, int how); +void chtls_destroy_sock(struct sock *sk); +int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size); +int chtls_recvmsg(struct sock *sk, struct msghdr *msg, + size_t len, int nonblock, int flags, int *addr_len); +int chtls_sendpage(struct sock *sk, struct page *page, + int offset, size_t size, int flags); +int send_tx_flowc_wr(struct sock *sk, int compl, + u32 snd_nxt, u32 rcv_nxt); +void chtls_tcp_push(struct sock *sk, int flags); +int chtls_push_frames(struct chtls_sock *csk, int comp); +int chtls_set_tcb_tflag(struct sock *sk, unsigned int bit_pos, int val); +int chtls_setkey(struct chtls_sock *csk, u32 keylen, u32 mode); +void skb_entail(struct sock *sk, struct sk_buff *skb, int flags); +void free_tls_keyid(struct sock *sk); +#endif diff --git a/drivers/crypto/chelsio/chtls/chtls_cm.h b/drivers/crypto/chelsio/chtls/chtls_cm.h new file mode 100644 index 0000000..bc1bfd6 --- /dev/null +++ b/drivers/crypto/chelsio/chtls/chtls_cm.h @@ -0,0 +1,202 @@ +/* + * Copyright (c) 2017 Chelsio Communications, Inc. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ + +#ifndef __CHTLS_CM_H__ +#define __CHTLS_CM_H__ + +/* + * TCB settings + */ +/* 3:0 */ +#define TCB_ULP_TYPE_W 0 +#define TCB_ULP_TYPE_S 0 +#define TCB_ULP_TYPE_M 0xfULL +#define TCB_ULP_TYPE_V(x) ((x) << TCB_ULP_TYPE_S) + +/* 11:4 */ +#define TCB_ULP_RAW_W 0 +#define TCB_ULP_RAW_S 4 +#define TCB_ULP_RAW_M 0xffULL +#define TCB_ULP_RAW_V(x) ((x) << TCB_ULP_RAW_S) + +#define TF_TLS_KEY_SIZE_S 7 +#define TF_TLS_KEY_SIZE_V(x) ((x) << TF_TLS_KEY_SIZE_S) + +#define TF_TLS_CONTROL_S 2 +#define TF_TLS_CONTROL_V(x) ((x) << TF_TLS_CONTROL_S) + +#define TF_TLS_ACTIVE_S 1 +#define TF_TLS_ACTIVE_V(x) ((x) << TF_TLS_ACTIVE_S) + +#define TF_TLS_ENABLE_S 0 +#define TF_TLS_ENABLE_V(x) ((x) << TF_TLS_ENABLE_S) + +#define TF_RX_QUIESCE_S 15 +#define TF_RX_QUIESCE_V(x) ((x) << TF_RX_QUIESCE_S) + +/* + * Max receive window supported by HW in bytes. Only a small part of it can + * be set through option0, the rest needs to be set through RX_DATA_ACK. + */ +#define MAX_RCV_WND ((1U << 27) - 1) +#define MAX_MSS 65536 + +/* + * Min receive window. We want it to be large enough to accommodate receive + * coalescing, handle jumbo frames, and not trigger sender SWS avoidance. 
+ */ +#define MIN_RCV_WND (24 * 1024U) +#define LOOPBACK(x) (((x) & htonl(0xff000000)) == htonl(0x7f000000)) + +/* ulp_mem_io + ulptx_idata + payload + padding */ +#define MAX_IMM_ULPTX_WR_LEN (32 + 8 + 256 + 8) + +/* for TX: a skb must have a headroom of at least TX_HEADER_LEN bytes */ +#define TX_HEADER_LEN \ + (sizeof(struct fw_ofld_tx_data_wr) + sizeof(struct sge_opaque_hdr)) +#define TX_TLSHDR_LEN \ + (sizeof(struct fw_tlstx_data_wr) + sizeof(struct cpl_tx_tls_sfo) + \ + sizeof(struct sge_opaque_hdr)) +#define TXDATA_SKB_LEN 128 + +enum { + CPL_TX_TLS_SFO_TYPE_CCS, + CPL_TX_TLS_SFO_TYPE_ALERT, + CPL_TX_TLS_SFO_TYPE_HANDSHAKE, + CPL_TX_TLS_SFO_TYPE_DATA, + CPL_TX_TLS_SFO_TYPE_HEARTBEAT, +}; + +enum { + TLS_HDR_TYPE_CCS = 20, + TLS_HDR_TYPE_ALERT, + TLS_HDR_TYPE_HANDSHAKE, + TLS_HDR_TYPE_RECORD, + TLS_HDR_TYPE_HEARTBEAT, +}; + +typedef void (*defer_handler_t)(struct chtls_dev *dev, struct sk_buff *skb); + +struct deferred_skb_cb { + defer_handler_t handler; + struct chtls_dev *dev; +}; + +#define DEFERRED_SKB_CB(skb) ((struct deferred_skb_cb *)(skb)->cb) +#define failover_flowc_wr_len offsetof(struct fw_flowc_wr, mnemval[3]) +#define WR_SKB_CB(skb) ((struct wr_skb_cb *)(skb)->cb) +#define ACCEPT_QUEUE(sk) (&inet_csk(sk)->icsk_accept_queue.rskq_accept_head) + +#define SND_WSCALE(tp) ((tp)->rx_opt.snd_wscale) +#define RCV_WSCALE(tp) ((tp)->rx_opt.rcv_wscale) +#define USER_MSS(tp) ((tp)->rx_opt.user_mss) +#define TS_RECENT_STAMP(tp) ((tp)->rx_opt.ts_recent_stamp) +#define WSCALE_OK(tp) ((tp)->rx_opt.wscale_ok) +#define TSTAMP_OK(tp) ((tp)->rx_opt.tstamp_ok) +#define SACK_OK(tp) ((tp)->rx_opt.sack_ok) +#define INC_ORPHAN_COUNT(sk) percpu_counter_inc((sk)->sk_prot->orphan_count) + +/* TLS SKB */ +#define skb_ulp_tls_skb_flags(skb) (ULP_SKB_CB(skb)->ulp.tls.ofld) +#define skb_ulp_tls_skb_iv(skb) (ULP_SKB_CB(skb)->ulp.tls.iv) + +void chtls_defer_reply(struct sk_buff *skb, struct chtls_dev *dev, + defer_handler_t handler); + +/* + * Returns true if the socket is in one of the supplied states. 
+ */ +static inline unsigned int sk_in_state(const struct sock *sk, + unsigned int states) +{ + return states & (1 << sk->sk_state); +} + +static void chtls_rsk_destructor(struct request_sock *req) +{ + /* do nothing */ +} + +static inline void chtls_init_rsk_ops(struct proto *chtls_tcp_prot, + struct request_sock_ops *chtls_tcp_ops, + struct proto *tcp_prot, int family) +{ + memset(chtls_tcp_ops, 0, sizeof(*chtls_tcp_ops)); + chtls_tcp_ops->family = family; + chtls_tcp_ops->obj_size = sizeof(struct tcp_request_sock); + chtls_tcp_ops->destructor = chtls_rsk_destructor; + chtls_tcp_ops->slab = tcp_prot->rsk_prot->slab; + chtls_tcp_prot->rsk_prot = chtls_tcp_ops; +} + +static inline void chtls_reqsk_free(struct request_sock *req) +{ + if (req->rsk_listener) + sock_put(req->rsk_listener); + kmem_cache_free(req->rsk_ops->slab, req); +} + +#define DECLARE_TASK_FUNC(task, task_param) \ + static void task(struct work_struct *task_param) + +static inline void sk_wakeup_sleepers(struct sock *sk, bool interruptable) +{ + struct socket_wq *wq; + + rcu_read_lock(); + wq = rcu_dereference(sk->sk_wq); + if (skwq_has_sleeper(wq)) { + if (interruptable) + wake_up_interruptible(sk_sleep(sk)); + else + wake_up_all(sk_sleep(sk)); + } + rcu_read_unlock(); +} + +static inline void chtls_set_req_port(struct request_sock *oreq, + __be16 source, __be16 dest) +{ + inet_rsk(oreq)->ir_rmt_port = source; + inet_rsk(oreq)->ir_num = ntohs(dest); +} + +static inline void chtls_set_req_addr(struct request_sock *oreq, + __be32 local_ip, __be32 peer_ip) +{ + inet_rsk(oreq)->ir_loc_addr = local_ip; + inet_rsk(oreq)->ir_rmt_addr = peer_ip; +} + +static inline void chtls_free_skb(struct sock *sk, struct sk_buff *skb) +{ + skb_dst_set(skb, NULL); + __skb_unlink(skb, &sk->sk_receive_queue); + __kfree_skb(skb); +} + +static inline void chtls_kfree_skb(struct sock *sk, struct sk_buff *skb) +{ + skb_dst_set(skb, NULL); + __skb_unlink(skb, &sk->sk_receive_queue); + kfree_skb(skb); +} + +static inline void enqueue_wr(struct chtls_sock *csk, struct sk_buff *skb) +{ + WR_SKB_CB(skb)->next_wr = NULL; + + skb_get(skb); + + if (!csk->wr_skb_head) + csk->wr_skb_head = skb; + else + WR_SKB_CB(csk->wr_skb_tail)->next_wr = skb; + csk->wr_skb_tail = skb; +} +#endif From patchwork Thu Feb 22 17:52:01 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atul Gupta X-Patchwork-Id: 876785 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 3znMPv36CRz9ryG for ; Fri, 23 Feb 2018 04:52:35 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S933615AbeBVRwd (ORCPT ); Thu, 22 Feb 2018 12:52:33 -0500 Received: from stargate.chelsio.com ([12.32.117.8]:55035 "EHLO stargate.chelsio.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S933470AbeBVRwZ (ORCPT ); Thu, 22 Feb 2018 12:52:25 -0500 Received: from chumthang.asicdesigners.com (indra-ipmi.blr.asicdesigners.com [10.193.186.96] (may be forged)) by stargate.chelsio.com (8.13.8/8.13.8) with ESMTP id w1MHqA30014680; Thu, 22 Feb 2018 09:52:15 -0800 From: Atul Gupta To: davejwatson@fb.com, davem@davemloft.net, 
herbert@gondor.apana.org.au Cc: sd@queasysnail.net, linux-crypto@vger.kernel.org, netdev@vger.kernel.org, ganeshgr@chelsio.com Subject: [Crypto v7 05/12] cxgb4: Inline TLS FW Interface Date: Thu, 22 Feb 2018 23:22:01 +0530 Message-Id: <1519321928-22995-2-git-send-email-atul.gupta@chelsio.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1519321928-22995-1-git-send-email-atul.gupta@chelsio.com> References: <1519321928-22995-1-git-send-email-atul.gupta@chelsio.com> Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Key area size in hw-config file. CPL struct for TLS request and response. Work request for Inline TLS. Signed-off-by: Atul Gupta --- drivers/net/ethernet/chelsio/cxgb4/t4_msg.h | 121 ++++++++++++++++++- drivers/net/ethernet/chelsio/cxgb4/t4_regs.h | 2 + drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h | 165 +++++++++++++++++++++++++- 3 files changed, 283 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h b/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h index d0db442..507cb5a 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h +++ b/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h @@ -81,6 +81,7 @@ enum { CPL_RX_ISCSI_CMP = 0x45, CPL_TRACE_PKT_T5 = 0x48, CPL_RX_ISCSI_DDP = 0x49, + CPL_RX_TLS_CMP = 0x4E, CPL_RDMA_READ_REQ = 0x60, @@ -88,6 +89,7 @@ enum { CPL_ACT_OPEN_REQ6 = 0x83, CPL_TX_TLS_PDU = 0x88, + CPL_TX_TLS_SFO = 0x89, CPL_TX_SEC_PDU = 0x8A, CPL_TX_TLS_ACK = 0x8B, @@ -97,6 +99,7 @@ enum { CPL_RX_MPS_PKT = 0xAF, CPL_TRACE_PKT = 0xB0, + CPL_TLS_DATA = 0xB1, CPL_ISCSI_DATA = 0xB2, CPL_FW4_MSG = 0xC0, @@ -151,6 +154,7 @@ enum { ULP_MODE_RDMA = 4, ULP_MODE_TCPDDP = 5, ULP_MODE_FCOE = 6, + ULP_MODE_TLS = 8, }; enum { @@ -1415,6 +1419,14 @@ struct cpl_tx_data { #define TX_FORCE_S 13 #define TX_FORCE_V(x) ((x) << TX_FORCE_S) +#define TX_SHOVE_S 14 +#define TX_SHOVE_V(x) ((x) << TX_SHOVE_S) + +#define TX_ULP_MODE_S 10 +#define TX_ULP_MODE_M 0x7 +#define TX_ULP_MODE_V(x) ((x) << TX_ULP_MODE_S) +#define TX_ULP_MODE_G(x) (((x) >> TX_ULP_MODE_S) & TX_ULP_MODE_M) + #define T6_TX_FORCE_S 20 #define T6_TX_FORCE_V(x) ((x) << T6_TX_FORCE_S) #define T6_TX_FORCE_F T6_TX_FORCE_V(1U) @@ -1429,12 +1441,21 @@ enum { ULP_TX_SC_NOOP = 0x80, ULP_TX_SC_IMM = 0x81, ULP_TX_SC_DSGL = 0x82, - ULP_TX_SC_ISGL = 0x83 + ULP_TX_SC_ISGL = 0x83, + ULP_TX_SC_MEMRD = 0x86 }; #define ULPTX_CMD_S 24 #define ULPTX_CMD_V(x) ((x) << ULPTX_CMD_S) +#define ULPTX_LEN16_S 0 +#define ULPTX_LEN16_M 0xFF +#define ULPTX_LEN16_V(x) ((x) << ULPTX_LEN16_S) + +#define ULP_TX_SC_MORE_S 23 +#define ULP_TX_SC_MORE_V(x) ((x) << ULP_TX_SC_MORE_S) +#define ULP_TX_SC_MORE_F ULP_TX_SC_MORE_V(1U) + struct ulptx_sge_pair { __be32 len[2]; __be64 addr[2]; @@ -2112,4 +2133,102 @@ enum { X_CPL_RX_MPS_PKT_TYPE_QFC = 1 << 2, X_CPL_RX_MPS_PKT_TYPE_PTP = 1 << 3 }; + +struct cpl_tx_tls_sfo { + __be32 op_to_seg_len; + __be32 pld_len; + __be32 type_protover; + __be32 r1_lo; + __be32 seqno_numivs; + __be32 ivgen_hdrlen; + __be64 scmd1; +}; + +/* cpl_tx_tls_sfo macros */ +#define CPL_TX_TLS_SFO_OPCODE_S 24 +#define CPL_TX_TLS_SFO_OPCODE_V(x) ((x) << CPL_TX_TLS_SFO_OPCODE_S) + +#define CPL_TX_TLS_SFO_DATA_TYPE_S 20 +#define CPL_TX_TLS_SFO_DATA_TYPE_V(x) ((x) << CPL_TX_TLS_SFO_DATA_TYPE_S) + +#define CPL_TX_TLS_SFO_CPL_LEN_S 16 +#define CPL_TX_TLS_SFO_CPL_LEN_V(x) ((x) << CPL_TX_TLS_SFO_CPL_LEN_S) + +#define CPL_TX_TLS_SFO_SEG_LEN_S 0 +#define CPL_TX_TLS_SFO_SEG_LEN_M 0xffff +#define CPL_TX_TLS_SFO_SEG_LEN_V(x) ((x) << CPL_TX_TLS_SFO_SEG_LEN_S) +#define 
CPL_TX_TLS_SFO_SEG_LEN_G(x) \ + (((x) >> CPL_TX_TLS_SFO_SEG_LEN_S) & CPL_TX_TLS_SFO_SEG_LEN_M) + +#define CPL_TX_TLS_SFO_TYPE_S 24 +#define CPL_TX_TLS_SFO_TYPE_M 0xff +#define CPL_TX_TLS_SFO_TYPE_V(x) ((x) << CPL_TX_TLS_SFO_TYPE_S) +#define CPL_TX_TLS_SFO_TYPE_G(x) \ + (((x) >> CPL_TX_TLS_SFO_TYPE_S) & CPL_TX_TLS_SFO_TYPE_M) + +#define CPL_TX_TLS_SFO_PROTOVER_S 8 +#define CPL_TX_TLS_SFO_PROTOVER_M 0xffff +#define CPL_TX_TLS_SFO_PROTOVER_V(x) ((x) << CPL_TX_TLS_SFO_PROTOVER_S) +#define CPL_TX_TLS_SFO_PROTOVER_G(x) \ + (((x) >> CPL_TX_TLS_SFO_PROTOVER_S) & CPL_TX_TLS_SFO_PROTOVER_M) + +struct cpl_tls_data { + struct rss_header rsshdr; + union opcode_tid ot; + __be32 length_pkd; + __be32 seq; + __be32 r1; +}; + +#define CPL_TLS_DATA_OPCODE_S 24 +#define CPL_TLS_DATA_OPCODE_M 0xff +#define CPL_TLS_DATA_OPCODE_V(x) ((x) << CPL_TLS_DATA_OPCODE_S) +#define CPL_TLS_DATA_OPCODE_G(x) \ + (((x) >> CPL_TLS_DATA_OPCODE_S) & CPL_TLS_DATA_OPCODE_M) + +#define CPL_TLS_DATA_TID_S 0 +#define CPL_TLS_DATA_TID_M 0xffffff +#define CPL_TLS_DATA_TID_V(x) ((x) << CPL_TLS_DATA_TID_S) +#define CPL_TLS_DATA_TID_G(x) \ + (((x) >> CPL_TLS_DATA_TID_S) & CPL_TLS_DATA_TID_M) + +#define CPL_TLS_DATA_LENGTH_S 0 +#define CPL_TLS_DATA_LENGTH_M 0xffff +#define CPL_TLS_DATA_LENGTH_V(x) ((x) << CPL_TLS_DATA_LENGTH_S) +#define CPL_TLS_DATA_LENGTH_G(x) \ + (((x) >> CPL_TLS_DATA_LENGTH_S) & CPL_TLS_DATA_LENGTH_M) + +struct cpl_rx_tls_cmp { + struct rss_header rsshdr; + union opcode_tid ot; + __be32 pdulength_length; + __be32 seq; + __be32 ddp_report; + __be32 r; + __be32 ddp_valid; +}; + +#define CPL_RX_TLS_CMP_OPCODE_S 24 +#define CPL_RX_TLS_CMP_OPCODE_M 0xff +#define CPL_RX_TLS_CMP_OPCODE_V(x) ((x) << CPL_RX_TLS_CMP_OPCODE_S) +#define CPL_RX_TLS_CMP_OPCODE_G(x) \ + (((x) >> CPL_RX_TLS_CMP_OPCODE_S) & CPL_RX_TLS_CMP_OPCODE_M) + +#define CPL_RX_TLS_CMP_TID_S 0 +#define CPL_RX_TLS_CMP_TID_M 0xffffff +#define CPL_RX_TLS_CMP_TID_V(x) ((x) << CPL_RX_TLS_CMP_TID_S) +#define CPL_RX_TLS_CMP_TID_G(x) \ + (((x) >> CPL_RX_TLS_CMP_TID_S) & CPL_RX_TLS_CMP_TID_M) + +#define CPL_RX_TLS_CMP_PDULENGTH_S 16 +#define CPL_RX_TLS_CMP_PDULENGTH_M 0xffff +#define CPL_RX_TLS_CMP_PDULENGTH_V(x) ((x) << CPL_RX_TLS_CMP_PDULENGTH_S) +#define CPL_RX_TLS_CMP_PDULENGTH_G(x) \ + (((x) >> CPL_RX_TLS_CMP_PDULENGTH_S) & CPL_RX_TLS_CMP_PDULENGTH_M) + +#define CPL_RX_TLS_CMP_LENGTH_S 0 +#define CPL_RX_TLS_CMP_LENGTH_M 0xffff +#define CPL_RX_TLS_CMP_LENGTH_V(x) ((x) << CPL_RX_TLS_CMP_LENGTH_S) +#define CPL_RX_TLS_CMP_LENGTH_G(x) \ + (((x) >> CPL_RX_TLS_CMP_LENGTH_S) & CPL_RX_TLS_CMP_LENGTH_M) #endif /* __T4_MSG_H */ diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_regs.h b/drivers/net/ethernet/chelsio/cxgb4/t4_regs.h index a6df733..276fdf2 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/t4_regs.h +++ b/drivers/net/ethernet/chelsio/cxgb4/t4_regs.h @@ -2775,6 +2775,8 @@ #define ULP_RX_LA_RDPTR_A 0x19240 #define ULP_RX_LA_RDDATA_A 0x19244 #define ULP_RX_LA_WRPTR_A 0x19248 +#define ULP_RX_TLS_KEY_LLIMIT_A 0x192ac +#define ULP_RX_TLS_KEY_ULIMIT_A 0x192b0 #define HPZ3_S 24 #define HPZ3_V(x) ((x) << HPZ3_S) diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h b/drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h index 0d83b40..c3f086a 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h +++ b/drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h @@ -104,6 +104,7 @@ enum fw_wr_opcodes { FW_RI_INV_LSTAG_WR = 0x1a, FW_ISCSI_TX_DATA_WR = 0x45, FW_PTP_TX_PKT_WR = 0x46, + FW_TLSTX_DATA_WR = 0x68, FW_CRYPTO_LOOKASIDE_WR = 0X6d, FW_LASTC2E_WR = 0x70, FW_FILTER2_WR = 0x77 @@ -634,6 
+635,30 @@ struct fw_ofld_connection_wr { #define FW_OFLD_CONNECTION_WR_CPLPASSACCEPTRPL_F \ FW_OFLD_CONNECTION_WR_CPLPASSACCEPTRPL_V(1U) +enum fw_flowc_mnem_tcpstate { + FW_FLOWC_MNEM_TCPSTATE_CLOSED = 0, /* illegal */ + FW_FLOWC_MNEM_TCPSTATE_LISTEN = 1, /* illegal */ + FW_FLOWC_MNEM_TCPSTATE_SYNSENT = 2, /* illegal */ + FW_FLOWC_MNEM_TCPSTATE_SYNRECEIVED = 3, /* illegal */ + FW_FLOWC_MNEM_TCPSTATE_ESTABLISHED = 4, /* default */ + FW_FLOWC_MNEM_TCPSTATE_CLOSEWAIT = 5, /* got peer close already */ + FW_FLOWC_MNEM_TCPSTATE_FINWAIT1 = 6, /* haven't gotten ACK for FIN and + * will resend FIN - equiv ESTAB + */ + FW_FLOWC_MNEM_TCPSTATE_CLOSING = 7, /* haven't gotten ACK for FIN and + * will resend FIN but have + * received FIN + */ + FW_FLOWC_MNEM_TCPSTATE_LASTACK = 8, /* haven't gotten ACK for FIN and + * will resend FIN but have + * received FIN + */ + FW_FLOWC_MNEM_TCPSTATE_FINWAIT2 = 9, /* sent FIN and got FIN + ACK, + * waiting for FIN + */ + FW_FLOWC_MNEM_TCPSTATE_TIMEWAIT = 10, /* not expected */ +}; + enum fw_flowc_mnem { FW_FLOWC_MNEM_PFNVFN, /* PFN [15:8] VFN [7:0] */ FW_FLOWC_MNEM_CH, @@ -650,6 +675,8 @@ enum fw_flowc_mnem { FW_FLOWC_MNEM_DCBPRIO, FW_FLOWC_MNEM_SND_SCALE, FW_FLOWC_MNEM_RCV_SCALE, + FW_FLOWC_MNEM_ULD_MODE, + FW_FLOWC_MNEM_MAX, }; struct fw_flowc_mnemval { @@ -674,6 +701,14 @@ struct fw_ofld_tx_data_wr { __be32 tunnel_to_proxy; }; +#define FW_OFLD_TX_DATA_WR_ALIGNPLD_S 30 +#define FW_OFLD_TX_DATA_WR_ALIGNPLD_V(x) ((x) << FW_OFLD_TX_DATA_WR_ALIGNPLD_S) +#define FW_OFLD_TX_DATA_WR_ALIGNPLD_F FW_OFLD_TX_DATA_WR_ALIGNPLD_V(1U) + +#define FW_OFLD_TX_DATA_WR_SHOVE_S 29 +#define FW_OFLD_TX_DATA_WR_SHOVE_V(x) ((x) << FW_OFLD_TX_DATA_WR_SHOVE_S) +#define FW_OFLD_TX_DATA_WR_SHOVE_F FW_OFLD_TX_DATA_WR_SHOVE_V(1U) + #define FW_OFLD_TX_DATA_WR_TUNNEL_S 19 #define FW_OFLD_TX_DATA_WR_TUNNEL_V(x) ((x) << FW_OFLD_TX_DATA_WR_TUNNEL_S) @@ -690,10 +725,6 @@ struct fw_ofld_tx_data_wr { #define FW_OFLD_TX_DATA_WR_MORE_S 15 #define FW_OFLD_TX_DATA_WR_MORE_V(x) ((x) << FW_OFLD_TX_DATA_WR_MORE_S) -#define FW_OFLD_TX_DATA_WR_SHOVE_S 14 -#define FW_OFLD_TX_DATA_WR_SHOVE_V(x) ((x) << FW_OFLD_TX_DATA_WR_SHOVE_S) -#define FW_OFLD_TX_DATA_WR_SHOVE_F FW_OFLD_TX_DATA_WR_SHOVE_V(1U) - #define FW_OFLD_TX_DATA_WR_ULPMODE_S 10 #define FW_OFLD_TX_DATA_WR_ULPMODE_V(x) ((x) << FW_OFLD_TX_DATA_WR_ULPMODE_S) @@ -1119,6 +1150,12 @@ enum fw_caps_config_iscsi { FW_CAPS_CONFIG_ISCSI_TARGET_CNXOFLD = 0x00000008, }; +enum fw_caps_config_crypto { + FW_CAPS_CONFIG_CRYPTO_LOOKASIDE = 0x00000001, + FW_CAPS_CONFIG_TLS_INLINE = 0x00000002, + FW_CAPS_CONFIG_IPSEC_INLINE = 0x00000004, +}; + enum fw_caps_config_fcoe { FW_CAPS_CONFIG_FCOE_INITIATOR = 0x00000001, FW_CAPS_CONFIG_FCOE_TARGET = 0x00000002, @@ -1259,6 +1296,8 @@ enum fw_params_param_pfvf { FW_PARAMS_PARAM_PFVF_HPFILTER_START = 0x32, FW_PARAMS_PARAM_PFVF_HPFILTER_END = 0x33, FW_PARAMS_PARAM_PFVF_NCRYPTO_LOOKASIDE = 0x39, + FW_PARAMS_PARAM_PFVF_TLS_START = 0x34, + FW_PARAMS_PARAM_PFVF_TLS_END = 0x35, FW_PARAMS_PARAM_PFVF_PORT_CAPS32 = 0x3A, }; @@ -3778,4 +3817,122 @@ struct fw_crypto_lookaside_wr { (((x) >> FW_CRYPTO_LOOKASIDE_WR_HASH_SIZE_S) & \ FW_CRYPTO_LOOKASIDE_WR_HASH_SIZE_M) +struct fw_tlstx_data_wr { + __be32 op_to_immdlen; + __be32 flowid_len16; + __be32 plen; + __be32 lsodisable_to_flags; + __be32 r5; + __be32 ctxloc_to_exp; + __be16 mfs; + __be16 adjustedplen_pkd; + __be16 expinplenmax_pkd; + __u8 pdusinplenmax_pkd; + __u8 r10; +}; + +#define FW_TLSTX_DATA_WR_OPCODE_S 24 +#define FW_TLSTX_DATA_WR_OPCODE_M 0xff +#define FW_TLSTX_DATA_WR_OPCODE_V(x) ((x) 
<< FW_TLSTX_DATA_WR_OPCODE_S) +#define FW_TLSTX_DATA_WR_OPCODE_G(x) \ + (((x) >> FW_TLSTX_DATA_WR_OPCODE_S) & FW_TLSTX_DATA_WR_OPCODE_M) + +#define FW_TLSTX_DATA_WR_COMPL_S 21 +#define FW_TLSTX_DATA_WR_COMPL_M 0x1 +#define FW_TLSTX_DATA_WR_COMPL_V(x) ((x) << FW_TLSTX_DATA_WR_COMPL_S) +#define FW_TLSTX_DATA_WR_COMPL_G(x) \ + (((x) >> FW_TLSTX_DATA_WR_COMPL_S) & FW_TLSTX_DATA_WR_COMPL_M) +#define FW_TLSTX_DATA_WR_COMPL_F FW_TLSTX_DATA_WR_COMPL_V(1U) + +#define FW_TLSTX_DATA_WR_IMMDLEN_S 0 +#define FW_TLSTX_DATA_WR_IMMDLEN_M 0xff +#define FW_TLSTX_DATA_WR_IMMDLEN_V(x) ((x) << FW_TLSTX_DATA_WR_IMMDLEN_S) +#define FW_TLSTX_DATA_WR_IMMDLEN_G(x) \ + (((x) >> FW_TLSTX_DATA_WR_IMMDLEN_S) & FW_TLSTX_DATA_WR_IMMDLEN_M) + +#define FW_TLSTX_DATA_WR_FLOWID_S 8 +#define FW_TLSTX_DATA_WR_FLOWID_M 0xfffff +#define FW_TLSTX_DATA_WR_FLOWID_V(x) ((x) << FW_TLSTX_DATA_WR_FLOWID_S) +#define FW_TLSTX_DATA_WR_FLOWID_G(x) \ + (((x) >> FW_TLSTX_DATA_WR_FLOWID_S) & FW_TLSTX_DATA_WR_FLOWID_M) + +#define FW_TLSTX_DATA_WR_LEN16_S 0 +#define FW_TLSTX_DATA_WR_LEN16_M 0xff +#define FW_TLSTX_DATA_WR_LEN16_V(x) ((x) << FW_TLSTX_DATA_WR_LEN16_S) +#define FW_TLSTX_DATA_WR_LEN16_G(x) \ + (((x) >> FW_TLSTX_DATA_WR_LEN16_S) & FW_TLSTX_DATA_WR_LEN16_M) + +#define FW_TLSTX_DATA_WR_LSODISABLE_S 31 +#define FW_TLSTX_DATA_WR_LSODISABLE_M 0x1 +#define FW_TLSTX_DATA_WR_LSODISABLE_V(x) \ + ((x) << FW_TLSTX_DATA_WR_LSODISABLE_S) +#define FW_TLSTX_DATA_WR_LSODISABLE_G(x) \ + (((x) >> FW_TLSTX_DATA_WR_LSODISABLE_S) & FW_TLSTX_DATA_WR_LSODISABLE_M) +#define FW_TLSTX_DATA_WR_LSODISABLE_F FW_TLSTX_DATA_WR_LSODISABLE_V(1U) + +#define FW_TLSTX_DATA_WR_ALIGNPLD_S 30 +#define FW_TLSTX_DATA_WR_ALIGNPLD_M 0x1 +#define FW_TLSTX_DATA_WR_ALIGNPLD_V(x) ((x) << FW_TLSTX_DATA_WR_ALIGNPLD_S) +#define FW_TLSTX_DATA_WR_ALIGNPLD_G(x) \ + (((x) >> FW_TLSTX_DATA_WR_ALIGNPLD_S) & FW_TLSTX_DATA_WR_ALIGNPLD_M) +#define FW_TLSTX_DATA_WR_ALIGNPLD_F FW_TLSTX_DATA_WR_ALIGNPLD_V(1U) + +#define FW_TLSTX_DATA_WR_ALIGNPLDSHOVE_S 29 +#define FW_TLSTX_DATA_WR_ALIGNPLDSHOVE_M 0x1 +#define FW_TLSTX_DATA_WR_ALIGNPLDSHOVE_V(x) \ + ((x) << FW_TLSTX_DATA_WR_ALIGNPLDSHOVE_S) +#define FW_TLSTX_DATA_WR_ALIGNPLDSHOVE_G(x) \ + (((x) >> FW_TLSTX_DATA_WR_ALIGNPLDSHOVE_S) & \ + FW_TLSTX_DATA_WR_ALIGNPLDSHOVE_M) +#define FW_TLSTX_DATA_WR_ALIGNPLDSHOVE_F FW_TLSTX_DATA_WR_ALIGNPLDSHOVE_V(1U) + +#define FW_TLSTX_DATA_WR_FLAGS_S 0 +#define FW_TLSTX_DATA_WR_FLAGS_M 0xfffffff +#define FW_TLSTX_DATA_WR_FLAGS_V(x) ((x) << FW_TLSTX_DATA_WR_FLAGS_S) +#define FW_TLSTX_DATA_WR_FLAGS_G(x) \ + (((x) >> FW_TLSTX_DATA_WR_FLAGS_S) & FW_TLSTX_DATA_WR_FLAGS_M) + +#define FW_TLSTX_DATA_WR_CTXLOC_S 30 +#define FW_TLSTX_DATA_WR_CTXLOC_M 0x3 +#define FW_TLSTX_DATA_WR_CTXLOC_V(x) ((x) << FW_TLSTX_DATA_WR_CTXLOC_S) +#define FW_TLSTX_DATA_WR_CTXLOC_G(x) \ + (((x) >> FW_TLSTX_DATA_WR_CTXLOC_S) & FW_TLSTX_DATA_WR_CTXLOC_M) + +#define FW_TLSTX_DATA_WR_IVDSGL_S 29 +#define FW_TLSTX_DATA_WR_IVDSGL_M 0x1 +#define FW_TLSTX_DATA_WR_IVDSGL_V(x) ((x) << FW_TLSTX_DATA_WR_IVDSGL_S) +#define FW_TLSTX_DATA_WR_IVDSGL_G(x) \ + (((x) >> FW_TLSTX_DATA_WR_IVDSGL_S) & FW_TLSTX_DATA_WR_IVDSGL_M) +#define FW_TLSTX_DATA_WR_IVDSGL_F FW_TLSTX_DATA_WR_IVDSGL_V(1U) + +#define FW_TLSTX_DATA_WR_KEYSIZE_S 24 +#define FW_TLSTX_DATA_WR_KEYSIZE_M 0x1f +#define FW_TLSTX_DATA_WR_KEYSIZE_V(x) ((x) << FW_TLSTX_DATA_WR_KEYSIZE_S) +#define FW_TLSTX_DATA_WR_KEYSIZE_G(x) \ + (((x) >> FW_TLSTX_DATA_WR_KEYSIZE_S) & FW_TLSTX_DATA_WR_KEYSIZE_M) + +#define FW_TLSTX_DATA_WR_NUMIVS_S 14 +#define FW_TLSTX_DATA_WR_NUMIVS_M 0xff +#define 
FW_TLSTX_DATA_WR_NUMIVS_V(x) ((x) << FW_TLSTX_DATA_WR_NUMIVS_S) +#define FW_TLSTX_DATA_WR_NUMIVS_G(x) \ + (((x) >> FW_TLSTX_DATA_WR_NUMIVS_S) & FW_TLSTX_DATA_WR_NUMIVS_M) + +#define FW_TLSTX_DATA_WR_EXP_S 0 +#define FW_TLSTX_DATA_WR_EXP_M 0x3fff +#define FW_TLSTX_DATA_WR_EXP_V(x) ((x) << FW_TLSTX_DATA_WR_EXP_S) +#define FW_TLSTX_DATA_WR_EXP_G(x) \ + (((x) >> FW_TLSTX_DATA_WR_EXP_S) & FW_TLSTX_DATA_WR_EXP_M) + +#define FW_TLSTX_DATA_WR_ADJUSTEDPLEN_S 1 +#define FW_TLSTX_DATA_WR_ADJUSTEDPLEN_V(x) \ + ((x) << FW_TLSTX_DATA_WR_ADJUSTEDPLEN_S) + +#define FW_TLSTX_DATA_WR_EXPINPLENMAX_S 4 +#define FW_TLSTX_DATA_WR_EXPINPLENMAX_V(x) \ + ((x) << FW_TLSTX_DATA_WR_EXPINPLENMAX_S) + +#define FW_TLSTX_DATA_WR_PDUSINPLENMAX_S 2 +#define FW_TLSTX_DATA_WR_PDUSINPLENMAX_V(x) \ + ((x) << FW_TLSTX_DATA_WR_PDUSINPLENMAX_S) + #endif /* _T4FW_INTERFACE_H_ */ From patchwork Thu Feb 22 17:52:02 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atul Gupta X-Patchwork-Id: 876794 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 3znMRD6kZvz9sW8 for ; Fri, 23 Feb 2018 04:53:44 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S933620AbeBVRwj (ORCPT ); Thu, 22 Feb 2018 12:52:39 -0500 Received: from stargate.chelsio.com ([12.32.117.8]:19102 "EHLO stargate.chelsio.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S933441AbeBVRwh (ORCPT ); Thu, 22 Feb 2018 12:52:37 -0500 Received: from chumthang.asicdesigners.com (indra-ipmi.blr.asicdesigners.com [10.193.186.96] (may be forged)) by stargate.chelsio.com (8.13.8/8.13.8) with ESMTP id w1MHqA31014680; Thu, 22 Feb 2018 09:52:19 -0800 From: Atul Gupta To: davejwatson@fb.com, davem@davemloft.net, herbert@gondor.apana.org.au Cc: sd@queasysnail.net, linux-crypto@vger.kernel.org, netdev@vger.kernel.org, ganeshgr@chelsio.com Subject: [Crypto v7 06/12] cxgb4: LLD driver changes to enable TLS Date: Thu, 22 Feb 2018 23:22:02 +0530 Message-Id: <1519321928-22995-3-git-send-email-atul.gupta@chelsio.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1519321928-22995-1-git-send-email-atul.gupta@chelsio.com> References: <1519321928-22995-1-git-send-email-atul.gupta@chelsio.com> Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Read FW capability. Read key area size. Dump the TLS record count. 
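As a rough illustration of how a ULD could consume the firmware-reported key range (the start/size pair stored into the adapter's virtual resources in the diff below), here is a minimal sketch of backing the bitmap-based struct key_map declared in chtls.h (patch 04) with that range. The example_* names and bodies are assumptions, not the series' actual chtls_init_kmap()/key-id allocator, which is outside this excerpt.

#include <linux/bitops.h>
#include <linux/bitmap.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include "chtls.h"

/* Size the bitmap from the key region reported by FW_PARAM_PFVF(TLS_START/END):
 * one bit per 64-byte key context (TLS_KEY_CONTEXT_SZ).
 */
static int example_init_kmap(struct chtls_dev *cdev, unsigned int key_start,
			     unsigned int key_size)
{
	unsigned int num_keys = key_size / TLS_KEY_CONTEXT_SZ;

	cdev->kmap.start = key_start;
	cdev->kmap.size = num_keys;
	cdev->kmap.available = num_keys;
	cdev->kmap.addr = kcalloc(BITS_TO_LONGS(num_keys),
				  sizeof(*cdev->kmap.addr), GFP_KERNEL);
	if (!cdev->kmap.addr)
		return -ENOMEM;
	spin_lock_init(&cdev->kmap.lock);
	return 0;
}

/* Hand out one key-context slot; the caller could turn the id into an
 * on-chip address as key_start + id * TLS_KEY_CONTEXT_SZ.
 */
static int example_get_keyid(struct chtls_dev *cdev)
{
	unsigned int id;
	int ret = -ENOSPC;

	spin_lock_bh(&cdev->kmap.lock);
	id = find_first_zero_bit(cdev->kmap.addr, cdev->kmap.size);
	if (id < cdev->kmap.size) {
		__set_bit(id, cdev->kmap.addr);
		cdev->kmap.available--;
		ret = id;
	}
	spin_unlock_bh(&cdev->kmap.lock);
	return ret;
}

The "available" counter gives the driver a cheap way to decline further offload requests once the on-chip key memory is exhausted, e.g. by falling back to software TLS for new connections.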
Signed-off-by: Atul Gupta --- drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c | 32 +++++--- drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h | 7 ++ drivers/net/ethernet/chelsio/cxgb4/sge.c | 98 ++++++++++++++++++++++++- 3 files changed, 126 insertions(+), 11 deletions(-) diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c index 56bc626..ab5937e 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c +++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c @@ -4284,18 +4284,32 @@ static int adap_init0(struct adapter *adap) adap->num_ofld_uld += 2; } if (caps_cmd.cryptocaps) { - /* Should query params here...TODO */ - params[0] = FW_PARAM_PFVF(NCRYPTO_LOOKASIDE); - ret = t4_query_params(adap, adap->mbox, adap->pf, 0, 2, - params, val); - if (ret < 0) { - if (ret != -EINVAL) + if (ntohs(caps_cmd.cryptocaps) & + FW_CAPS_CONFIG_CRYPTO_LOOKASIDE) { + params[0] = FW_PARAM_PFVF(NCRYPTO_LOOKASIDE); + ret = t4_query_params(adap, adap->mbox, adap->pf, 0, + 2, params, val); + if (ret < 0) { + if (ret != -EINVAL) + goto bye; + } else { + adap->vres.ncrypto_fc = val[0]; + } + adap->num_ofld_uld += 1; + } + if (ntohs(caps_cmd.cryptocaps) & + FW_CAPS_CONFIG_TLS_INLINE) { + params[0] = FW_PARAM_PFVF(TLS_START); + params[1] = FW_PARAM_PFVF(TLS_END); + ret = t4_query_params(adap, adap->mbox, adap->pf, 0, + 2, params, val); + if (ret < 0) goto bye; - } else { - adap->vres.ncrypto_fc = val[0]; + adap->vres.key.start = val[0]; + adap->vres.key.size = val[1] - val[0] + 1; + adap->num_uld += 1; } adap->params.crypto = ntohs(caps_cmd.cryptocaps); - adap->num_uld += 1; } #undef FW_PARAM_PFVF #undef FW_PARAM_DEV diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h index a14e8db..3d3ef3f 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h +++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h @@ -237,6 +237,7 @@ enum cxgb4_uld { CXGB4_ULD_ISCSI, CXGB4_ULD_ISCSIT, CXGB4_ULD_CRYPTO, + CXGB4_ULD_TLS, CXGB4_ULD_MAX }; @@ -287,6 +288,7 @@ struct cxgb4_virt_res { /* virtualized HW resources */ struct cxgb4_range qp; struct cxgb4_range cq; struct cxgb4_range ocq; + struct cxgb4_range key; unsigned int ncrypto_fc; }; @@ -298,6 +300,9 @@ struct chcr_stats_debug { atomic_t error; atomic_t fallback; atomic_t ipsec_cnt; + atomic_t tls_pdu_tx; + atomic_t tls_pdu_rx; + atomic_t tls_key; }; #define OCQ_WIN_OFFSET(pdev, vres) \ @@ -378,6 +383,8 @@ struct cxgb4_uld_info { int cxgb4_register_uld(enum cxgb4_uld type, const struct cxgb4_uld_info *p); int cxgb4_unregister_uld(enum cxgb4_uld type); int cxgb4_ofld_send(struct net_device *dev, struct sk_buff *skb); +int cxgb4_immdata_send(struct net_device *dev, unsigned int idx, + const void *src, unsigned int len); int cxgb4_crypto_send(struct net_device *dev, struct sk_buff *skb); unsigned int cxgb4_dbfifo_count(const struct net_device *dev, int lpfifo); unsigned int cxgb4_port_chan(const struct net_device *dev); diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c index 6e310a0..32e3779 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/sge.c +++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c @@ -1740,9 +1740,9 @@ static void txq_stop_maperr(struct sge_uld_txq *q) * Stops an offload Tx queue that has become full and modifies the packet * being written to request a wakeup. 
*/ -static void ofldtxq_stop(struct sge_uld_txq *q, struct sk_buff *skb) +static void ofldtxq_stop(struct sge_uld_txq *q, void *src) { - struct fw_wr_hdr *wr = (struct fw_wr_hdr *)skb->data; + struct fw_wr_hdr *wr = (struct fw_wr_hdr *)src; wr->lo |= htonl(FW_WR_EQUEQ_F | FW_WR_EQUIQ_F); q->q.stops++; @@ -2005,6 +2005,100 @@ int cxgb4_ofld_send(struct net_device *dev, struct sk_buff *skb) } EXPORT_SYMBOL(cxgb4_ofld_send); +static void *inline_tx_header(const void *src, + const struct sge_txq *q, + void *pos, int length) +{ + u64 *p; + int left = (void *)q->stat - pos; + + if (likely(length <= left)) { + memcpy(pos, src, length); + pos += length; + } else { + memcpy(pos, src, left); + memcpy(q->desc, src + left, length - left); + pos = (void *)q->desc + (length - left); + } + /* 0-pad to multiple of 16 */ + p = PTR_ALIGN(pos, 8); + if ((uintptr_t)p & 8) { + *p = 0; + return p + 1; + } + return p; +} + +/** + * ofld_xmit_direct - copy a WR into offload queue + * @q: the Tx offload queue + * @src: location of WR + * @len: WR length + * + * Copy an immediate WR into an uncontended SGE offload queue. + */ +static int ofld_xmit_direct(struct sge_uld_txq *q, const void *src, + unsigned int len) +{ + u64 *pos; + int credits; + unsigned int ndesc; + + /* Use the lower limit as the cut-off */ + if (len > MAX_IMM_OFLD_TX_DATA_WR_LEN) { + WARN_ON(1); + return NET_XMIT_DROP; + } + + /* Don't return NET_XMIT_CN here as the current + * implementation doesn't queue the request + * using an skb when the following conditions not met + */ + if (!spin_trylock(&q->sendq.lock)) + return NET_XMIT_DROP; + + if (q->full || !skb_queue_empty(&q->sendq) || + q->service_ofldq_running) { + spin_unlock(&q->sendq.lock); + return NET_XMIT_DROP; + } + ndesc = flits_to_desc(DIV_ROUND_UP(len, 8)); + credits = txq_avail(&q->q) - ndesc; + pos = (u64 *)&q->q.desc[q->q.pidx]; + + /* ofldtxq_stop modifies WR header in-situ */ + inline_tx_header(src, &q->q, pos, len); + if (unlikely(credits < TXQ_STOP_THRES)) + ofldtxq_stop(q, pos); + txq_advance(&q->q, ndesc); + cxgb4_ring_tx_db(q->adap, &q->q, ndesc); + + spin_unlock(&q->sendq.lock); + return NET_XMIT_SUCCESS; +} + +int cxgb4_immdata_send(struct net_device *dev, unsigned int idx, + const void *src, unsigned int len) +{ + struct adapter *adap = netdev2adap(dev); + struct sge_uld_txq_info *txq_info; + struct sge_uld_txq *txq; + int ret; + + local_bh_disable(); + txq_info = adap->sge.uld_txq_info[CXGB4_TX_OFLD]; + if (unlikely(!txq_info)) { + WARN_ON(true); + return NET_XMIT_DROP; + } + txq = &txq_info->uldtxq[idx]; + + ret = ofld_xmit_direct(txq, src, len); + local_bh_enable(); + return net_xmit_eval(ret); +} +EXPORT_SYMBOL(cxgb4_immdata_send); + /** * t4_crypto_send - send crypto packet * @adap: the adapter From patchwork Thu Feb 22 17:52:03 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atul Gupta X-Patchwork-Id: 876786 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 3znMQ22CX3z9ryG for ; Fri, 23 Feb 2018 04:52:42 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S933661AbeBVRwk (ORCPT 
); Thu, 22 Feb 2018 12:52:40 -0500 Received: from stargate.chelsio.com ([12.32.117.8]:3721 "EHLO stargate.chelsio.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S933516AbeBVRwb (ORCPT ); Thu, 22 Feb 2018 12:52:31 -0500 Received: from chumthang.asicdesigners.com (indra-ipmi.blr.asicdesigners.com [10.193.186.96] (may be forged)) by stargate.chelsio.com (8.13.8/8.13.8) with ESMTP id w1MHqA32014680; Thu, 22 Feb 2018 09:52:22 -0800 From: Atul Gupta To: davejwatson@fb.com, davem@davemloft.net, herbert@gondor.apana.org.au Cc: sd@queasysnail.net, linux-crypto@vger.kernel.org, netdev@vger.kernel.org, ganeshgr@chelsio.com Subject: [Crypto v7 07/12] chcr: Key Macro Date: Thu, 22 Feb 2018 23:22:03 +0530 Message-Id: <1519321928-22995-4-git-send-email-atul.gupta@chelsio.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1519321928-22995-1-git-send-email-atul.gupta@chelsio.com> References: <1519321928-22995-1-git-send-email-atul.gupta@chelsio.com> Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Define macro for TLS Key context Signed-off-by: Atul Gupta --- drivers/crypto/chelsio/chcr_algo.h | 42 +++++++++++++++++++++++++++++ drivers/crypto/chelsio/chcr_core.h | 55 +++++++++++++++++++++++++++++++++++++- 2 files changed, 96 insertions(+), 1 deletion(-) diff --git a/drivers/crypto/chelsio/chcr_algo.h b/drivers/crypto/chelsio/chcr_algo.h index d1673a5..f263cd4 100644 --- a/drivers/crypto/chelsio/chcr_algo.h +++ b/drivers/crypto/chelsio/chcr_algo.h @@ -86,6 +86,39 @@ KEY_CONTEXT_OPAD_PRESENT_M) #define KEY_CONTEXT_OPAD_PRESENT_F KEY_CONTEXT_OPAD_PRESENT_V(1U) +#define TLS_KEYCTX_RXFLIT_CNT_S 24 +#define TLS_KEYCTX_RXFLIT_CNT_V(x) ((x) << TLS_KEYCTX_RXFLIT_CNT_S) + +#define TLS_KEYCTX_RXPROT_VER_S 20 +#define TLS_KEYCTX_RXPROT_VER_M 0xf +#define TLS_KEYCTX_RXPROT_VER_V(x) ((x) << TLS_KEYCTX_RXPROT_VER_S) + +#define TLS_KEYCTX_RXCIPH_MODE_S 16 +#define TLS_KEYCTX_RXCIPH_MODE_M 0xf +#define TLS_KEYCTX_RXCIPH_MODE_V(x) ((x) << TLS_KEYCTX_RXCIPH_MODE_S) + +#define TLS_KEYCTX_RXAUTH_MODE_S 12 +#define TLS_KEYCTX_RXAUTH_MODE_M 0xf +#define TLS_KEYCTX_RXAUTH_MODE_V(x) ((x) << TLS_KEYCTX_RXAUTH_MODE_S) + +#define TLS_KEYCTX_RXCIAU_CTRL_S 11 +#define TLS_KEYCTX_RXCIAU_CTRL_V(x) ((x) << TLS_KEYCTX_RXCIAU_CTRL_S) + +#define TLS_KEYCTX_RX_SEQCTR_S 9 +#define TLS_KEYCTX_RX_SEQCTR_M 0x3 +#define TLS_KEYCTX_RX_SEQCTR_V(x) ((x) << TLS_KEYCTX_RX_SEQCTR_S) + +#define TLS_KEYCTX_RX_VALID_S 8 +#define TLS_KEYCTX_RX_VALID_V(x) ((x) << TLS_KEYCTX_RX_VALID_S) + +#define TLS_KEYCTX_RXCK_SIZE_S 3 +#define TLS_KEYCTX_RXCK_SIZE_M 0x7 +#define TLS_KEYCTX_RXCK_SIZE_V(x) ((x) << TLS_KEYCTX_RXCK_SIZE_S) + +#define TLS_KEYCTX_RXMK_SIZE_S 0 +#define TLS_KEYCTX_RXMK_SIZE_M 0x7 +#define TLS_KEYCTX_RXMK_SIZE_V(x) ((x) << TLS_KEYCTX_RXMK_SIZE_S) + #define CHCR_HASH_MAX_DIGEST_SIZE 64 #define CHCR_MAX_SHA_DIGEST_SIZE 64 @@ -176,6 +209,15 @@ KEY_CONTEXT_SALT_PRESENT_V(1) | \ KEY_CONTEXT_CTX_LEN_V((ctx_len))) +#define FILL_KEY_CRX_HDR(ck_size, mk_size, d_ck, opad, ctx_len) \ + htonl(TLS_KEYCTX_RXMK_SIZE_V(mk_size) | \ + TLS_KEYCTX_RXCK_SIZE_V(ck_size) | \ + TLS_KEYCTX_RX_VALID_V(1) | \ + TLS_KEYCTX_RX_SEQCTR_V(3) | \ + TLS_KEYCTX_RXAUTH_MODE_V(4) | \ + TLS_KEYCTX_RXCIPH_MODE_V(2) | \ + TLS_KEYCTX_RXFLIT_CNT_V((ctx_len))) + #define FILL_WR_OP_CCTX_SIZE \ htonl( \ FW_CRYPTO_LOOKASIDE_WR_OPCODE_V( \ diff --git a/drivers/crypto/chelsio/chcr_core.h b/drivers/crypto/chelsio/chcr_core.h index 3c29ee0..77056a9 100644 --- a/drivers/crypto/chelsio/chcr_core.h +++ 
b/drivers/crypto/chelsio/chcr_core.h @@ -65,10 +65,58 @@ struct _key_ctx { __be32 ctx_hdr; u8 salt[MAX_SALT]; - __be64 reserverd; + __be64 iv_to_auth; unsigned char key[0]; }; +#define KEYCTX_TX_WR_IV_S 55 +#define KEYCTX_TX_WR_IV_M 0x1ffULL +#define KEYCTX_TX_WR_IV_V(x) ((x) << KEYCTX_TX_WR_IV_S) +#define KEYCTX_TX_WR_IV_G(x) \ + (((x) >> KEYCTX_TX_WR_IV_S) & KEYCTX_TX_WR_IV_M) + +#define KEYCTX_TX_WR_AAD_S 47 +#define KEYCTX_TX_WR_AAD_M 0xffULL +#define KEYCTX_TX_WR_AAD_V(x) ((x) << KEYCTX_TX_WR_AAD_S) +#define KEYCTX_TX_WR_AAD_G(x) (((x) >> KEYCTX_TX_WR_AAD_S) & \ + KEYCTX_TX_WR_AAD_M) + +#define KEYCTX_TX_WR_AADST_S 39 +#define KEYCTX_TX_WR_AADST_M 0xffULL +#define KEYCTX_TX_WR_AADST_V(x) ((x) << KEYCTX_TX_WR_AADST_S) +#define KEYCTX_TX_WR_AADST_G(x) \ + (((x) >> KEYCTX_TX_WR_AADST_S) & KEYCTX_TX_WR_AADST_M) + +#define KEYCTX_TX_WR_CIPHER_S 30 +#define KEYCTX_TX_WR_CIPHER_M 0x1ffULL +#define KEYCTX_TX_WR_CIPHER_V(x) ((x) << KEYCTX_TX_WR_CIPHER_S) +#define KEYCTX_TX_WR_CIPHER_G(x) \ + (((x) >> KEYCTX_TX_WR_CIPHER_S) & KEYCTX_TX_WR_CIPHER_M) + +#define KEYCTX_TX_WR_CIPHERST_S 23 +#define KEYCTX_TX_WR_CIPHERST_M 0x7f +#define KEYCTX_TX_WR_CIPHERST_V(x) ((x) << KEYCTX_TX_WR_CIPHERST_S) +#define KEYCTX_TX_WR_CIPHERST_G(x) \ + (((x) >> KEYCTX_TX_WR_CIPHERST_S) & KEYCTX_TX_WR_CIPHERST_M) + +#define KEYCTX_TX_WR_AUTH_S 14 +#define KEYCTX_TX_WR_AUTH_M 0x1ff +#define KEYCTX_TX_WR_AUTH_V(x) ((x) << KEYCTX_TX_WR_AUTH_S) +#define KEYCTX_TX_WR_AUTH_G(x) \ + (((x) >> KEYCTX_TX_WR_AUTH_S) & KEYCTX_TX_WR_AUTH_M) + +#define KEYCTX_TX_WR_AUTHST_S 7 +#define KEYCTX_TX_WR_AUTHST_M 0x7f +#define KEYCTX_TX_WR_AUTHST_V(x) ((x) << KEYCTX_TX_WR_AUTHST_S) +#define KEYCTX_TX_WR_AUTHST_G(x) \ + (((x) >> KEYCTX_TX_WR_AUTHST_S) & KEYCTX_TX_WR_AUTHST_M) + +#define KEYCTX_TX_WR_AUTHIN_S 0 +#define KEYCTX_TX_WR_AUTHIN_M 0x7f +#define KEYCTX_TX_WR_AUTHIN_V(x) ((x) << KEYCTX_TX_WR_AUTHIN_S) +#define KEYCTX_TX_WR_AUTHIN_G(x) \ + (((x) >> KEYCTX_TX_WR_AUTHIN_S) & KEYCTX_TX_WR_AUTHIN_M) + struct chcr_wr { struct fw_crypto_lookaside_wr wreq; struct ulp_txpkt ulptx; @@ -90,6 +138,11 @@ struct uld_ctx { struct chcr_dev *dev; }; +struct sge_opaque_hdr { + void *dev; + dma_addr_t addr[MAX_SKB_FRAGS + 1]; +}; + struct chcr_ipsec_req { struct ulp_txpkt ulptx; struct ulptx_idata sc_imm; From patchwork Thu Feb 22 17:52:04 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atul Gupta X-Patchwork-Id: 876793 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 3znMR35jxrz9ryG for ; Fri, 23 Feb 2018 04:53:35 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S933664AbeBVRwp (ORCPT ); Thu, 22 Feb 2018 12:52:45 -0500 Received: from stargate.chelsio.com ([12.32.117.8]:53622 "EHLO stargate.chelsio.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S933660AbeBVRwl (ORCPT ); Thu, 22 Feb 2018 12:52:41 -0500 Received: from chumthang.asicdesigners.com (indra-ipmi.blr.asicdesigners.com [10.193.186.96] (may be forged)) by stargate.chelsio.com (8.13.8/8.13.8) with ESMTP id w1MHqA33014680; Thu, 22 Feb 2018 09:52:26 -0800 From: Atul Gupta To: 
davejwatson@fb.com, davem@davemloft.net, herbert@gondor.apana.org.au Cc: sd@queasysnail.net, linux-crypto@vger.kernel.org, netdev@vger.kernel.org, ganeshgr@chelsio.com Subject: [Crypto v7 08/12] chtls: Key program Date: Thu, 22 Feb 2018 23:22:04 +0530 Message-Id: <1519321928-22995-5-git-send-email-atul.gupta@chelsio.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1519321928-22995-1-git-send-email-atul.gupta@chelsio.com> References: <1519321928-22995-1-git-send-email-atul.gupta@chelsio.com> Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Program the tx and rx key on chip. Signed-off-by: Atul Gupta --- drivers/crypto/chelsio/chtls/chtls_hw.c | 394 ++++++++++++++++++++++++++++++++ 1 file changed, 394 insertions(+) create mode 100644 drivers/crypto/chelsio/chtls/chtls_hw.c diff --git a/drivers/crypto/chelsio/chtls/chtls_hw.c b/drivers/crypto/chelsio/chtls/chtls_hw.c new file mode 100644 index 0000000..c3e17159 --- /dev/null +++ b/drivers/crypto/chelsio/chtls/chtls_hw.c @@ -0,0 +1,394 @@ +/* + * Copyright (c) 2017 Chelsio Communications, Inc. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * Written by: Atul Gupta (atul.gupta@chelsio.com) + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "chtls.h" +#include "chtls_cm.h" + +static void __set_tcb_field_direct(struct chtls_sock *csk, + struct cpl_set_tcb_field *req, u16 word, + u64 mask, u64 val, u8 cookie, int no_reply) +{ + struct ulptx_idata *sc; + + INIT_TP_WR_CPL(req, CPL_SET_TCB_FIELD, csk->tid); + req->wr.wr_mid |= htonl(FW_WR_FLOWID_V(csk->tid)); + req->reply_ctrl = htons(NO_REPLY_V(no_reply) | + QUEUENO_V(csk->rss_qid)); + req->word_cookie = htons(TCB_WORD_V(word) | TCB_COOKIE_V(cookie)); + req->mask = cpu_to_be64(mask); + req->val = cpu_to_be64(val); + sc = (struct ulptx_idata *)(req + 1); + sc->cmd_more = htonl(ULPTX_CMD_V(ULP_TX_SC_NOOP)); + sc->len = htonl(0); +} + +static void __set_tcb_field(struct sock *sk, struct sk_buff *skb, u16 word, + u64 mask, u64 val, u8 cookie, int no_reply) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct cpl_set_tcb_field *req; + struct ulptx_idata *sc; + unsigned int wrlen = roundup(sizeof(*req) + sizeof(*sc), 16); + + req = (struct cpl_set_tcb_field *)__skb_put(skb, wrlen); + __set_tcb_field_direct(csk, req, word, mask, val, cookie, no_reply); + set_wr_txq(skb, CPL_PRIORITY_CONTROL, csk->port_id); +} + +static int chtls_set_tcb_field(struct sock *sk, u16 word, u64 mask, u64 val) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct sk_buff *skb; + struct cpl_set_tcb_field *req; + struct ulptx_idata *sc; + unsigned int wrlen = roundup(sizeof(*req) + sizeof(*sc), 16); + unsigned int credits_needed = DIV_ROUND_UP(wrlen, 16); + + skb = alloc_skb(wrlen, GFP_ATOMIC); + if (!skb) + return -ENOMEM; + + __set_tcb_field(sk, skb, word, mask, val, 0, 1); + set_queue(skb, (csk->txq_idx << 1) | CPL_PRIORITY_DATA, sk); + csk->wr_credits -= credits_needed; + csk->wr_unacked += credits_needed; + enqueue_wr(csk, skb); + cxgb4_ofld_send(csk->egress_dev, skb); + return 0; +} + +/* + * Set one of the t_flags bits in the TCB. 
+ */ +int chtls_set_tcb_tflag(struct sock *sk, unsigned int bit_pos, int val) +{ + return chtls_set_tcb_field(sk, 1, 1ULL << bit_pos, + val << bit_pos); +} + +static int chtls_set_tcb_keyid(struct sock *sk, int keyid) +{ + return chtls_set_tcb_field(sk, 31, 0xFFFFFFFFULL, keyid); +} + +static int chtls_set_tcb_seqno(struct sock *sk) +{ + return chtls_set_tcb_field(sk, 28, ~0ULL, 0); +} + +static int chtls_set_tcb_quiesce(struct sock *sk, int val) +{ + return chtls_set_tcb_field(sk, 1, (1ULL << TF_RX_QUIESCE_S), + TF_RX_QUIESCE_V(val)); +} + +static void *chtls_alloc_mem(unsigned long size) +{ + void *p = kmalloc(size, GFP_KERNEL); + + if (!p) + p = vmalloc(size); + if (p) + memset(p, 0, size); + return p; +} + +static void chtls_free_mem(void *addr) +{ + unsigned long p = (unsigned long)addr; + + if (p >= VMALLOC_START && p < VMALLOC_END) + vfree(addr); + else + kfree(addr); +} + +/* TLS Key bitmap processing */ +int chtls_init_kmap(struct chtls_dev *cdev, struct cxgb4_lld_info *lldi) +{ + unsigned int num_key_ctx, bsize; + + num_key_ctx = (lldi->vr->key.size / TLS_KEY_CONTEXT_SZ); + bsize = BITS_TO_LONGS(num_key_ctx); + + cdev->kmap.size = num_key_ctx; + cdev->kmap.available = bsize; + cdev->kmap.addr = chtls_alloc_mem(sizeof(*cdev->kmap.addr) * + bsize); + if (!cdev->kmap.addr) + return -1; + + cdev->kmap.start = lldi->vr->key.start; + spin_lock_init(&cdev->kmap.lock); + return 0; +} + +void chtls_free_kmap(struct chtls_dev *cdev) +{ + if (cdev->kmap.addr) + chtls_free_mem(cdev->kmap.addr); +} + +static int get_new_keyid(struct chtls_sock *csk, u32 optname) +{ + struct chtls_dev *cdev = csk->cdev; + struct chtls_hws *hws = &csk->tlshws; + struct net_device *dev = csk->egress_dev; + struct adapter *adap = netdev2adap(dev); + int keyid; + + spin_lock_bh(&cdev->kmap.lock); + keyid = find_first_zero_bit(cdev->kmap.addr, cdev->kmap.size); + if (keyid < cdev->kmap.size) { + __set_bit(keyid, cdev->kmap.addr); + if (optname == TLS_RX) + hws->rxkey = keyid; + else + hws->txkey = keyid; + atomic_inc(&adap->chcr_stats.tls_key); + } else { + keyid = -1; + } + spin_unlock_bh(&cdev->kmap.lock); + return keyid; +} + +void free_tls_keyid(struct sock *sk) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct net_device *dev = csk->egress_dev; + struct adapter *adap = netdev2adap(dev); + struct chtls_dev *cdev = csk->cdev; + struct chtls_hws *hws = &csk->tlshws; + + if (!cdev->kmap.addr) + return; + + spin_lock_bh(&cdev->kmap.lock); + if (hws->rxkey >= 0) { + __clear_bit(hws->rxkey, cdev->kmap.addr); + atomic_dec(&adap->chcr_stats.tls_key); + hws->rxkey = -1; + } + if (hws->txkey >= 0) { + __clear_bit(hws->txkey, cdev->kmap.addr); + atomic_dec(&adap->chcr_stats.tls_key); + hws->txkey = -1; + } + spin_unlock_bh(&cdev->kmap.lock); +} + +static unsigned int keyid_to_addr(int start_addr, int keyid) +{ + return ((start_addr + (keyid * TLS_KEY_CONTEXT_SZ)) >> 5); +} + +static void chtls_rxkey_ivauth(struct _key_ctx *kctx) +{ + kctx->iv_to_auth = cpu_to_be64(KEYCTX_TX_WR_IV_V(6ULL) | + KEYCTX_TX_WR_AAD_V(1ULL) | + KEYCTX_TX_WR_AADST_V(5ULL) | + KEYCTX_TX_WR_CIPHER_V(14ULL) | + KEYCTX_TX_WR_CIPHERST_V(0ULL) | + KEYCTX_TX_WR_AUTH_V(14ULL) | + KEYCTX_TX_WR_AUTHST_V(16ULL) | + KEYCTX_TX_WR_AUTHIN_V(16ULL)); +} + +static int chtls_key_info(struct chtls_sock *csk, + struct _key_ctx *kctx, + u32 keylen, u32 optname) +{ + unsigned char key[CHCR_KEYCTX_CIPHER_KEY_SIZE_256]; + struct crypto_cipher *cipher; + struct tls12_crypto_info_aes_gcm_128 *gcm_ctx = + (struct tls12_crypto_info_aes_gcm_128 *) + 
&csk->tlshws.crypto_info; + unsigned char ghash_h[AEAD_H_SIZE]; + int ck_size, key_ctx_size; + int ret; + + key_ctx_size = sizeof(struct _key_ctx) + + roundup(keylen, 16) + AEAD_H_SIZE; + + if (keylen == AES_KEYSIZE_128) { + ck_size = CHCR_KEYCTX_CIPHER_KEY_SIZE_128; + } else if (keylen == AES_KEYSIZE_192) { + ck_size = CHCR_KEYCTX_CIPHER_KEY_SIZE_192; + } else if (keylen == AES_KEYSIZE_256) { + ck_size = CHCR_KEYCTX_CIPHER_KEY_SIZE_256; + } else { + pr_err("GCM: Invalid key length %d\n", keylen); + return -EINVAL; + } + memcpy(key, gcm_ctx->key, keylen); + + /* Calculate the H = CIPH(K, 0 repeated 16 times). + * It will go in key context + */ + cipher = crypto_alloc_cipher("aes", 0, 0); + if (IS_ERR(cipher)) { + ret = -ENOMEM; + goto out; + } + + ret = crypto_cipher_setkey(cipher, key, keylen); + if (ret) + goto out1; + + memset(ghash_h, 0, AEAD_H_SIZE); + crypto_cipher_encrypt_one(cipher, ghash_h, ghash_h); + csk->tlshws.keylen = key_ctx_size; + + /* Copy the Key context */ + if (optname == TLS_RX) { + int key_ctx; + + key_ctx = ((key_ctx_size >> 4) << 3); + kctx->ctx_hdr = FILL_KEY_CRX_HDR(ck_size, + CHCR_KEYCTX_MAC_KEY_SIZE_128, + 0, 0, key_ctx); + chtls_rxkey_ivauth(kctx); + } else { + kctx->ctx_hdr = FILL_KEY_CTX_HDR(ck_size, + CHCR_KEYCTX_MAC_KEY_SIZE_128, + 0, 0, key_ctx_size >> 4); + } + + memcpy(kctx->salt, gcm_ctx->salt, TLS_CIPHER_AES_GCM_128_SALT_SIZE); + memcpy(kctx->key, gcm_ctx->key, keylen); + memcpy(kctx->key + keylen, ghash_h, AEAD_H_SIZE); + /* erase key info from driver */ + memset(gcm_ctx->key, 0, keylen); + +out1: + crypto_free_cipher(cipher); +out: + return ret; +} + +static void chtls_set_scmd(struct chtls_sock *csk) +{ + struct chtls_hws *hws = &csk->tlshws; + + hws->scmd.seqno_numivs = + SCMD_SEQ_NO_CTRL_V(3) | + SCMD_PROTO_VERSION_V(0) | + SCMD_ENC_DEC_CTRL_V(0) | + SCMD_CIPH_AUTH_SEQ_CTRL_V(1) | + SCMD_CIPH_MODE_V(2) | + SCMD_AUTH_MODE_V(4) | + SCMD_HMAC_CTRL_V(0) | + SCMD_IV_SIZE_V(4) | + SCMD_NUM_IVS_V(1); + + hws->scmd.ivgen_hdrlen = + SCMD_IV_GEN_CTRL_V(1) | + SCMD_KEY_CTX_INLINE_V(0) | + SCMD_TLS_FRAG_ENABLE_V(1); +} + +int chtls_setkey(struct chtls_sock *csk, u32 keylen, u32 optname) +{ + struct chtls_dev *cdev = csk->cdev; + struct sock *sk = csk->sk; + struct tls_key_req *kwr; + struct _key_ctx *kctx; + struct sk_buff *skb; + int wrlen, klen, len; + int keyid; + int ret = 0; + + klen = roundup((keylen + AEAD_H_SIZE) + sizeof(*kctx), 32); + wrlen = roundup(sizeof(*kwr), 16); + len = klen + wrlen; + + /* Flush out-standing data before new key takes effect */ + if (optname == TLS_TX) { + lock_sock(sk); + if (skb_queue_len(&csk->txq)) + chtls_push_frames(csk, 0); + release_sock(sk); + } + + keyid = get_new_keyid(csk, optname); + if (keyid < 0) + return -ENOSPC; + + skb = alloc_skb(len, GFP_KERNEL); + if (!skb) + return -ENOMEM; + + kwr = (struct tls_key_req *)__skb_put_zero(skb, len); + kwr->wr.op_to_compl = + cpu_to_be32(FW_WR_OP_V(FW_ULPTX_WR) | FW_WR_COMPL_F | + FW_WR_ATOMIC_V(1U)); + kwr->wr.flowid_len16 = + cpu_to_be32(FW_WR_LEN16_V(DIV_ROUND_UP(len, 16) | + FW_WR_FLOWID_V(csk->tid))); + kwr->wr.protocol = 0; + kwr->wr.mfs = htons(TLS_MFS); + kwr->wr.reneg_to_write_rx = optname; + + /* ulptx command */ + kwr->req.cmd = cpu_to_be32(ULPTX_CMD_V(ULP_TX_MEM_WRITE) | + T5_ULP_MEMIO_ORDER_V(1) | + T5_ULP_MEMIO_IMM_V(1)); + kwr->req.len16 = cpu_to_be32((csk->tid << 8) | + DIV_ROUND_UP(len - sizeof(kwr->wr), 16)); + kwr->req.dlen = cpu_to_be32(ULP_MEMIO_DATA_LEN_V(klen >> 5)); + kwr->req.lock_addr = cpu_to_be32(ULP_MEMIO_ADDR_V(keyid_to_addr + 
(cdev->kmap.start, keyid))); + + /* sub command */ + kwr->sc_imm.cmd_more = cpu_to_be32(ULPTX_CMD_V(ULP_TX_SC_IMM)); + kwr->sc_imm.len = cpu_to_be32(klen); + + /* key info */ + kctx = (struct _key_ctx *)(kwr + 1); + ret = chtls_key_info(csk, kctx, keylen, optname); + + csk->wr_credits -= DIV_ROUND_UP(len, 16); + csk->wr_unacked += DIV_ROUND_UP(len, 16); + enqueue_wr(csk, skb); + cxgb4_ofld_send(csk->egress_dev, skb); + + chtls_set_scmd(csk); + /* Clear quiesce for Rx key */ + if (optname == TLS_RX) { + chtls_set_tcb_keyid(sk, keyid); + chtls_set_tcb_field(sk, 0, + TCB_ULP_RAW_V(TCB_ULP_RAW_M), + TCB_ULP_RAW_V((TF_TLS_KEY_SIZE_V(1) | + TF_TLS_CONTROL_V(1) | + TF_TLS_ACTIVE_V(1) | + TF_TLS_ENABLE_V(1)))); + chtls_set_tcb_seqno(sk); + chtls_set_tcb_quiesce(sk, 0); + csk->tlshws.rxkey = keyid; + } else { + csk->tlshws.tx_seq_no = 0; + csk->tlshws.txkey = keyid; + } + + return ret; +} From patchwork Thu Feb 22 17:52:05 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atul Gupta X-Patchwork-Id: 876787 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 3znMQ855rGz9ryG for ; Fri, 23 Feb 2018 04:52:48 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S933669AbeBVRwq (ORCPT ); Thu, 22 Feb 2018 12:52:46 -0500 Received: from stargate.chelsio.com ([12.32.117.8]:55854 "EHLO stargate.chelsio.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S933441AbeBVRwk (ORCPT ); Thu, 22 Feb 2018 12:52:40 -0500 Received: from chumthang.asicdesigners.com (indra-ipmi.blr.asicdesigners.com [10.193.186.96] (may be forged)) by stargate.chelsio.com (8.13.8/8.13.8) with ESMTP id w1MHqA34014680; Thu, 22 Feb 2018 09:52:29 -0800 From: Atul Gupta To: davejwatson@fb.com, davem@davemloft.net, herbert@gondor.apana.org.au Cc: sd@queasysnail.net, linux-crypto@vger.kernel.org, netdev@vger.kernel.org, ganeshgr@chelsio.com Subject: [Crypto v7 09/12] chtls: CPL handler definition Date: Thu, 22 Feb 2018 23:22:05 +0530 Message-Id: <1519321928-22995-6-git-send-email-atul.gupta@chelsio.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1519321928-22995-1-git-send-email-atul.gupta@chelsio.com> References: <1519321928-22995-1-git-send-email-atul.gupta@chelsio.com> Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org CPL handlers for TLS session, record transmit and receive. Signed-off-by: Atul Gupta --- drivers/crypto/chelsio/chtls/chtls_cm.c | 2041 +++++++++++++++++++++++++++++++ net/ipv4/tcp_minisocks.c | 1 + 2 files changed, 2042 insertions(+) create mode 100644 drivers/crypto/chelsio/chtls/chtls_cm.c diff --git a/drivers/crypto/chelsio/chtls/chtls_cm.c b/drivers/crypto/chelsio/chtls/chtls_cm.c new file mode 100644 index 0000000..1c95e87 --- /dev/null +++ b/drivers/crypto/chelsio/chtls/chtls_cm.c @@ -0,0 +1,2041 @@ +/* + * Copyright (c) 2017 Chelsio Communications, Inc. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ * + * Written by: Atul Gupta (atul.gupta@chelsio.com) + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "chtls.h" +#include "chtls_cm.h" + +extern struct request_sock_ops chtls_rsk_ops; + +/* + * State transitions and actions for close. Note that if we are in SYN_SENT + * we remain in that state as we cannot control a connection while it's in + * SYN_SENT; such connections are allowed to establish and are then aborted. + */ +static unsigned char new_state[16] = { + /* current state: new state: action: */ + /* (Invalid) */ TCP_CLOSE, + /* TCP_ESTABLISHED */ TCP_FIN_WAIT1 | TCP_ACTION_FIN, + /* TCP_SYN_SENT */ TCP_SYN_SENT, + /* TCP_SYN_RECV */ TCP_FIN_WAIT1 | TCP_ACTION_FIN, + /* TCP_FIN_WAIT1 */ TCP_FIN_WAIT1, + /* TCP_FIN_WAIT2 */ TCP_FIN_WAIT2, + /* TCP_TIME_WAIT */ TCP_CLOSE, + /* TCP_CLOSE */ TCP_CLOSE, + /* TCP_CLOSE_WAIT */ TCP_LAST_ACK | TCP_ACTION_FIN, + /* TCP_LAST_ACK */ TCP_LAST_ACK, + /* TCP_LISTEN */ TCP_CLOSE, + /* TCP_CLOSING */ TCP_CLOSING, +}; + +static struct chtls_sock *chtls_sock_create(struct chtls_dev *cdev) +{ + struct chtls_sock *csk = kzalloc(sizeof(*csk), GFP_ATOMIC); + + if (!csk) + return NULL; + + csk->txdata_skb_cache = alloc_skb(TXDATA_SKB_LEN, GFP_ATOMIC); + if (!csk->txdata_skb_cache) { + kfree(csk); + return NULL; + } + + kref_init(&csk->kref); + csk->cdev = cdev; + skb_queue_head_init(&csk->txq); + csk->wr_skb_head = NULL; + csk->wr_skb_tail = NULL; + csk->mss = MAX_MSS; + csk->tlshws.ofld = 1; + csk->tlshws.txkey = -1; + csk->tlshws.rxkey = -1; + csk->tlshws.mfs = TLS_MFS; + skb_queue_head_init(&csk->tlshws.sk_recv_queue); + return csk; +} + +static void chtls_sock_release(struct kref *ref) +{ + struct chtls_sock *csk = + container_of(ref, struct chtls_sock, kref); + + kfree(csk); +} + +static struct net_device *chtls_ipv4_netdev(struct chtls_dev *cdev, + struct sock *sk) +{ + struct net_device *ndev = cdev->ports[0]; + + if (likely(!inet_sk(sk)->inet_rcv_saddr)) + return ndev; + + ndev = ip_dev_find(&init_net, inet_sk(sk)->inet_rcv_saddr); + if (!ndev) + return NULL; + + if (is_vlan_dev(ndev)) + return vlan_dev_real_dev(ndev); + return ndev; +} + +static void assign_rxopt(struct sock *sk, unsigned int opt) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct tcp_sock *tp = tcp_sk(sk); + const struct chtls_dev *cdev; + + cdev = csk->cdev; + tp->tcp_header_len = sizeof(struct tcphdr); + tp->rx_opt.mss_clamp = cdev->mtus[TCPOPT_MSS_G(opt)] - 40; + tp->mss_cache = tp->rx_opt.mss_clamp; + tp->rx_opt.tstamp_ok = TCPOPT_TSTAMP_G(opt); + tp->rx_opt.snd_wscale = TCPOPT_SACK_G(opt); + tp->rx_opt.wscale_ok = TCPOPT_WSCALE_OK_G(opt); + SND_WSCALE(tp) = TCPOPT_SND_WSCALE_G(opt); + if (!tp->rx_opt.wscale_ok) + tp->rx_opt.rcv_wscale = 0; + if (tp->rx_opt.tstamp_ok) { + tp->tcp_header_len += TCPOLEN_TSTAMP_ALIGNED; + tp->rx_opt.mss_clamp -= TCPOLEN_TSTAMP_ALIGNED; + } else if (csk->opt2 & TSTAMPS_EN_F) { + csk->opt2 &= ~TSTAMPS_EN_F; + csk->mtu_idx = TCPOPT_MSS_G(opt); + } +} + +static void chtls_purge_rcv_queue(struct sock *sk) +{ + struct sk_buff *skb; + + while ((skb = __skb_dequeue(&sk->sk_receive_queue)) != NULL) { + skb_dst_set(skb, (void *)NULL); + kfree_skb(skb); + } +} + +static void chtls_purge_write_queue(struct sock *sk) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct sk_buff *skb; + + while ((skb = __skb_dequeue(&csk->txq))) { + sk->sk_wmem_queued -= skb->truesize; + __kfree_skb(skb); + } +} + 
+static void chtls_purge_receive_queue(struct sock *sk) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct chtls_hws *tlsk = &csk->tlshws; + struct sk_buff *skb; + + while ((skb = __skb_dequeue(&tlsk->sk_recv_queue)) != NULL) { + skb_dst_set(skb, NULL); + kfree_skb(skb); + } +} + +static void abort_arp_failure(void *handle, struct sk_buff *skb) +{ + struct chtls_dev *cdev = (struct chtls_dev *)handle; + struct cpl_abort_req *req = cplhdr(skb); + + req->cmd = CPL_ABORT_NO_RST; + cxgb4_ofld_send(cdev->lldi->ports[0], skb); +} + +static struct sk_buff *alloc_ctrl_skb(struct sk_buff *skb, int len) +{ + if (likely(skb && !skb_shared(skb) && !skb_cloned(skb))) { + __skb_trim(skb, 0); + refcount_add(2, &skb->users); + } else { + skb = alloc_skb(len, GFP_KERNEL | __GFP_NOFAIL); + } + return skb; +} + +static void chtls_send_abort(struct sock *sk, int mode, struct sk_buff *skb) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct tcp_sock *tp = tcp_sk(sk); + struct cpl_abort_req *req; + + if (!skb) + skb = alloc_ctrl_skb(csk->txdata_skb_cache, sizeof(*req)); + + req = (struct cpl_abort_req *)skb_put(skb, sizeof(*req)); + INIT_TP_WR_CPL(req, CPL_ABORT_REQ, csk->tid); + set_queue(skb, (csk->txq_idx << 1) | CPL_PRIORITY_DATA, sk); + req->rsvd0 = htonl(tp->snd_nxt); + req->rsvd1 = !csk_flag_nochk(csk, CSK_TX_DATA_SENT); + req->cmd = mode; + t4_set_arp_err_handler(skb, csk->cdev, abort_arp_failure); + send_or_defer(sk, tp, skb, mode == CPL_ABORT_SEND_RST); +} + +static int chtls_send_reset(struct sock *sk, int mode, struct sk_buff *skb) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + + if (unlikely(csk_flag_nochk(csk, CSK_ABORT_SHUTDOWN) || + !csk->cdev)) { + if (sk->sk_state == TCP_SYN_RECV) + csk_set_flag(csk, CSK_RST_ABORTED); + goto out; + } + + if (!csk_flag_nochk(csk, CSK_TX_DATA_SENT)) { + struct tcp_sock *tp = tcp_sk(sk); + + if (send_tx_flowc_wr(sk, 0, tp->snd_nxt, tp->rcv_nxt) < 0) + WARN_ONCE(1, "send tx flowc error"); + csk_set_flag(csk, CSK_TX_DATA_SENT); + } + + csk_set_flag(csk, CSK_ABORT_RPL_PENDING); + chtls_purge_write_queue(sk); + + csk_set_flag(csk, CSK_ABORT_SHUTDOWN); + if (sk->sk_state != TCP_SYN_RECV) + chtls_send_abort(sk, mode, skb); + else + goto out; + + return 0; +out: + if (skb) + kfree_skb(skb); + return 1; +} + +static void release_tcp_port(struct sock *sk) +{ + if (inet_csk(sk)->icsk_bind_hash) + inet_put_port(sk); +} + +static void tcp_uncork(struct sock *sk) +{ + struct tcp_sock *tp = tcp_sk(sk); + + if (tp->nonagle & TCP_NAGLE_CORK) { + tp->nonagle &= ~TCP_NAGLE_CORK; + chtls_tcp_push(sk, 0); + } +} + +static void chtls_close_conn(struct sock *sk) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + unsigned int len = roundup(sizeof(struct cpl_close_con_req), 16); + struct cpl_close_con_req *req; + unsigned int tid = csk->tid; + struct sk_buff *skb; + + skb = alloc_skb(len, GFP_KERNEL | __GFP_NOFAIL); + req = (struct cpl_close_con_req *)__skb_put(skb, len); + memset(req, 0, len); + req->wr.wr_hi = htonl(FW_WR_OP_V(FW_TP_WR) | + FW_WR_IMMDLEN_V(sizeof(*req) - + sizeof(req->wr))); + req->wr.wr_mid = htonl(FW_WR_LEN16_V(DIV_ROUND_UP(sizeof(*req), 16)) | + FW_WR_FLOWID_V(tid)); + + OPCODE_TID(req) = htonl(MK_OPCODE_TID(CPL_CLOSE_CON_REQ, tid)); + + tcp_uncork(sk); + skb_entail(sk, skb, ULPCB_FLAG_NO_HDR | ULPCB_FLAG_NO_APPEND); + if (sk->sk_state != TCP_SYN_SENT) + chtls_push_frames(csk, 1); +} + +/* + * Perform a state transition during close and return the actions indicated + * for the transition. 
Do not make this function inline; the main reason + * it exists at all is to avoid multiple inlining of tcp_set_state. + */ +static int make_close_transition(struct sock *sk) +{ + int next = (int)new_state[sk->sk_state]; + + tcp_set_state(sk, next & TCP_STATE_MASK); + return next & TCP_ACTION_FIN; +} + +void chtls_close(struct sock *sk, long timeout) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + int data_lost, prev_state; + + lock_sock(sk); + if (sk->sk_prot->close != chtls_close) { + release_sock(sk); + return sk->sk_prot->close(sk, timeout); + } + + sk->sk_shutdown |= SHUTDOWN_MASK; + + data_lost = skb_queue_len(&sk->sk_receive_queue); + data_lost |= skb_queue_len(&csk->tlshws.sk_recv_queue); + chtls_purge_receive_queue(sk); + chtls_purge_rcv_queue(sk); + + if (sk->sk_state == TCP_CLOSE) { + goto wait; + } else if (data_lost || sk->sk_state == TCP_SYN_SENT) { + chtls_send_reset(sk, CPL_ABORT_SEND_RST, NULL); + release_tcp_port(sk); + goto unlock; + } else if (sock_flag(sk, SOCK_LINGER) && !sk->sk_lingertime) { + sk->sk_prot->disconnect(sk, 0); + } else if (make_close_transition(sk)) { + chtls_close_conn(sk); + } +wait: + if (timeout) + sk_stream_wait_close(sk, timeout); + +unlock: + prev_state = sk->sk_state; + sock_hold(sk); + sock_orphan(sk); + + release_sock(sk); + + local_bh_disable(); + bh_lock_sock(sk); + + if (prev_state != TCP_CLOSE && sk->sk_state == TCP_CLOSE) + goto out; + + if (sk->sk_state == TCP_FIN_WAIT2 && tcp_sk(sk)->linger2 < 0 && + !csk_flag(sk, CSK_ABORT_SHUTDOWN)) { + struct sk_buff *skb; + + skb = alloc_skb(sizeof(struct cpl_abort_req), GFP_ATOMIC); + if (skb) + chtls_send_reset(sk, CPL_ABORT_SEND_RST, skb); + } + + if (sk->sk_state == TCP_CLOSE) + inet_csk_destroy_sock(sk); + +out: + bh_unlock_sock(sk); + local_bh_enable(); + sock_put(sk); +} + +/* + * Wait until a socket enters one of the given states. + */ +static int wait_for_states(struct sock *sk, unsigned int states) +{ + struct socket_wq _sk_wq; + long current_timeo = 200; + DECLARE_WAITQUEUE(wait, current); + int err = 0; + + /* + * We want this to work even when there's no associated struct socket. + * In that case we provide a temporary wait_queue_head_t. 
+ */ + if (!sk->sk_wq) { + init_waitqueue_head(&_sk_wq.wait); + _sk_wq.fasync_list = NULL; + init_rcu_head_on_stack(&_sk_wq.rcu); + RCU_INIT_POINTER(sk->sk_wq, &_sk_wq); + } + + add_wait_queue(sk_sleep(sk), &wait); + while (!sk_in_state(sk, states)) { + if (!current_timeo) { + err = -EBUSY; + break; + } + if (signal_pending(current)) { + err = sock_intr_errno(current_timeo); + break; + } + set_current_state(TASK_UNINTERRUPTIBLE); + release_sock(sk); + if (!sk_in_state(sk, states)) + current_timeo = schedule_timeout(current_timeo); + __set_current_state(TASK_RUNNING); + lock_sock(sk); + } + remove_wait_queue(sk_sleep(sk), &wait); + + if (rcu_dereference(sk->sk_wq) == &_sk_wq) + sk->sk_wq = NULL; + return err; +} + +int chtls_disconnect(struct sock *sk, int flags) +{ + struct tcp_sock *tp = tcp_sk(sk); + struct chtls_sock *csk; + int err; + + if (sk->sk_prot->disconnect != chtls_disconnect) + return sk->sk_prot->disconnect(sk, flags); + + csk = rcu_dereference_sk_user_data(sk); + chtls_purge_receive_queue(sk); + chtls_purge_rcv_queue(sk); + chtls_purge_write_queue(sk); + + if (sk->sk_state != TCP_CLOSE) { + sk->sk_err = ECONNRESET; + chtls_send_reset(sk, CPL_ABORT_SEND_RST, NULL); + err = wait_for_states(sk, TCPF_CLOSE); + if (err) + return err; + } + if (sk->sk_prot->disconnect != chtls_disconnect) + return sk->sk_prot->disconnect(sk, flags); + + chtls_purge_receive_queue(sk); + chtls_purge_rcv_queue(sk); + tp->max_window = 0xFFFF << (tp->rx_opt.snd_wscale); + return tcp_disconnect(sk, flags); +} + +#define SHUTDOWN_ELIGIBLE_STATE (TCPF_ESTABLISHED | \ + TCPF_SYN_RECV | TCPF_CLOSE_WAIT) +void chtls_shutdown(struct sock *sk, int how) +{ + if (sk->sk_prot->shutdown != chtls_shutdown) + return sk->sk_prot->shutdown(sk, how); + + if ((how & SEND_SHUTDOWN) && + sk_in_state(sk, SHUTDOWN_ELIGIBLE_STATE) && + make_close_transition(sk)) + chtls_close_conn(sk); +} + +void chtls_destroy_sock(struct sock *sk) +{ + struct chtls_sock *csk; + + if (sk->sk_prot->destroy != chtls_destroy_sock) + return sk->sk_prot->destroy(sk); + + csk = rcu_dereference_sk_user_data(sk); + chtls_purge_receive_queue(sk); + csk->ulp_mode = ULP_MODE_NONE; + chtls_purge_write_queue(sk); + free_tls_keyid(sk); + kref_put(&csk->kref, chtls_sock_release); + + sk->sk_prot = &tcp_prot; + sk->sk_prot->destroy(sk); +} + +static void reset_listen_child(struct sock *child) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(child); + struct sk_buff *skb; + + skb = alloc_ctrl_skb(csk->txdata_skb_cache, + sizeof(struct cpl_abort_req)); + + chtls_send_reset(child, CPL_ABORT_SEND_RST, skb); + sock_orphan(child); + INC_ORPHAN_COUNT(child); + if (child->sk_state == TCP_CLOSE) + inet_csk_destroy_sock(child); +} + +static void chtls_disconnect_acceptq(struct sock *listen_sk) +{ + struct request_sock **pprev; + + pprev = ACCEPT_QUEUE(listen_sk); + while (*pprev) { + struct request_sock *req = *pprev; + + if (req->rsk_ops == &chtls_rsk_ops) { + struct sock *child = req->sk; + + *pprev = req->dl_next; + sk_acceptq_removed(listen_sk); + reqsk_put(req); + sock_hold(child); + local_bh_disable(); + bh_lock_sock(child); + release_tcp_port(child); + reset_listen_child(child); + bh_unlock_sock(child); + local_bh_enable(); + sock_put(child); + } else { + pprev = &req->dl_next; + } + } +} + +static int listen_hashfn(const struct sock *sk) +{ + return ((unsigned long)sk >> 10) & (LISTEN_INFO_HASH_SIZE - 1); +} + +static struct listen_info *listen_hash_add(struct chtls_dev *cdev, + struct sock *sk, + unsigned int stid) +{ + struct listen_info *p = 
kmalloc(sizeof(*p), GFP_KERNEL); + + if (p) { + int key = listen_hashfn(sk); + + p->sk = sk; + p->stid = stid; + spin_lock(&cdev->listen_lock); + p->next = cdev->listen_hash_tab[key]; + cdev->listen_hash_tab[key] = p; + spin_unlock(&cdev->listen_lock); + } + return p; +} + +static int listen_hash_find(struct chtls_dev *cdev, + struct sock *sk) +{ + int key = listen_hashfn(sk); + struct listen_info *p; + int stid = -1; + + spin_lock(&cdev->listen_lock); + for (p = cdev->listen_hash_tab[key]; p; p = p->next) + if (p->sk == sk) { + stid = p->stid; + break; + } + spin_unlock(&cdev->listen_lock); + return stid; +} + +static int listen_hash_del(struct chtls_dev *cdev, + struct sock *sk) +{ + int key = listen_hashfn(sk); + struct listen_info *p, **prev = &cdev->listen_hash_tab[key]; + int stid = -1; + + spin_lock(&cdev->listen_lock); + for (p = *prev; p; prev = &p->next, p = p->next) + if (p->sk == sk) { + stid = p->stid; + *prev = p->next; + kfree(p); + break; + } + spin_unlock(&cdev->listen_lock); + return stid; +} + +int chtls_listen_start(struct chtls_dev *cdev, struct sock *sk) +{ + struct net_device *ndev; + struct listen_ctx *ctx; + struct adapter *adap; + struct port_info *pi; + int stid; + int ret; + + if (sk->sk_family != PF_INET) + return -EAGAIN; + + rcu_read_lock(); + ndev = chtls_ipv4_netdev(cdev, sk); + rcu_read_unlock(); + if (!ndev) + return -EBADF; + + pi = netdev_priv(ndev); + adap = pi->adapter; + if (!(adap->flags & FULL_INIT_DONE)) + return -EBADF; + + if (listen_hash_find(cdev, sk) >= 0) /* already have it */ + return -EADDRINUSE; + + ctx = kmalloc(sizeof(*ctx), GFP_KERNEL); + if (!ctx) + return -ENOMEM; + + __module_get(THIS_MODULE); + ctx->lsk = sk; + ctx->cdev = cdev; + ctx->state = T4_LISTEN_START_PENDING; + + if (cdev->lldi->enable_fw_ofld_conn && + sk->sk_family == PF_INET) + stid = cxgb4_alloc_sftid(cdev->tids, sk->sk_family, ctx); + else + stid = cxgb4_alloc_stid(cdev->tids, sk->sk_family, ctx); + + if (stid < 0) + goto free_ctx; + + sock_hold(sk); + if (!listen_hash_add(cdev, sk, stid)) + goto free_stid; + + if (cdev->lldi->enable_fw_ofld_conn) { + ret = cxgb4_create_server_filter(ndev, stid, + inet_sk(sk)->inet_rcv_saddr, + inet_sk(sk)->inet_sport, 0, + cdev->lldi->rxq_ids[0], 0, 0); + } else { + ret = cxgb4_create_server(ndev, stid, + inet_sk(sk)->inet_rcv_saddr, + inet_sk(sk)->inet_sport, 0, + cdev->lldi->rxq_ids[0]); + } + if (ret > 0) + ret = net_xmit_errno(ret); + if (ret) + goto del_hash; + return 0; +del_hash: + listen_hash_del(cdev, sk); +free_stid: + cxgb4_free_stid(cdev->tids, stid, sk->sk_family); + sock_put(sk); +free_ctx: + kfree(ctx); + module_put(THIS_MODULE); + return -EBADF; +} + +void chtls_listen_stop(struct chtls_dev *cdev, struct sock *sk) +{ + int stid; + + stid = listen_hash_del(cdev, sk); + if (stid < 0) + return; + + if (cdev->lldi->enable_fw_ofld_conn) { + cxgb4_remove_server_filter(cdev->lldi->ports[0], stid, + cdev->lldi->rxq_ids[0], 0); + } else { + cxgb4_remove_server(cdev->lldi->ports[0], stid, + cdev->lldi->rxq_ids[0], 0); + } + chtls_disconnect_acceptq(sk); +} + +static int chtls_pass_open_rpl(struct chtls_dev *cdev, struct sk_buff *skb) +{ + struct cpl_pass_open_rpl *rpl = cplhdr(skb) + RSS_HDR; + unsigned int stid = GET_TID(rpl); + struct listen_ctx *listen_ctx; + + listen_ctx = (struct listen_ctx *)lookup_stid(cdev->tids, stid); + if (!listen_ctx) + return 1; + + if (listen_ctx->state == T4_LISTEN_START_PENDING) { + listen_ctx->state = T4_LISTEN_STARTED; + return 1; + } + + if (rpl->status != CPL_ERR_NONE) { + 
pr_info("Unexpected PASS_OPEN_RPL status %u for STID %u\n", + rpl->status, stid); + return 1; + } + cxgb4_free_stid(cdev->tids, stid, listen_ctx->lsk->sk_family); + sock_put(listen_ctx->lsk); + kfree(listen_ctx); + module_put(THIS_MODULE); + + return 0; +} + +static int chtls_close_listsrv_rpl(struct chtls_dev *cdev, struct sk_buff *skb) +{ + struct cpl_close_listsvr_rpl *rpl = cplhdr(skb) + RSS_HDR; + unsigned int stid = GET_TID(rpl); + void *data = lookup_stid(cdev->tids, stid); + struct listen_ctx *listen_ctx = (struct listen_ctx *)data; + + if (rpl->status != CPL_ERR_NONE) { + pr_info("Unexpected CLOSE_LISTSRV_RPL status %u for STID %u\n", + rpl->status, stid); + return 1; + } + + cxgb4_free_stid(cdev->tids, stid, listen_ctx->lsk->sk_family); + sock_put(listen_ctx->lsk); + kfree(listen_ctx); + module_put(THIS_MODULE); + + return 0; +} + +static void conn_remove_handle(struct chtls_dev *cdev, + int tid) +{ + spin_lock_bh(&cdev->aidr_lock); + idr_remove(&cdev->aidr, tid); + spin_unlock_bh(&cdev->aidr_lock); +} + +static void free_atid(struct chtls_sock *csk, struct chtls_dev *cdev, + unsigned int atid) +{ + struct tid_info *tids = cdev->tids; + + conn_remove_handle(cdev, atid); + cxgb4_free_atid(tids, atid); + sock_put(csk->sk); + kref_put(&csk->kref, chtls_sock_release); +} + +static void chtls_release_resources(struct sock *sk) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct chtls_dev *cdev = csk->cdev; + unsigned int tid = csk->tid; + struct tid_info *tids; + + if (!cdev) + return; + + tids = cdev->tids; + kfree_skb(csk->txdata_skb_cache); + csk->txdata_skb_cache = NULL; + + if (csk->l2t_entry) { + cxgb4_l2t_release(csk->l2t_entry); + csk->l2t_entry = NULL; + } + + if (sk->sk_state == TCP_SYN_SENT) { + free_atid(csk, cdev, tid); + __skb_queue_purge(&csk->ooo_queue); + } else { + cxgb4_remove_tid(tids, csk->port_id, tid, sk->sk_family); + sock_put(sk); + } +} + +static void cleanup_syn_rcv_conn(struct sock *child, struct sock *parent) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(child); + struct request_sock *req = csk->passive_reap_next; + + reqsk_queue_removed(&inet_csk(parent)->icsk_accept_queue, req); + chtls_reqsk_free(req); + csk->passive_reap_next = NULL; +} + +static void chtls_conn_done(struct sock *sk) +{ + if (sock_flag(sk, SOCK_DEAD)) + chtls_purge_rcv_queue(sk); + sk_wakeup_sleepers(sk, 0); + tcp_done(sk); +} + +static void do_abort_syn_rcv(struct sock *child, struct sock *parent) +{ + /* + * If the server is still open we clean up the child connection, + * otherwise the server already did the clean up as it was purging + * its SYN queue and the skb was just sitting in its backlog. + */ + if (likely(parent->sk_state == TCP_LISTEN)) { + cleanup_syn_rcv_conn(child, parent); + /* Without the below call to sock_orphan, + * we leak the socket resource with syn_flood test + * as inet_csk_destroy_sock will not be called + * in tcp_done since SOCK_DEAD flag is not set. + * Kernel handles this differently where new socket is + * created only after 3 way handshake is done. 
+ */ + sock_orphan(child); + percpu_counter_inc((child)->sk_prot->orphan_count); + chtls_release_resources(child); + chtls_conn_done(child); + } else { + if (csk_flag(child, CSK_RST_ABORTED)) { + chtls_release_resources(child); + chtls_conn_done(child); + } + } +} + +static void pass_open_abort(struct sock *child, struct sock *parent, + struct sk_buff *skb) +{ + do_abort_syn_rcv(child, parent); + kfree_skb(skb); +} + +static void bl_pass_open_abort(struct sock *lsk, struct sk_buff *skb) +{ + pass_open_abort(skb->sk, lsk, skb); +} + +static void chtls_pass_open_arp_failure(struct sock *sk, + struct sk_buff *skb) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct chtls_dev *cdev = csk->cdev; + const struct request_sock *oreq; + struct sock *parent; + void *data; + + /* + * If the connection is being aborted due to the parent listening + * socket going away there's nothing to do, the ABORT_REQ will close + * the connection. + */ + if (csk_flag(sk, CSK_ABORT_RPL_PENDING)) { + kfree_skb(skb); + return; + } + + oreq = csk->passive_reap_next; + data = lookup_stid(cdev->tids, oreq->ts_recent); + parent = ((struct listen_ctx *)data)->lsk; + + bh_lock_sock(parent); + if (!sock_owned_by_user(parent)) { + pass_open_abort(sk, parent, skb); + } else { + BLOG_SKB_CB(skb)->backlog_rcv = bl_pass_open_abort; + __sk_add_backlog(parent, skb); + } + bh_unlock_sock(parent); +} + +static void chtls_accept_rpl_arp_failure(void *handle, + struct sk_buff *skb) +{ + struct sock *sk = (struct sock *)handle; + + sock_hold(sk); + process_cpl_msg(chtls_pass_open_arp_failure, sk, skb); + sock_put(sk); +} + +static unsigned int chtls_select_mss(const struct chtls_sock *csk, + unsigned int pmtu, + struct cpl_pass_accept_req *req) +{ + struct sock *sk = csk->sk; + struct tcp_sock *tp = tcp_sk(sk); + struct dst_entry *dst = __sk_dst_get(sk); + struct chtls_dev *cdev = csk->cdev; + unsigned int iphdrsz; + unsigned int tcpoptsz = 0; + unsigned int mtu_idx; + unsigned int mss = ntohs(req->tcpopt.mss); + + iphdrsz = sizeof(struct iphdr) + sizeof(struct tcphdr); + if (req->tcpopt.tstamp) + tcpoptsz += round_up(TCPOLEN_TIMESTAMP, 4); + + tp->advmss = dst_metric_advmss(dst); + if (USER_MSS(tp) && tp->advmss > USER_MSS(tp)) + tp->advmss = USER_MSS(tp); + if (tp->advmss > pmtu - iphdrsz) + tp->advmss = pmtu - iphdrsz; + if (mss && tp->advmss > mss) + tp->advmss = mss; + + tp->advmss = cxgb4_best_aligned_mtu(cdev->lldi->mtus, + iphdrsz + tcpoptsz, + tp->advmss - tcpoptsz, + 8, &mtu_idx); + tp->advmss -= iphdrsz; + + inet_csk(sk)->icsk_pmtu_cookie = pmtu; + return mtu_idx; +} + +static unsigned int select_rcv_wnd(struct chtls_sock *csk) +{ + struct sock *sk = csk->sk; + unsigned int wnd = tcp_full_space(sk); + unsigned int rcvwnd; + + if (wnd < MIN_RCV_WND) + wnd = MIN_RCV_WND; + + rcvwnd = MAX_RCV_WND; + + csk_set_flag(csk, CSK_UPDATE_RCV_WND); + return min(wnd, rcvwnd); +} + +static void chtls_pass_accept_rpl(struct sk_buff *skb, + struct cpl_pass_accept_req *req, + unsigned int tid) + +{ + struct cpl_t5_pass_accept_rpl *rpl5; + unsigned int len = roundup(sizeof(*rpl5), 16); + struct cxgb4_lld_info *lldi; + const struct tcphdr *tcph; + const struct tcp_sock *tp; + struct chtls_sock *csk; + struct sock *sk; + u64 opt0; + u32 opt2, hlen; + + sk = skb->sk; + tp = tcp_sk(sk); + csk = sk->sk_user_data; + csk->tid = tid; + lldi = csk->cdev->lldi; + + rpl5 = __skb_put_zero(skb, len); + INIT_TP_WR(rpl5, tid); + + OPCODE_TID(rpl5) = cpu_to_be32(MK_OPCODE_TID(CPL_PASS_ACCEPT_RPL, + csk->tid)); + csk->mtu_idx = 
chtls_select_mss(csk, dst_mtu(__sk_dst_get(sk)), + req); + opt0 = TCAM_BYPASS_F | + WND_SCALE_V((tp)->rx_opt.rcv_wscale) | + MSS_IDX_V(csk->mtu_idx) | + L2T_IDX_V(csk->l2t_entry->idx) | + NAGLE_V(!(tp->nonagle & TCP_NAGLE_OFF)) | + TX_CHAN_V(csk->tx_chan) | + SMAC_SEL_V(csk->smac_idx) | + DSCP_V(csk->tos >> 2) | + ULP_MODE_V(ULP_MODE_TLS) | + RCV_BUFSIZ_V(min(tp->rcv_wnd >> 10, RCV_BUFSIZ_M)); + + opt2 = RX_CHANNEL_V(0) | + RSS_QUEUE_VALID_F | RSS_QUEUE_V(csk->rss_qid); + + if (!is_t5(lldi->adapter_type)) + opt2 |= RX_FC_DISABLE_F; + if (req->tcpopt.tstamp) + opt2 |= TSTAMPS_EN_F; + if (req->tcpopt.sack) + opt2 |= SACK_EN_F; + hlen = ntohl(req->hdr_len); + + tcph = (struct tcphdr *)((u8 *)(req + 1) + + T6_ETH_HDR_LEN_G(hlen) + T6_IP_HDR_LEN_G(hlen)); + if (tcph->ece && tcph->cwr) + opt2 |= CCTRL_ECN_V(1); + opt2 |= CONG_CNTRL_V(CONG_ALG_NEWRENO); + opt2 |= T5_ISS_F; + opt2 |= T5_OPT_2_VALID_F; + rpl5->opt0 = cpu_to_be64(opt0); + rpl5->opt2 = cpu_to_be32(opt2); + rpl5->iss = cpu_to_be32((prandom_u32() & ~7UL) - 1); + set_wr_txq(skb, CPL_PRIORITY_SETUP, csk->port_id); + t4_set_arp_err_handler(skb, sk, chtls_accept_rpl_arp_failure); + cxgb4_l2t_send(csk->egress_dev, skb, csk->l2t_entry); +} + +static void inet_inherit_port(struct inet_hashinfo *hash_info, + struct sock *lsk, struct sock *newsk) +{ + local_bh_disable(); + __inet_inherit_port(lsk, newsk); + local_bh_enable(); +} + +static int chtls_backlog_rcv(struct sock *sk, struct sk_buff *skb) +{ + if (skb->protocol) { + kfree_skb(skb); + return 0; + } + BLOG_SKB_CB(skb)->backlog_rcv(sk, skb); + return 0; +} + +static struct sock *chtls_recv_sock(struct sock *lsk, + struct request_sock *oreq, + void *network_hdr, + const struct cpl_pass_accept_req *req, + struct chtls_dev *cdev) + +{ + const struct iphdr *iph = (const struct iphdr *)network_hdr; + struct dst_entry *dst = NULL; + const struct tcphdr *tcph; + struct inet_sock *newinet; + struct net_device *ndev; + struct chtls_sock *csk; + struct neighbour *n; + struct tcp_sock *tp; + struct sock *newsk; + u16 port_id; + int rxq_idx; + int step; + + newsk = tcp_create_openreq_child(lsk, oreq, cdev->askb); + if (!newsk) + goto free_oreq; + + dst = inet_csk_route_child_sock(lsk, newsk, oreq); + if (!dst) + goto free_sk; + + tcph = (struct tcphdr *)(iph + 1); + n = dst_neigh_lookup(dst, &iph->saddr); + if (!n) + goto free_sk; + + ndev = n->dev; + if (!ndev) + goto free_dst; + port_id = cxgb4_port_idx(ndev); + + csk = chtls_sock_create(cdev); + if (!csk) + goto free_dst; + + csk->l2t_entry = cxgb4_l2t_get(cdev->lldi->l2t, n, ndev, 0); + if (!csk->l2t_entry) + goto free_csk; + + newsk->sk_user_data = csk; + newsk->sk_backlog_rcv = chtls_backlog_rcv; + + tp = tcp_sk(newsk); + newinet = inet_sk(newsk); + + newinet->inet_daddr = iph->saddr; + newinet->inet_rcv_saddr = iph->daddr; + newinet->inet_saddr = iph->daddr; + + oreq->ts_recent = PASS_OPEN_TID_G(ntohl(req->tos_stid)); + sk_setup_caps(newsk, dst); + csk->sk = newsk; + csk->passive_reap_next = oreq; + csk->tx_chan = cxgb4_port_chan(ndev); + csk->port_id = port_id; + csk->egress_dev = ndev; + csk->tos = PASS_OPEN_TOS_G(ntohl(req->tos_stid)); + csk->ulp_mode = ULP_MODE_TLS; + step = cdev->lldi->nrxq / cdev->lldi->nchan; + csk->rss_qid = cdev->lldi->rxq_ids[port_id * step]; + rxq_idx = port_id * step; + csk->txq_idx = (rxq_idx < cdev->lldi->ntxq) ? 
rxq_idx : + port_id * step; + csk->sndbuf = newsk->sk_sndbuf; + csk->smac_idx = cxgb4_tp_smt_idx(cdev->lldi->adapter_type, + cxgb4_port_viid(ndev)); + tp->rcv_wnd = select_rcv_wnd(csk); + bh_unlock_sock(newsk); /* tcp_create_openreq_child ->sk_clone_lock */ + + neigh_release(n); + lsk->sk_prot->hash(newsk); + inet_inherit_port(&tcp_hashinfo, lsk, newsk); + + return newsk; +free_csk: + chtls_sock_release(&csk->kref); +free_dst: + dst_release(dst); +free_sk: + inet_csk_prepare_forced_close(newsk); + tcp_done(newsk); +free_oreq: + chtls_reqsk_free(oreq); + return NULL; +} + +/* + * Populate a TID_RELEASE WR. The skb must already be properly sized. + */ +static void mk_tid_release(struct sk_buff *skb, + unsigned int chan, unsigned int tid) +{ + unsigned int len = roundup(sizeof(struct cpl_tid_release), 16); + struct cpl_tid_release *req; + + req = (struct cpl_tid_release *)__skb_put(skb, len); + memset(req, 0, len); + set_wr_txq(skb, CPL_PRIORITY_SETUP, chan); + INIT_TP_WR_CPL(req, CPL_TID_RELEASE, tid); +} + +static int chtls_get_module(struct sock *sk) +{ + struct inet_connection_sock *icsk = inet_csk(sk); + + if (!try_module_get(icsk->icsk_ulp_ops->owner)) + return -1; + + return 0; +} + +static void chtls_pass_accept_request(struct sock *sk, + struct sk_buff *skb) +{ + struct cpl_pass_accept_req *req = cplhdr(skb) + RSS_HDR; + struct chtls_dev *cdev = BLOG_SKB_CB(skb)->cdev; + struct cpl_t5_pass_accept_rpl *rpl; + unsigned int len = roundup(sizeof(*rpl), 16); + struct request_sock *oreq = NULL; + unsigned int tid = GET_TID(req); + struct sk_buff *reply_skb; + struct sock *newsk; + struct ethhdr *eh; + struct iphdr *iph; + struct tcphdr *tcph; + void *network_hdr; + + newsk = lookup_tid(cdev->tids, tid); + if (newsk) { + pr_info("tid (%d) already in use\n", tid); + return; + } + + reply_skb = alloc_skb(len, GFP_ATOMIC); + if (!reply_skb) { + cxgb4_remove_tid(cdev->tids, 0, tid, sk->sk_family); + kfree_skb(skb); + return; + } + + if (sk->sk_state != TCP_LISTEN) + goto reject; + + if (inet_csk_reqsk_queue_is_full(sk)) + goto reject; + + if (sk_acceptq_is_full(sk)) + goto reject; + + oreq = inet_reqsk_alloc(&chtls_rsk_ops, sk, true); + if (!oreq) + goto reject; + + oreq->rsk_rcv_wnd = 0; + oreq->rsk_window_clamp = 0; + oreq->cookie_ts = 0; + oreq->mss = 0; + oreq->ts_recent = 0; + + eh = (struct ethhdr *)(req + 1); + iph = (struct iphdr *)(eh + 1); + if (iph->version != 0x4) + goto free_oreq; + + network_hdr = (void *)(eh + 1); + tcph = (struct tcphdr *)(iph + 1); + + tcp_rsk(oreq)->tfo_listener = false; + tcp_rsk(oreq)->rcv_isn = ntohl(tcph->seq); + chtls_set_req_port(oreq, tcph->source, tcph->dest); + inet_rsk(oreq)->ecn_ok = 0; + chtls_set_req_addr(oreq, iph->daddr, iph->saddr); + if (req->tcpopt.wsf <= 14) { + inet_rsk(oreq)->wscale_ok = 1; + inet_rsk(oreq)->snd_wscale = req->tcpopt.wsf; + } + inet_rsk(oreq)->ir_iif = sk->sk_bound_dev_if; + + newsk = chtls_recv_sock(sk, oreq, network_hdr, req, cdev); + if (!newsk) + goto reject; + + if (chtls_get_module(newsk)) + goto reject; + inet_csk_reqsk_queue_added(sk); + reply_skb->sk = newsk; + chtls_install_cpl_ops(newsk); + cxgb4_insert_tid(cdev->tids, newsk, tid, newsk->sk_family); + chtls_pass_accept_rpl(reply_skb, req, tid); + kfree_skb(skb); + return; + +free_oreq: + chtls_reqsk_free(oreq); +reject: + mk_tid_release(reply_skb, 0, tid); + cxgb4_ofld_send(cdev->lldi->ports[0], reply_skb); + kfree_skb(skb); +} + +/* + * Handle a CPL_PASS_ACCEPT_REQ message. 
+ */ +static int chtls_pass_accept_req(struct chtls_dev *cdev, struct sk_buff *skb) +{ + struct cpl_pass_accept_req *req = cplhdr(skb) + RSS_HDR; + unsigned int stid = PASS_OPEN_TID_G(ntohl(req->tos_stid)); + unsigned int tid = GET_TID(req); + struct listen_ctx *ctx; + struct sock *lsk; + void *data; + + data = lookup_stid(cdev->tids, stid); + if (!data) + return 1; + + ctx = (struct listen_ctx *)data; + lsk = ctx->lsk; + + if (unlikely(tid >= cdev->tids->ntids)) { + pr_info("passive open TID %u too large\n", tid); + return 1; + } + + BLOG_SKB_CB(skb)->cdev = cdev; + process_cpl_msg(chtls_pass_accept_request, lsk, skb); + return 0; +} + +/* + * Completes some final bits of initialization for just established connections + * and changes their state to TCP_ESTABLISHED. + * + * snd_isn here is the ISN after the SYN, i.e., the true ISN + 1. + */ +static void make_established(struct sock *sk, u32 snd_isn, unsigned int opt) +{ + struct tcp_sock *tp = tcp_sk(sk); + + tp->pushed_seq = snd_isn; + tp->write_seq = snd_isn; + tp->snd_nxt = snd_isn; + tp->snd_una = snd_isn; + inet_sk(sk)->inet_id = tp->write_seq ^ jiffies; + assign_rxopt(sk, opt); + + if (tp->rcv_wnd > (RCV_BUFSIZ_M << 10)) + tp->rcv_wup -= tp->rcv_wnd - (RCV_BUFSIZ_M << 10); + + smp_mb(); + tcp_set_state(sk, TCP_ESTABLISHED); +} + +static void chtls_abort_conn(struct sock *sk, struct sk_buff *skb) +{ + struct sk_buff *abort_skb; + + abort_skb = alloc_skb(sizeof(struct cpl_abort_req), GFP_ATOMIC); + if (abort_skb) + chtls_send_reset(sk, CPL_ABORT_SEND_RST, abort_skb); +} + +static struct sock *reap_list; +static DEFINE_SPINLOCK(reap_list_lock); + +/* + * Process the reap list. + */ +DECLARE_TASK_FUNC(process_reap_list, task_param) +{ + spin_lock_bh(&reap_list_lock); + while (reap_list) { + struct sock *sk = reap_list; + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + + reap_list = csk->passive_reap_next; + csk->passive_reap_next = NULL; + spin_unlock(&reap_list_lock); + sock_hold(sk); + + bh_lock_sock(sk); + chtls_abort_conn(sk, NULL); + sock_orphan(sk); + if (sk->sk_state == TCP_CLOSE) + inet_csk_destroy_sock(sk); + bh_unlock_sock(sk); + sock_put(sk); + spin_lock(&reap_list_lock); + } + spin_unlock_bh(&reap_list_lock); +} + +static DECLARE_WORK(reap_task, process_reap_list); + +static void add_to_reap_list(struct sock *sk) +{ + struct chtls_sock *csk = sk->sk_user_data; + + local_bh_disable(); + bh_lock_sock(sk); + release_tcp_port(sk); /* release the port immediately */ + + spin_lock(&reap_list_lock); + csk->passive_reap_next = reap_list; + reap_list = sk; + if (!csk->passive_reap_next) + schedule_work(&reap_task); + spin_unlock(&reap_list_lock); + bh_unlock_sock(sk); + local_bh_enable(); +} + +static void add_pass_open_to_parent(struct sock *child, struct sock *lsk, + struct chtls_dev *cdev) +{ + struct chtls_sock *csk = child->sk_user_data; + struct request_sock *oreq; + + if (lsk->sk_state != TCP_LISTEN) + return; + + oreq = csk->passive_reap_next; + csk->passive_reap_next = NULL; + + reqsk_queue_removed(&inet_csk(lsk)->icsk_accept_queue, oreq); + + if (sk_acceptq_is_full(lsk)) { + chtls_reqsk_free(oreq); + add_to_reap_list(child); + } else { + refcount_set(&oreq->rsk_refcnt, 1); + inet_csk_reqsk_queue_add(lsk, oreq, child); + lsk->sk_data_ready(lsk); + } +} + +static void bl_add_pass_open_to_parent(struct sock *lsk, struct sk_buff *skb) +{ + struct sock *child = skb->sk; + + skb->sk = NULL; + add_pass_open_to_parent(child, lsk, BLOG_SKB_CB(skb)->cdev); + kfree_skb(skb); +} + +static int chtls_pass_establish(struct 
chtls_dev *cdev, struct sk_buff *skb) +{ + struct cpl_pass_establish *req = cplhdr(skb) + RSS_HDR; + unsigned int hwtid = GET_TID(req); + struct chtls_sock *csk; + struct sock *lsk, *sk; + + sk = lookup_tid(cdev->tids, hwtid); + if (!sk) + return 1; + + bh_lock_sock(sk); + if (unlikely(sock_owned_by_user(sk))) { + kfree_skb(skb); + } else { + void *data; + unsigned int stid; + + csk = sk->sk_user_data; + csk->wr_max_credits = 64; + csk->wr_credits = 64; + csk->wr_unacked = 0; + make_established(sk, ntohl(req->snd_isn), ntohs(req->tcp_opt)); + stid = PASS_OPEN_TID_G(ntohl(req->tos_stid)); + sk->sk_state_change(sk); + if (unlikely(sk->sk_socket)) + sk_wake_async(sk, 0, POLL_OUT); + + data = lookup_stid(cdev->tids, stid); + lsk = ((struct listen_ctx *)data)->lsk; + + bh_lock_sock(lsk); + if (likely(!sock_owned_by_user(lsk))) { + kfree_skb(skb); + add_pass_open_to_parent(sk, lsk, cdev); + } else { + skb->sk = sk; + BLOG_SKB_CB(skb)->cdev = cdev; + BLOG_SKB_CB(skb)->backlog_rcv = + bl_add_pass_open_to_parent; + __sk_add_backlog(lsk, skb); + } + bh_unlock_sock(lsk); + } + bh_unlock_sock(sk); + return 0; +} + +/* + * Handle receipt of an urgent pointer. + */ +static void handle_urg_ptr(struct sock *sk, u32 urg_seq) +{ + struct tcp_sock *tp = tcp_sk(sk); + + urg_seq--; + if (tp->urg_data && !after(urg_seq, tp->urg_seq)) + return; /* duplicate pointer */ + + sk_send_sigurg(sk); + if (tp->urg_seq == tp->copied_seq && tp->urg_data && + !sock_flag(sk, SOCK_URGINLINE) && + tp->copied_seq != tp->rcv_nxt) { + struct sk_buff *skb = skb_peek(&sk->sk_receive_queue); + + tp->copied_seq++; + if (skb && tp->copied_seq - ULP_SKB_CB(skb)->seq >= skb->len) + chtls_free_skb(sk, skb); + } + + tp->urg_data = TCP_URG_NOTYET; + tp->urg_seq = urg_seq; +} + +static void check_sk_callbacks(struct chtls_sock *csk) +{ + struct sock *sk = csk->sk; + + if (unlikely(sk->sk_user_data && + !csk_flag_nochk(csk, CSK_CALLBACKS_CHKD))) + csk_set_flag(csk, CSK_CALLBACKS_CHKD); +} + +/* + * Handles Rx data that arrives in a state where the socket isn't accepting + * new data. 
+ */ +static void handle_excess_rx(struct sock *sk, struct sk_buff *skb) +{ + if (!csk_flag(sk, CSK_ABORT_SHUTDOWN)) + chtls_abort_conn(sk, skb); + + kfree_skb(skb); +} + +static void chtls_recv_data(struct sock *sk, struct sk_buff *skb) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct cpl_rx_data *hdr = cplhdr(skb) + RSS_HDR; + struct tcp_sock *tp = tcp_sk(sk); + + if (unlikely(sk->sk_shutdown & RCV_SHUTDOWN)) { + handle_excess_rx(sk, skb); + return; + } + + ULP_SKB_CB(skb)->seq = ntohl(hdr->seq); + ULP_SKB_CB(skb)->psh = hdr->psh; + skb_ulp_mode(skb) = ULP_MODE_NONE; + + skb_reset_transport_header(skb); + __skb_pull(skb, sizeof(*hdr) + RSS_HDR); + if (!skb->data_len) + __skb_trim(skb, ntohs(hdr->len)); + + if (unlikely(hdr->urg)) + handle_urg_ptr(sk, tp->rcv_nxt + ntohs(hdr->urg)); + if (unlikely(tp->urg_data == TCP_URG_NOTYET && + tp->urg_seq - tp->rcv_nxt < skb->len)) + tp->urg_data = TCP_URG_VALID | + skb->data[tp->urg_seq - tp->rcv_nxt]; + + if (unlikely(hdr->dack_mode != csk->delack_mode)) { + csk->delack_mode = hdr->dack_mode; + csk->delack_seq = tp->rcv_nxt; + } + + tcp_hdr(skb)->fin = 0; + tp->rcv_nxt += skb->len; + + __skb_queue_tail(&sk->sk_receive_queue, skb); + + if (!sock_flag(sk, SOCK_DEAD)) { + check_sk_callbacks(csk); + sk->sk_data_ready(sk); + } +} + +static int chtls_rx_data(struct chtls_dev *cdev, struct sk_buff *skb) +{ + struct cpl_rx_data *req = cplhdr(skb) + RSS_HDR; + unsigned int hwtid = GET_TID(req); + struct sock *sk; + + sk = lookup_tid(cdev->tids, hwtid); + skb_dst_set(skb, NULL); + process_cpl_msg(chtls_recv_data, sk, skb); + return 0; +} + +static void chtls_recv_pdu(struct sock *sk, struct sk_buff *skb) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct chtls_hws *tlsk = &csk->tlshws; + struct cpl_tls_data *hdr = cplhdr(skb); + struct tcp_sock *tp = tcp_sk(sk); + + if (unlikely(sk->sk_shutdown & RCV_SHUTDOWN)) { + handle_excess_rx(sk, skb); + return; + } + + ULP_SKB_CB(skb)->seq = ntohl(hdr->seq); + ULP_SKB_CB(skb)->flags = 0; + skb_ulp_mode(skb) = ULP_MODE_TLS; + + skb_reset_transport_header(skb); + __skb_pull(skb, sizeof(*hdr)); + if (!skb->data_len) + __skb_trim(skb, + CPL_TLS_DATA_LENGTH_G(ntohl(hdr->length_pkd))); + + if (unlikely(tp->urg_data == TCP_URG_NOTYET && tp->urg_seq - + tp->rcv_nxt < skb->len)) + tp->urg_data = TCP_URG_VALID | + skb->data[tp->urg_seq - tp->rcv_nxt]; + + tcp_hdr(skb)->fin = 0; + tlsk->pldlen = CPL_TLS_DATA_LENGTH_G(ntohl(hdr->length_pkd)); + __skb_queue_tail(&tlsk->sk_recv_queue, skb); +} + +static int chtls_rx_pdu(struct chtls_dev *cdev, struct sk_buff *skb) +{ + struct cpl_tls_data *req = cplhdr(skb); + unsigned int hwtid = GET_TID(req); + struct sock *sk; + + sk = lookup_tid(cdev->tids, hwtid); + skb_dst_set(skb, NULL); + process_cpl_msg(chtls_recv_pdu, sk, skb); + return 0; +} + +static void chtls_set_hdrlen(struct sk_buff *skb, unsigned int nlen) +{ + struct tlsrx_cmp_hdr *tls_cmp_hdr = cplhdr(skb); + + skb->hdr_len = (__u16)ntohs(tls_cmp_hdr->length); + tls_cmp_hdr->length = ntohs(nlen); +} + +static void chtls_rx_hdr(struct sock *sk, struct sk_buff *skb) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct cpl_rx_tls_cmp *cmp_cpl = cplhdr(skb); + struct chtls_hws *tlsk = &csk->tlshws; + struct tcp_sock *tp = tcp_sk(sk); + struct sk_buff *skb_rec = NULL; + + ULP_SKB_CB(skb)->seq = ntohl(cmp_cpl->seq); + ULP_SKB_CB(skb)->flags = 0; + + skb_reset_transport_header(skb); + __skb_pull(skb, sizeof(*cmp_cpl)); + if (!skb->data_len) + __skb_trim(skb, 
CPL_RX_TLS_CMP_LENGTH_G + (ntohl(cmp_cpl->pdulength_length))); + + tp->rcv_nxt += + CPL_RX_TLS_CMP_PDULENGTH_G(ntohl(cmp_cpl->pdulength_length)); + + skb_rec = __skb_dequeue(&tlsk->sk_recv_queue); + if (!skb_rec) { + ULP_SKB_CB(skb)->flags |= ULPCB_FLAG_TLS_ND; + __skb_queue_tail(&sk->sk_receive_queue, skb); + } else { + chtls_set_hdrlen(skb, tlsk->pldlen); + tlsk->pldlen = 0; + __skb_queue_tail(&sk->sk_receive_queue, skb); + __skb_queue_tail(&sk->sk_receive_queue, skb_rec); + } + + if (!sock_flag(sk, SOCK_DEAD)) { + check_sk_callbacks(csk); + sk->sk_data_ready(sk); + } +} + +static int chtls_rx_cmp(struct chtls_dev *cdev, struct sk_buff *skb) +{ + struct cpl_rx_tls_cmp *req = cplhdr(skb); + unsigned int hwtid = GET_TID(req); + struct sock *sk; + + sk = lookup_tid(cdev->tids, hwtid); + skb_dst_set(skb, NULL); + process_cpl_msg(chtls_rx_hdr, sk, skb); + + return 0; +} + +static void chtls_timewait(struct sock *sk) +{ + struct tcp_sock *tp = tcp_sk(sk); + + tp->rcv_nxt++; + tp->rx_opt.ts_recent_stamp = get_seconds(); + tp->srtt_us = 0; + tcp_time_wait(sk, TCP_TIME_WAIT, 0); +} + +static void chtls_peer_close(struct sock *sk, struct sk_buff *skb) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + + sk->sk_shutdown |= RCV_SHUTDOWN; + sock_set_flag(sk, SOCK_DONE); + + switch (sk->sk_state) { + case TCP_SYN_RECV: + case TCP_ESTABLISHED: + tcp_set_state(sk, TCP_CLOSE_WAIT); + break; + case TCP_FIN_WAIT1: + tcp_set_state(sk, TCP_CLOSING); + break; + case TCP_FIN_WAIT2: + chtls_release_resources(sk); + if (csk_flag_nochk(csk, CSK_ABORT_RPL_PENDING)) + chtls_conn_done(sk); + else + chtls_timewait(sk); + break; + default: + pr_info("cpl_peer_close in bad state %d\n", sk->sk_state); + } + + if (!sock_flag(sk, SOCK_DEAD)) { + sk->sk_state_change(sk); + /* Do not send POLL_HUP for half duplex close. 
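+ * POLL_HUP is signalled only once the send side is shut down as well (or the socket has reached TCP_CLOSE); otherwise POLL_IN is used.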
*/ + + if ((sk->sk_shutdown & SEND_SHUTDOWN) || + sk->sk_state == TCP_CLOSE) + sk_wake_async(sk, SOCK_WAKE_WAITD, POLL_HUP); + else + sk_wake_async(sk, SOCK_WAKE_WAITD, POLL_IN); + } +} + +static void chtls_close_con_rpl(struct sock *sk, struct sk_buff *skb) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct cpl_close_con_rpl *rpl = cplhdr(skb) + RSS_HDR; + struct tcp_sock *tp = tcp_sk(sk); + + tp->snd_una = ntohl(rpl->snd_nxt) - 1; /* exclude FIN */ + + switch (sk->sk_state) { + case TCP_CLOSING: + chtls_release_resources(sk); + if (csk_flag_nochk(csk, CSK_ABORT_RPL_PENDING)) + chtls_conn_done(sk); + else + chtls_timewait(sk); + break; + case TCP_LAST_ACK: + chtls_release_resources(sk); + chtls_conn_done(sk); + break; + case TCP_FIN_WAIT1: + tcp_set_state(sk, TCP_FIN_WAIT2); + sk->sk_shutdown |= SEND_SHUTDOWN; + + if (!sock_flag(sk, SOCK_DEAD)) + sk->sk_state_change(sk); + else if (tcp_sk(sk)->linger2 < 0 && + !csk_flag_nochk(csk, CSK_ABORT_SHUTDOWN)) + chtls_abort_conn(sk, skb); + break; + default: + pr_info("close_con_rpl in bad state %d\n", sk->sk_state); + } + kfree_skb(skb); +} + +static struct sk_buff *get_cpl_skb(struct sk_buff *skb, + size_t len, gfp_t gfp) +{ + if (likely(!skb_is_nonlinear(skb) && !skb_cloned(skb))) { + WARN_ONCE(skb->len < len, "skb alloc error"); + __skb_trim(skb, len); + skb_get(skb); + } else { + skb = alloc_skb(len, gfp); + if (skb) + __skb_put(skb, len); + } + return skb; +} + +static void set_abort_rpl_wr(struct sk_buff *skb, unsigned int tid, + int cmd) +{ + struct cpl_abort_rpl *rpl = cplhdr(skb); + + INIT_TP_WR_CPL(rpl, CPL_ABORT_RPL, tid); + rpl->cmd = cmd; +} + +static void send_defer_abort_rpl(struct chtls_dev *cdev, struct sk_buff *skb) +{ + struct cpl_abort_req_rss *req = cplhdr(skb); + struct sk_buff *reply_skb; + + reply_skb = alloc_skb(sizeof(struct cpl_abort_rpl), + GFP_KERNEL | __GFP_NOFAIL); + if (!reply_skb) + return; + + __skb_put(reply_skb, sizeof(struct cpl_abort_rpl)); + set_abort_rpl_wr(reply_skb, GET_TID(req), + (req->status & CPL_ABORT_NO_RST)); + set_wr_txq(reply_skb, CPL_PRIORITY_DATA, req->status >> 1); + cxgb4_ofld_send(cdev->lldi->ports[0], reply_skb); + kfree_skb(skb); +} + +static void send_abort_rpl(struct sock *sk, struct sk_buff *skb, + struct chtls_dev *cdev, int status, int queue) +{ + struct cpl_abort_req_rss *req = cplhdr(skb); + struct sk_buff *reply_skb; + + reply_skb = alloc_skb(sizeof(struct cpl_abort_rpl), + GFP_KERNEL); + + if (!reply_skb) { + req->status = (queue << 1); + send_defer_abort_rpl(cdev, skb); + return; + } + + set_abort_rpl_wr(reply_skb, GET_TID(req), status); + kfree_skb(skb); + + set_wr_txq(reply_skb, CPL_PRIORITY_DATA, queue); + if (sock_flag(sk, SOCK_INLINE)) { + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct l2t_entry *e = csk->l2t_entry; + + if (e && sk->sk_state != TCP_SYN_RECV) { + cxgb4_l2t_send(csk->egress_dev, reply_skb, e); + return; + } + } + cxgb4_ofld_send(cdev->lldi->ports[0], reply_skb); +} + +/* + * Add an skb to the deferred skb queue for processing from process context. 
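+ * The queue is drained by the cdev's deferq work item, which is scheduled when the first entry is added.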
+ */ +static void t4_defer_reply(struct sk_buff *skb, struct chtls_dev *cdev, + defer_handler_t handler) +{ + DEFERRED_SKB_CB(skb)->handler = handler; + spin_lock_bh(&cdev->deferq.lock); + __skb_queue_tail(&cdev->deferq, skb); + if (skb_queue_len(&cdev->deferq) == 1) + schedule_work(&cdev->deferq_task); + spin_unlock_bh(&cdev->deferq.lock); +} + +static void chtls_send_abort_rpl(struct sock *sk, struct sk_buff *skb, + struct chtls_dev *cdev, + int status, int queue) +{ + struct cpl_abort_req_rss *req = cplhdr(skb) + RSS_HDR; + unsigned int tid = GET_TID(req); + struct sk_buff *reply_skb; + + reply_skb = get_cpl_skb(skb, sizeof(struct cpl_abort_rpl), gfp_any()); + if (!reply_skb) { + req->status = (queue << 1) | status; + t4_defer_reply(skb, cdev, send_defer_abort_rpl); + return; + } + + set_abort_rpl_wr(reply_skb, tid, status); + set_wr_txq(reply_skb, CPL_PRIORITY_DATA, queue); + if (sock_flag(sk, SOCK_INLINE)) { + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct l2t_entry *e = csk->l2t_entry; + + if (e && sk->sk_state != TCP_SYN_RECV) { + cxgb4_l2t_send(csk->egress_dev, reply_skb, e); + return; + } + } + cxgb4_ofld_send(cdev->lldi->ports[0], reply_skb); + kfree_skb(skb); +} + +/* + * This is run from a listener's backlog to abort a child connection in + * SYN_RCV state (i.e., one on the listener's SYN queue). + */ +static void bl_abort_syn_rcv(struct sock *lsk, struct sk_buff *skb) +{ + struct sock *child = skb->sk; + struct chtls_sock *csk = rcu_dereference_sk_user_data(child); + int queue = csk->txq_idx; + + skb->sk = NULL; + do_abort_syn_rcv(child, lsk); + send_abort_rpl(child, skb, BLOG_SKB_CB(skb)->cdev, + CPL_ABORT_NO_RST, queue); +} + +static int abort_syn_rcv(struct sock *sk, struct sk_buff *skb) +{ + struct chtls_sock *csk = sk->sk_user_data; + const struct request_sock *oreq = csk->passive_reap_next; + struct chtls_dev *cdev = csk->cdev; + struct listen_ctx *listen_ctx; + struct sock *psk; + void *ctx; + + if (!oreq) + return -1; + + ctx = lookup_stid(cdev->tids, oreq->ts_recent); + if (!ctx) + return -1; + + listen_ctx = (struct listen_ctx *)ctx; + psk = listen_ctx->lsk; + + bh_lock_sock(psk); + if (!sock_owned_by_user(psk)) { + int queue = csk->txq_idx; + + do_abort_syn_rcv(sk, psk); + send_abort_rpl(sk, skb, cdev, CPL_ABORT_NO_RST, queue); + } else { + skb->sk = sk; + BLOG_SKB_CB(skb)->backlog_rcv = bl_abort_syn_rcv; + __sk_add_backlog(psk, skb); + } + bh_unlock_sock(psk); + return 0; +} + +static void chtls_abort_req_rss(struct sock *sk, struct sk_buff *skb) +{ + const struct cpl_abort_req_rss *req = cplhdr(skb) + RSS_HDR; + struct chtls_sock *csk = sk->sk_user_data; + int rst_status = CPL_ABORT_NO_RST; + int queue = csk->txq_idx; + + if (is_neg_adv(req->status)) { + if (sk->sk_state == TCP_SYN_RECV) + chtls_set_tcb_tflag(sk, 0, 0); + + kfree_skb(skb); + return; + } + + csk_reset_flag(csk, CSK_ABORT_REQ_RCVD); + + if (!csk_flag_nochk(csk, CSK_ABORT_SHUTDOWN) && + !csk_flag_nochk(csk, CSK_TX_DATA_SENT)) { + struct tcp_sock *tp = tcp_sk(sk); + + if (send_tx_flowc_wr(sk, 0, tp->snd_nxt, tp->rcv_nxt) < 0) + WARN_ONCE(1, "send_tx_flowc error"); + csk_set_flag(csk, CSK_TX_DATA_SENT); + } + + csk_set_flag(csk, CSK_ABORT_SHUTDOWN); + + if (!csk_flag_nochk(csk, CSK_ABORT_RPL_PENDING)) { + sk->sk_err = ETIMEDOUT; + + if (!sock_flag(sk, SOCK_DEAD)) + sk->sk_error_report(sk); + + if (sk->sk_state == TCP_SYN_RECV && !abort_syn_rcv(sk, skb)) + return; + + chtls_release_resources(sk); + chtls_conn_done(sk); + } + + chtls_send_abort_rpl(sk, skb, csk->cdev, rst_status, 
queue); +} + +static void chtls_abort_rpl_rss(struct sock *sk, struct sk_buff *skb) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct cpl_abort_rpl_rss *rpl = cplhdr(skb) + RSS_HDR; + struct chtls_dev *cdev = csk->cdev; + + if (csk_flag_nochk(csk, CSK_ABORT_RPL_PENDING)) { + csk_reset_flag(csk, CSK_ABORT_RPL_PENDING); + if (!csk_flag_nochk(csk, CSK_ABORT_REQ_RCVD)) { + if (sk->sk_state == TCP_SYN_SENT) { + cxgb4_remove_tid(cdev->tids, + csk->port_id, + GET_TID(rpl), + sk->sk_family); + sock_put(sk); + } + chtls_release_resources(sk); + chtls_conn_done(sk); + } + } + kfree_skb(skb); +} + +static int chtls_conn_cpl(struct chtls_dev *cdev, struct sk_buff *skb) +{ + u8 opcode = ((const struct rss_header *)cplhdr(skb))->opcode; + struct cpl_peer_close *req = cplhdr(skb) + RSS_HDR; + void (*fn)(struct sock *sk, struct sk_buff *skb); + unsigned int hwtid = GET_TID(req); + struct sock *sk; + + sk = lookup_tid(cdev->tids, hwtid); + if (!sk) + goto rel_skb; + + switch (opcode) { + case CPL_PEER_CLOSE: + fn = chtls_peer_close; + break; + case CPL_CLOSE_CON_RPL: + fn = chtls_close_con_rpl; + break; + case CPL_ABORT_REQ_RSS: + fn = chtls_abort_req_rss; + break; + case CPL_ABORT_RPL_RSS: + fn = chtls_abort_rpl_rss; + break; + default: + goto rel_skb; + } + + process_cpl_msg(fn, sk, skb); + return 0; + +rel_skb: + kfree_skb(skb); + return 0; +} + +static struct sk_buff *dequeue_wr(struct sock *sk) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct sk_buff *skb = csk->wr_skb_head; + + if (likely(skb)) { + /* Don't bother clearing the tail */ + csk->wr_skb_head = WR_SKB_CB(skb)->next_wr; + WR_SKB_CB(skb)->next_wr = NULL; + } + return skb; +} + +static void chtls_rx_ack(struct sock *sk, struct sk_buff *skb) +{ + struct cpl_fw4_ack *hdr = cplhdr(skb) + RSS_HDR; + struct chtls_sock *csk = sk->sk_user_data; + struct tcp_sock *tp = tcp_sk(sk); + u32 snd_una = ntohl(hdr->snd_una); + __be32 credits = hdr->credits; + + csk->wr_credits += credits; + + if (csk->wr_unacked > csk->wr_max_credits - csk->wr_credits) + csk->wr_unacked = csk->wr_max_credits - csk->wr_credits; + + while (credits) { + struct sk_buff *pskb = csk->wr_skb_head; + + if (unlikely(!pskb)) { + if (csk->wr_nondata) + csk->wr_nondata -= credits; + break; + } + if (unlikely(credits < pskb->csum)) { + pskb->csum -= credits; + break; + } + dequeue_wr(sk); + credits -= pskb->csum; + kfree_skb(pskb); + } + if (hdr->seq_vld & CPL_FW4_ACK_FLAGS_SEQVAL) { + if (unlikely(before(snd_una, tp->snd_una))) { + kfree_skb(skb); + return; + } + + if (tp->snd_una != snd_una) { + tp->snd_una = snd_una; + tp->rcv_tstamp = tcp_time_stamp(tp); + if (tp->snd_una == tp->snd_nxt && + !csk_flag_nochk(csk, CSK_TX_FAILOVER)) + csk_reset_flag(csk, CSK_TX_WAIT_IDLE); + } + } + + if (hdr->seq_vld & CPL_FW4_ACK_FLAGS_CH) { + unsigned int fclen16 = roundup(failover_flowc_wr_len, 16); + + csk->wr_credits -= fclen16; + csk_reset_flag(csk, CSK_TX_WAIT_IDLE); + csk_reset_flag(csk, CSK_TX_FAILOVER); + } + if (skb_queue_len(&csk->txq) && chtls_push_frames(csk, 0)) + sk->sk_write_space(sk); + + kfree_skb(skb); +} + +static int chtls_wr_ack(struct chtls_dev *cdev, struct sk_buff *skb) +{ + struct cpl_fw4_ack *rpl = cplhdr(skb) + RSS_HDR; + unsigned int hwtid = GET_TID(rpl); + struct sock *sk; + + sk = lookup_tid(cdev->tids, hwtid); + process_cpl_msg(chtls_rx_ack, sk, skb); + + return 0; +} + +chtls_handler_func chtls_handlers[NUM_CPL_CMDS] = { + [CPL_PASS_OPEN_RPL] = chtls_pass_open_rpl, + [CPL_CLOSE_LISTSRV_RPL] = chtls_close_listsrv_rpl, + 
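+	/* connection setup, data transfer and teardown CPLs */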
[CPL_PASS_ACCEPT_REQ] = chtls_pass_accept_req, + [CPL_PASS_ESTABLISH] = chtls_pass_establish, + [CPL_RX_DATA] = chtls_rx_data, + [CPL_TLS_DATA] = chtls_rx_pdu, + [CPL_RX_TLS_CMP] = chtls_rx_cmp, + [CPL_PEER_CLOSE] = chtls_conn_cpl, + [CPL_CLOSE_CON_RPL] = chtls_conn_cpl, + [CPL_ABORT_REQ_RSS] = chtls_conn_cpl, + [CPL_ABORT_RPL_RSS] = chtls_conn_cpl, + [CPL_FW4_ACK] = chtls_wr_ack, +}; diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c index a8384b0c..abcab2c 100644 --- a/net/ipv4/tcp_minisocks.c +++ b/net/ipv4/tcp_minisocks.c @@ -332,6 +332,7 @@ void tcp_time_wait(struct sock *sk, int state, int timeo) tcp_update_metrics(sk); tcp_done(sk); } +EXPORT_SYMBOL(tcp_time_wait); void tcp_twsk_destructor(struct sock *sk) { From patchwork Thu Feb 22 17:52:06 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atul Gupta X-Patchwork-Id: 876790 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 3znMQc3nJJz9ryG for ; Fri, 23 Feb 2018 04:53:12 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S933574AbeBVRxJ (ORCPT ); Thu, 22 Feb 2018 12:53:09 -0500 Received: from stargate.chelsio.com ([12.32.117.8]:62527 "EHLO stargate.chelsio.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S933532AbeBVRxE (ORCPT ); Thu, 22 Feb 2018 12:53:04 -0500 Received: from chumthang.asicdesigners.com (indra-ipmi.blr.asicdesigners.com [10.193.186.96] (may be forged)) by stargate.chelsio.com (8.13.8/8.13.8) with ESMTP id w1MHqA35014680; Thu, 22 Feb 2018 09:52:33 -0800 From: Atul Gupta To: davejwatson@fb.com, davem@davemloft.net, herbert@gondor.apana.org.au Cc: sd@queasysnail.net, linux-crypto@vger.kernel.org, netdev@vger.kernel.org, ganeshgr@chelsio.com Subject: [Crypto v7 10/12] chtls: Inline crypto request Tx/Rx Date: Thu, 22 Feb 2018 23:22:06 +0530 Message-Id: <1519321928-22995-7-git-send-email-atul.gupta@chelsio.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1519321928-22995-1-git-send-email-atul.gupta@chelsio.com> References: <1519321928-22995-1-git-send-email-atul.gupta@chelsio.com> Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org TLS handler for record transmit and receive. Create Inline TLS work request and post to FW. Signed-off-by: Atul Gupta --- drivers/crypto/chelsio/chtls/chtls_io.c | 1867 +++++++++++++++++++++++++++++++ 1 file changed, 1867 insertions(+) create mode 100644 drivers/crypto/chelsio/chtls/chtls_io.c diff --git a/drivers/crypto/chelsio/chtls/chtls_io.c b/drivers/crypto/chelsio/chtls/chtls_io.c new file mode 100644 index 0000000..0c5d6c1 --- /dev/null +++ b/drivers/crypto/chelsio/chtls/chtls_io.c @@ -0,0 +1,1867 @@ +/* + * Copyright (c) 2017 Chelsio Communications, Inc. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ * + * Written by: Atul Gupta (atul.gupta@chelsio.com) + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "chtls.h" +#include "chtls_cm.h" + +static bool is_tls_hw(struct chtls_sock *csk) +{ + return csk->tlshws.ofld; +} + +static bool is_tls_rx(struct chtls_sock *csk) +{ + return (csk->tlshws.rxkey >= 0); +} + +static bool is_tls_tx(struct chtls_sock *csk) +{ + return (csk->tlshws.txkey >= 0); +} + +static bool is_tls_skb(struct chtls_sock *csk, const struct sk_buff *skb) +{ + return (is_tls_hw(csk) && skb_ulp_tls_skb_flags(skb)); +} + +static int key_size(void *sk) +{ + return 16; /* Key on DDR */ +} + +#define ceil(x, y) \ + ({ unsigned long __x = (x), __y = (y); (__x + __y - 1) / __y; }) + +static int data_sgl_len(const struct sk_buff *skb) +{ + unsigned int cnt; + + cnt = skb_shinfo(skb)->nr_frags; + return (sgl_len(cnt) * 8); +} + +static int nos_ivs(struct sock *sk, unsigned int size) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + + return ceil(size, csk->tlshws.mfs); +} + +#define TLS_WR_CPL_LEN \ + (sizeof(struct fw_tlstx_data_wr) + \ + sizeof(struct cpl_tx_tls_sfo)) + +static int is_ivs_imm(struct sock *sk, const struct sk_buff *skb) +{ + int ivs_size = nos_ivs(sk, skb->len) * CIPHER_BLOCK_SIZE; + int hlen = TLS_WR_CPL_LEN + data_sgl_len(skb); + + if ((hlen + key_size(sk) + ivs_size) < + MAX_IMM_OFLD_TX_DATA_WR_LEN) { + ULP_SKB_CB(skb)->ulp.tls.iv = 1; + return 1; + } + ULP_SKB_CB(skb)->ulp.tls.iv = 0; + return 0; +} + +static int max_ivs_size(struct sock *sk, int size) +{ + return (nos_ivs(sk, size) * CIPHER_BLOCK_SIZE); +} + +static int ivs_size(struct sock *sk, const struct sk_buff *skb) +{ + return (is_ivs_imm(sk, skb) ? 
(nos_ivs(sk, skb->len) * + CIPHER_BLOCK_SIZE) : 0); +} + +static int flowc_wr_credits(int nparams, int *flowclenp) +{ + int flowclen16, flowclen; + + flowclen = offsetof(struct fw_flowc_wr, mnemval[nparams]); + flowclen16 = DIV_ROUND_UP(flowclen, 16); + flowclen = flowclen16 * 16; + + if (flowclenp) + *flowclenp = flowclen; + + return flowclen16; +} + +static struct sk_buff *create_flowc_wr_skb(struct sock *sk, + struct fw_flowc_wr *flowc, + int flowclen) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct sk_buff *skb; + + skb = alloc_skb(flowclen, GFP_ATOMIC); + if (!skb) + return NULL; + + memcpy(__skb_put(skb, flowclen), flowc, flowclen); + set_queue(skb, (csk->txq_idx << 1) | CPL_PRIORITY_DATA, sk); + + return skb; +} + +static int send_flowc_wr(struct sock *sk, struct fw_flowc_wr *flowc, + int flowclen) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + bool syn_sent = (sk->sk_state == TCP_SYN_SENT); + struct tcp_sock *tp = tcp_sk(sk); + int flowclen16 = flowclen / 16; + struct sk_buff *skb; + + if (csk_flag(sk, CSK_TX_DATA_SENT)) { + skb = create_flowc_wr_skb(sk, flowc, flowclen); + if (!skb) + return -ENOMEM; + + if (syn_sent) + __skb_queue_tail(&csk->ooo_queue, skb); + else + skb_entail(sk, skb, + ULPCB_FLAG_NO_HDR | ULPCB_FLAG_NO_APPEND); + return 0; + } + + if (!syn_sent) { + int ret; + + ret = cxgb4_immdata_send(csk->egress_dev, + csk->txq_idx, + flowc, flowclen); + if (!ret) + return flowclen16; + } + skb = create_flowc_wr_skb(sk, flowc, flowclen); + if (!skb) + return -ENOMEM; + send_or_defer(sk, tp, skb, 0); + return flowclen16; +} + +static u8 tcp_state_to_flowc_state(u8 state) +{ + u8 ret = FW_FLOWC_MNEM_TCPSTATE_ESTABLISHED; + + switch (state) { + case TCP_ESTABLISHED: + ret = FW_FLOWC_MNEM_TCPSTATE_ESTABLISHED; + break; + case TCP_CLOSE_WAIT: + ret = FW_FLOWC_MNEM_TCPSTATE_CLOSEWAIT; + break; + case TCP_FIN_WAIT1: + ret = FW_FLOWC_MNEM_TCPSTATE_FINWAIT1; + break; + case TCP_CLOSING: + ret = FW_FLOWC_MNEM_TCPSTATE_CLOSING; + break; + case TCP_LAST_ACK: + ret = FW_FLOWC_MNEM_TCPSTATE_LASTACK; + break; + case TCP_FIN_WAIT2: + ret = FW_FLOWC_MNEM_TCPSTATE_FINWAIT2; + break; + } + + return ret; +} + +int send_tx_flowc_wr(struct sock *sk, int compl, + u32 snd_nxt, u32 rcv_nxt) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + int nparams, paramidx, flowclen16, flowclen; + struct tcp_sock *tp = tcp_sk(sk); + struct flowc_packed { + struct fw_flowc_wr fc; + struct fw_flowc_mnemval mnemval[FW_FLOWC_MNEM_MAX]; + } __packed sflowc; + struct fw_flowc_wr *flowc; + + memset(&sflowc, 0, sizeof(sflowc)); + flowc = &sflowc.fc; + +#define FLOWC_PARAM(__m, __v) \ + do { \ + flowc->mnemval[paramidx].mnemonic = FW_FLOWC_MNEM_##__m; \ + flowc->mnemval[paramidx].val = cpu_to_be32(__v); \ + paramidx++; \ + } while (0) + + paramidx = 0; + + FLOWC_PARAM(PFNVFN, FW_PFVF_CMD_PFN_V(csk->cdev->lldi->pf)); + FLOWC_PARAM(CH, csk->tx_chan); + FLOWC_PARAM(PORT, csk->tx_chan); + FLOWC_PARAM(IQID, csk->rss_qid); + FLOWC_PARAM(SNDNXT, tp->snd_nxt); + FLOWC_PARAM(RCVNXT, tp->rcv_nxt); + FLOWC_PARAM(SNDBUF, csk->sndbuf); + FLOWC_PARAM(MSS, tp->mss_cache); + FLOWC_PARAM(TCPSTATE, tcp_state_to_flowc_state(sk->sk_state)); + + if (SND_WSCALE(tp)) + FLOWC_PARAM(RCV_SCALE, SND_WSCALE(tp)); + + if (csk->ulp_mode == ULP_MODE_TLS) + FLOWC_PARAM(ULD_MODE, ULP_MODE_TLS); + + if (csk->tlshws.fcplenmax) + FLOWC_PARAM(TXDATAPLEN_MAX, csk->tlshws.fcplenmax); + + nparams = paramidx; +#undef FLOWC_PARAM + + flowclen16 = flowc_wr_credits(nparams, &flowclen); + 
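+	/* Fill in the FLOWC WR header: opcode, completion request, parameter count, flow id and length in 16-byte units. */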
flowc->op_to_nparams = + cpu_to_be32(FW_WR_OP_V(FW_FLOWC_WR) | + FW_WR_COMPL_V(compl) | + FW_FLOWC_WR_NPARAMS_V(nparams)); + flowc->flowid_len16 = cpu_to_be32(FW_WR_LEN16_V(flowclen16) | + FW_WR_FLOWID_V(csk->tid)); + + return send_flowc_wr(sk, flowc, flowclen); +} + +/* Copy IVs to WR */ +static int tls_copy_ivs(struct sock *sk, struct sk_buff *skb) + +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct chtls_hws *hws = &csk->tlshws; + u16 number_of_ivs = 0; + int err = 0; + unsigned char *ivs; + unsigned char *iv_loc = NULL; + unsigned int max_frag_size = 0; + struct page *page; + + max_frag_size = hws->mfs; + number_of_ivs = nos_ivs(sk, skb->len); + + if (number_of_ivs > MAX_IVS_PAGE) { + pr_warn("MAX IVs in PAGE exceeded %d\n", number_of_ivs); + return -ENOMEM; + } + + /* generate the IVs */ + ivs = kmalloc(number_of_ivs * CIPHER_BLOCK_SIZE, GFP_ATOMIC); + if (!ivs) + return -ENOMEM; + get_random_bytes(ivs, number_of_ivs * CIPHER_BLOCK_SIZE); + + if (skb_ulp_tls_skb_iv(skb)) { + /* send the IVs as immediate data in the WR */ + iv_loc = (unsigned char *)__skb_push(skb, number_of_ivs * + CIPHER_BLOCK_SIZE); + if (iv_loc) + memcpy(iv_loc, ivs, number_of_ivs * CIPHER_BLOCK_SIZE); + + hws->ivsize = number_of_ivs * CIPHER_BLOCK_SIZE; + } else { + /* Send the IVs as sgls */ + /* Already accounted IV DSGL for credits */ + skb_shinfo(skb)->nr_frags--; + page = alloc_pages(sk->sk_allocation | __GFP_COMP, 0); + if (!page) { + pr_info("%s : Page allocation for IVs failed\n", + __func__); + err = -ENOMEM; + goto out; + } + memcpy(page_address(page), ivs, number_of_ivs * + CIPHER_BLOCK_SIZE); + skb_fill_page_desc(skb, skb_shinfo(skb)->nr_frags, page, 0, + number_of_ivs * CIPHER_BLOCK_SIZE); + hws->ivsize = 0; + } +out: + kfree(ivs); + return err; +} + +static unsigned int keyid_to_addr(int start_addr, int keyid) +{ + return ((start_addr + (keyid * TLS_KEY_CONTEXT_SZ)) >> 5); +} + +/* Copy Key to WR */ +static void tls_copy_tx_key(struct sock *sk, struct sk_buff *skb) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct ulptx_sc_memrd *sc_memrd; + struct ulptx_idata *sc; + struct chtls_hws *hws = &csk->tlshws; + struct chtls_dev *cdev = csk->cdev; + u32 immdlen; + + immdlen = sizeof(*sc) + sizeof(*sc_memrd); + sc = (struct ulptx_idata *)__skb_push(skb, immdlen); + if (sc) { + sc->cmd_more = htonl(ULPTX_CMD_V(ULP_TX_SC_NOOP)); + sc->len = htonl(0); + sc_memrd = (struct ulptx_sc_memrd *)(sc + 1); + sc_memrd->cmd_to_len = + htonl(ULPTX_CMD_V(ULP_TX_SC_MEMRD) | + ULP_TX_SC_MORE_V(1) | + ULPTX_LEN16_V(hws->keylen >> 4)); + sc_memrd->addr = htonl(keyid_to_addr(cdev->kmap.start, + hws->txkey)); + } +} + +static u64 tls_sequence_number(struct chtls_hws *hws) +{ + return (hws->tx_seq_no++); +} + +static int is_sg_request(const struct sk_buff *skb) +{ + return (skb->peeked || + (skb->len > MAX_IMM_ULPTX_WR_LEN)); +} + +/* + * Returns true if an sk_buff carries urgent data. 
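+ * (the send path marks MSG_OOB data with ULPCB_FLAG_URG in mark_urg()).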
+ */ +static int skb_urgent(struct sk_buff *skb) +{ + return (ULP_SKB_CB(skb)->flags & ULPCB_FLAG_URG) != 0; +} + +/* TLS content type for CPL SFO */ +static unsigned char tls_content_type(unsigned char content_type) +{ + switch (content_type) { + case TLS_HDR_TYPE_CCS: + return CPL_TX_TLS_SFO_TYPE_CCS; + case TLS_HDR_TYPE_ALERT: + return CPL_TX_TLS_SFO_TYPE_ALERT; + case TLS_HDR_TYPE_HANDSHAKE: + return CPL_TX_TLS_SFO_TYPE_HANDSHAKE; + case TLS_HDR_TYPE_HEARTBEAT: + return CPL_TX_TLS_SFO_TYPE_HEARTBEAT; + } + return CPL_TX_TLS_SFO_TYPE_DATA; +} + +static void tls_tx_data_wr(struct sock *sk, struct sk_buff *skb, + int dlen, int tls_immd, u32 credits, + int expn, int pdus) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct chtls_hws *hws = &csk->tlshws; + struct net_device *dev = csk->egress_dev; + struct adapter *adap = netdev2adap(dev); + struct fw_tlstx_data_wr *req_wr = NULL; + struct cpl_tx_tls_sfo *req_cpl = NULL; + int iv_imm = skb_ulp_tls_skb_iv(skb); + struct tls_scmd *scmd = &hws->scmd; + struct tls_scmd *updated_scmd; + unsigned int wr_ulp_mode_force; + unsigned char data_type = 0; + unsigned char *req = NULL; + int len = dlen + expn; + int immd_len; + + dlen = (dlen < hws->mfs) ? dlen : hws->mfs; + atomic_inc(&adap->chcr_stats.tls_pdu_tx); + + updated_scmd = scmd; + updated_scmd->seqno_numivs &= 0xffffff80; + updated_scmd->seqno_numivs |= SCMD_NUM_IVS_V(pdus); + hws->scmd = *updated_scmd; + + req = (unsigned char *)__skb_push(skb, sizeof(struct cpl_tx_tls_sfo)); + req_cpl = (struct cpl_tx_tls_sfo *)req; + req = (unsigned char *)__skb_push(skb, (sizeof(struct + fw_tlstx_data_wr))); + + req_wr = (struct fw_tlstx_data_wr *)req; + immd_len = (tls_immd ? dlen : 0); + req_wr->op_to_immdlen = + htonl(FW_WR_OP_V(FW_TLSTX_DATA_WR) | + FW_TLSTX_DATA_WR_COMPL_V(1) | + FW_TLSTX_DATA_WR_IMMDLEN_V(immd_len)); + req_wr->flowid_len16 = htonl(FW_TLSTX_DATA_WR_FLOWID_V(csk->tid) | + FW_TLSTX_DATA_WR_LEN16_V(credits)); + wr_ulp_mode_force = TX_ULP_MODE_V(ULP_MODE_TLS); + + if (is_sg_request(skb)) + wr_ulp_mode_force |= FW_OFLD_TX_DATA_WR_ALIGNPLD_F | + ((tcp_sk(sk)->nonagle & TCP_NAGLE_OFF) ? 0 : + FW_OFLD_TX_DATA_WR_SHOVE_F); + + req_wr->lsodisable_to_flags = + htonl(TX_ULP_MODE_V(ULP_MODE_TLS) | + FW_OFLD_TX_DATA_WR_URGENT_V(skb_urgent(skb)) | + T6_TX_FORCE_F | wr_ulp_mode_force | + TX_SHOVE_V((!csk_flag(sk, CSK_TX_MORE_DATA)) && + skb_queue_empty(&csk->txq))); + + req_wr->ctxloc_to_exp = + htonl(FW_TLSTX_DATA_WR_NUMIVS_V(pdus) | + FW_TLSTX_DATA_WR_EXP_V(expn) | + FW_TLSTX_DATA_WR_CTXLOC_V(CHTLS_KEY_CONTEXT_DDR) | + FW_TLSTX_DATA_WR_IVDSGL_V(!iv_imm) | + FW_TLSTX_DATA_WR_KEYSIZE_V(hws->keylen >> 4)); + + /* Fill in the length */ + req_wr->plen = htonl(len); + req_wr->mfs = htons(hws->mfs); + req_wr->adjustedplen_pkd = + htons(FW_TLSTX_DATA_WR_ADJUSTEDPLEN_V(hws->adjustlen)); + req_wr->expinplenmax_pkd = + htons(FW_TLSTX_DATA_WR_EXPINPLENMAX_V(hws->expansion)); + req_wr->pdusinplenmax_pkd = + FW_TLSTX_DATA_WR_PDUSINPLENMAX_V(hws->pdus); + req_wr->r10 = 0; + + data_type = tls_content_type(ULP_SKB_CB(skb)->ulp.tls.type); + req_cpl->op_to_seg_len = htonl(CPL_TX_TLS_SFO_OPCODE_V(CPL_TX_TLS_SFO) | + CPL_TX_TLS_SFO_DATA_TYPE_V(data_type) | + CPL_TX_TLS_SFO_CPL_LEN_V(2) | + CPL_TX_TLS_SFO_SEG_LEN_V(dlen)); + req_cpl->pld_len = htonl(len - expn); + + req_cpl->type_protover = htonl(CPL_TX_TLS_SFO_TYPE_V + ((data_type == CPL_TX_TLS_SFO_TYPE_HEARTBEAT) ? 
+ TLS_HDR_TYPE_HEARTBEAT : 0) | + CPL_TX_TLS_SFO_PROTOVER_V(0)); + + /* create the s-command */ + req_cpl->r1_lo = 0; + req_cpl->seqno_numivs = cpu_to_be32(hws->scmd.seqno_numivs); + req_cpl->ivgen_hdrlen = cpu_to_be32(hws->scmd.ivgen_hdrlen); + req_cpl->scmd1 = cpu_to_be64(tls_sequence_number(hws)); +} + +/* + * Calculate the TLS data expansion size + */ +static int chtls_expansion_size(struct sock *sk, int data_len, + int fullpdu, + unsigned short *pducnt) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct chtls_hws *hws = &csk->tlshws; + struct tls_scmd *scmd = &hws->scmd; + int hdrlen = TLS_HEADER_LENGTH; + int expnsize = 0, frcnt = 0, fraglast = 0, fragsize = 0; + int expppdu = 0; + + do { + fragsize = hws->mfs; + + if (SCMD_CIPH_MODE_G(scmd->seqno_numivs) == + SCMD_CIPH_MODE_AES_GCM) { + frcnt = (data_len / fragsize); + expppdu = GCM_TAG_SIZE + AEAD_EXPLICIT_DATA_SIZE + + hdrlen; + expnsize = frcnt * expppdu; + + if (fullpdu) { + *pducnt = data_len / (expppdu + fragsize); + + if (*pducnt > 32) + *pducnt = 32; + else if (!*pducnt) + *pducnt = 1; + expnsize = (*pducnt) * expppdu; + break; + } + fraglast = data_len % fragsize; + if (fraglast > 0) { + frcnt += 1; + expnsize += expppdu; + } + break; + } + pr_info("unsupported cipher\n"); + expnsize = 0; + } while (0); + + return expnsize; +} + +/* WR with IV, KEY and CPL SFO added */ +static void make_tlstx_data_wr(struct sock *sk, struct sk_buff *skb, + int tls_tx_imm, int tls_len, u32 credits) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct chtls_hws *hws = &csk->tlshws; + int pdus = ceil(tls_len, hws->mfs); + unsigned short pdus_per_ulp = 0; + int expn_sz = 0; + + expn_sz = chtls_expansion_size(sk, tls_len, 0, NULL); + if (!hws->compute) { + hws->expansion = chtls_expansion_size(sk, + hws->fcplenmax, + 1, &pdus_per_ulp); + hws->pdus = pdus_per_ulp; + hws->adjustlen = hws->pdus * + ((hws->expansion / hws->pdus) + hws->mfs); + hws->compute = 1; + } + tls_copy_ivs(sk, skb); + tls_copy_tx_key(sk, skb); + tls_tx_data_wr(sk, skb, tls_len, tls_tx_imm, credits, expn_sz, pdus); + hws->tx_seq_no += (pdus - 1); +} + +static void make_tx_data_wr(struct sock *sk, struct sk_buff *skb, + unsigned int immdlen, int len, + u32 credits, u32 compl) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + unsigned int opcode = FW_OFLD_TX_DATA_WR; + struct fw_ofld_tx_data_wr *req; + unsigned int wr_ulp_mode_force; + + req = (struct fw_ofld_tx_data_wr *)__skb_push(skb, sizeof(*req)); + req->op_to_immdlen = htonl(WR_OP_V(opcode) | + FW_WR_COMPL_V(compl) | + FW_WR_IMMDLEN_V(immdlen)); + req->flowid_len16 = htonl(FW_WR_FLOWID_V(csk->tid) | + FW_WR_LEN16_V(credits)); + + wr_ulp_mode_force = TX_ULP_MODE_V(csk->ulp_mode); + if (is_sg_request(skb)) + wr_ulp_mode_force |= FW_OFLD_TX_DATA_WR_ALIGNPLD_F | + ((tcp_sk(sk)->nonagle & TCP_NAGLE_OFF) ? 
0 : + FW_OFLD_TX_DATA_WR_SHOVE_F); + + req->tunnel_to_proxy = htonl(wr_ulp_mode_force | + FW_OFLD_TX_DATA_WR_URGENT_V(skb_urgent(skb)) | + FW_OFLD_TX_DATA_WR_SHOVE_V((!csk_flag + (sk, CSK_TX_MORE_DATA)) && + skb_queue_empty(&csk->txq))); + req->plen = htonl(len); +} + +static int chtls_wr_size(struct chtls_sock *csk, const struct sk_buff *skb, + bool size) +{ + int wr_size; + + wr_size = TLS_WR_CPL_LEN; + wr_size += key_size(csk->sk); + wr_size += ivs_size(csk->sk, skb); + + if (size) + return wr_size; + + /* frags counted for IV dsgl */ + if (!skb_ulp_tls_skb_iv(skb)) + skb_shinfo(skb)->nr_frags++; + + return wr_size; +} + +static bool is_ofld_imm(struct chtls_sock *csk, + const struct sk_buff *skb) +{ + int length = skb->len; + + if (skb->peeked || skb->len > MAX_IMM_ULPTX_WR_LEN) + return false; + + if (likely(ULP_SKB_CB(skb)->flags & ULPCB_FLAG_NEED_HDR)) { + /* Check TLS header len for Immediate */ + if (csk->ulp_mode == ULP_MODE_TLS && + is_tls_skb(csk, skb)) + length += chtls_wr_size(csk, skb, true); + else + length += sizeof(struct fw_ofld_tx_data_wr); + + return length <= MAX_IMM_OFLD_TX_DATA_WR_LEN; + } + return true; +} + +static unsigned int calc_tx_flits(const struct sk_buff *skb, + unsigned int immdlen) +{ + unsigned int flits, cnt; + + flits = immdlen / 8; /* headers */ + cnt = skb_shinfo(skb)->nr_frags; + if (skb_tail_pointer(skb) != skb_transport_header(skb)) + cnt++; + return flits + sgl_len(cnt); +} + +static void arp_failure_discard(void *handle, struct sk_buff *skb) +{ + kfree_skb(skb); +} + +int chtls_push_frames(struct chtls_sock *csk, int comp) +{ + struct sock *sk = csk->sk; + struct tcp_sock *tp = tcp_sk(sk); + struct chtls_hws *hws = &csk->tlshws; + struct sk_buff *skb; + int total_size = 0; + int wr_size = sizeof(struct fw_ofld_tx_data_wr); + + if (unlikely(sk_in_state(sk, TCPF_SYN_SENT | TCPF_CLOSE))) + return 0; + + if (unlikely(csk_flag(sk, CSK_ABORT_SHUTDOWN))) + return 0; + + while (csk->wr_credits && (skb = skb_peek(&csk->txq)) && + (!(ULP_SKB_CB(skb)->flags & ULPCB_FLAG_HOLD) || + skb_queue_len(&csk->txq) > 1)) { + unsigned int immdlen; + unsigned int credits_needed; + int len = skb->len; /* length [ulp bytes] inserted by hw */ + int tls_len = skb->len;/* TLS data len before IV/key */ + unsigned int credit_len = skb->len; + unsigned int completion = 0; + int flowclen16 = 0; + int tls_tx_imm = 0; + + immdlen = skb->len; + if (!is_ofld_imm(csk, skb)) { + immdlen = skb_transport_offset(skb); + if (is_tls_skb(csk, skb)) + wr_size = chtls_wr_size(csk, skb, false); + credit_len = 8 * calc_tx_flits(skb, immdlen); + } else { + if (is_tls_skb(csk, skb)) { + wr_size = chtls_wr_size(csk, skb, false); + tls_tx_imm = 1; + } + } + if (likely(ULP_SKB_CB(skb)->flags & ULPCB_FLAG_NEED_HDR)) + credit_len += wr_size; + credits_needed = DIV_ROUND_UP(credit_len, 16); + if (!csk_flag_nochk(csk, CSK_TX_DATA_SENT)) { + flowclen16 = send_tx_flowc_wr(sk, 1, tp->snd_nxt, + tp->rcv_nxt); + if (flowclen16 <= 0) + break; + csk->wr_credits -= flowclen16; + csk->wr_unacked += flowclen16; + csk->wr_nondata += flowclen16; + csk_set_flag(csk, CSK_TX_DATA_SENT); + } + + if (csk->wr_credits < credits_needed) { + if (is_tls_skb(csk, skb) && + !skb_ulp_tls_skb_iv(skb)) + skb_shinfo(skb)->nr_frags--; + break; + } + + __skb_unlink(skb, &csk->txq); + set_queue(skb, (csk->txq_idx << 1) | CPL_PRIORITY_DATA, sk); + if (is_tls_hw(csk)) + hws->txqid = (skb->queue_mapping >> 1); + skb->csum = (__force __wsum)(credits_needed + csk->wr_nondata); + csk->wr_credits -= credits_needed; + csk->wr_unacked += 
credits_needed; + csk->wr_nondata = 0; + enqueue_wr(csk, skb); + + if (likely(ULP_SKB_CB(skb)->flags & ULPCB_FLAG_NEED_HDR)) { + if ((comp && csk->wr_unacked == credits_needed) || + (ULP_SKB_CB(skb)->flags & ULPCB_FLAG_COMPL) || + csk->wr_unacked >= csk->wr_max_credits / 2) { + completion = 1; + csk->wr_unacked = 0; + } + if (is_tls_skb(csk, skb)) + make_tlstx_data_wr(sk, skb, tls_tx_imm, + tls_len, credits_needed); + else + make_tx_data_wr(sk, skb, immdlen, len, + credits_needed, completion); + tp->snd_nxt += len; + tp->lsndtime = tcp_time_stamp(tp); + if (completion) + ULP_SKB_CB(skb)->flags &= ~ULPCB_FLAG_NEED_HDR; + } else { + struct cpl_close_con_req *req = cplhdr(skb); + unsigned int cmd = CPL_OPCODE_G(ntohl + (OPCODE_TID(req))); + + if (cmd == CPL_CLOSE_CON_REQ) + csk_set_flag(csk, + CSK_CLOSE_CON_REQUESTED); + + if ((ULP_SKB_CB(skb)->flags & ULPCB_FLAG_COMPL) && + (csk->wr_unacked >= csk->wr_max_credits / 2)) { + req->wr.wr_hi |= htonl(FW_WR_COMPL_F); + csk->wr_unacked = 0; + } + } + total_size += skb->truesize; + if (ULP_SKB_CB(skb)->flags & ULPCB_FLAG_BARRIER) + csk_set_flag(csk, CSK_TX_WAIT_IDLE); + t4_set_arp_err_handler(skb, NULL, arp_failure_discard); + cxgb4_l2t_send(csk->egress_dev, skb, csk->l2t_entry); + } + sk->sk_wmem_queued -= total_size; + return total_size; +} + +static void mark_urg(struct tcp_sock *tp, int flags, + struct sk_buff *skb) +{ + if (unlikely(flags & MSG_OOB)) { + tp->snd_up = tp->write_seq; + ULP_SKB_CB(skb)->flags = ULPCB_FLAG_URG | + ULPCB_FLAG_BARRIER | + ULPCB_FLAG_NO_APPEND | + ULPCB_FLAG_NEED_HDR; + } +} + +/* + * Returns true if a connection should send more data to the TOE ASAP. + */ +static int should_push(struct sock *sk) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct chtls_dev *cdev = csk->cdev; + struct tcp_sock *tp = tcp_sk(sk); + + /* + * If we've released our offload resources there's nothing to do ... + */ + if (!cdev) + return 0; + + /* + * If there aren't any work requests in flight, or there isn't enough + * data in flight, or Nagle is off then send the current TX_DATA + * otherwise hold it and wait to accumulate more data. + */ + return csk->wr_credits == csk->wr_max_credits || + (tp->nonagle & TCP_NAGLE_OFF); +} + +/* + * Returns true if a TCP socket is corked. + */ +static int corked(const struct tcp_sock *tp, int flags) +{ + return (flags & MSG_MORE) | (tp->nonagle & TCP_NAGLE_CORK); +} + +/* + * Returns true if a send should try to push new data. + */ +static int send_should_push(struct sock *sk, int flags) +{ + return should_push(sk) && !corked(tcp_sk(sk), flags); +} + +void chtls_tcp_push(struct sock *sk, int flags) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + int qlen = skb_queue_len(&csk->txq); + + if (likely(qlen)) { + struct tcp_sock *tp = tcp_sk(sk); + struct sk_buff *skb = skb_peek_tail(&csk->txq); + + mark_urg(tp, flags, skb); + + if (!(ULP_SKB_CB(skb)->flags & ULPCB_FLAG_NO_APPEND) && + corked(tp, flags)) { + ULP_SKB_CB(skb)->flags |= ULPCB_FLAG_HOLD; + return; + } + + ULP_SKB_CB(skb)->flags &= ~ULPCB_FLAG_HOLD; + if (qlen == 1 && + ((ULP_SKB_CB(skb)->flags & ULPCB_FLAG_NO_APPEND) || + should_push(sk))) + chtls_push_frames(csk, 1); + } +} + +/* + * Calculate the size for a new send sk_buff. It's maximum size so we can + * pack lots of data into it, unless we plan to send it immediately, in which + * case we size it more tightly. 
+ * + * Note: we don't bother compensating for MSS < PAGE_SIZE because it doesn't + * arise in normal cases and when it does we are just wasting memory. + */ +static int select_size(struct sock *sk, int io_len, + int flags, int len) +{ + const int pgbreak = SKB_MAX_HEAD(len); + + /* + * If the data wouldn't fit in the main body anyway, put only the + * header in the main body so it can use immediate data and place all + * the payload in page fragments. + */ + if (io_len > pgbreak) + return 0; + + /* + * If we will be accumulating payload get a large main body. + */ + if (!send_should_push(sk, flags)) + return pgbreak; + + return io_len; +} + +void skb_entail(struct sock *sk, struct sk_buff *skb, int flags) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct tcp_sock *tp = tcp_sk(sk); + + ULP_SKB_CB(skb)->seq = tp->write_seq; + ULP_SKB_CB(skb)->flags = flags; + __skb_queue_tail(&csk->txq, skb); + sk->sk_wmem_queued += skb->truesize; + + if (TCP_PAGE(sk) && TCP_OFF(sk)) { + put_page(TCP_PAGE(sk)); + TCP_PAGE(sk) = NULL; + TCP_OFF(sk) = 0; + } +} + +static struct sk_buff *get_tx_skb(struct sock *sk, int size) +{ + struct sk_buff *skb; + + skb = alloc_skb(size + TX_HEADER_LEN, sk->sk_allocation); + if (likely(skb)) { + skb_reserve(skb, TX_HEADER_LEN); + skb_entail(sk, skb, ULPCB_FLAG_NEED_HDR); + skb_reset_transport_header(skb); + } + return skb; +} + +static struct sk_buff *get_record_skb(struct sock *sk, int size, bool zcopy) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct sk_buff *skb; + + skb = alloc_skb(((zcopy ? 0 : size) + TX_TLSHDR_LEN + + key_size(sk) + max_ivs_size(sk, size)), + sk->sk_allocation); + if (likely(skb)) { + skb_reserve(skb, (TX_TLSHDR_LEN + + key_size(sk) + max_ivs_size(sk, size))); + skb_entail(sk, skb, ULPCB_FLAG_NEED_HDR); + skb_reset_transport_header(skb); + ULP_SKB_CB(skb)->ulp.tls.ofld = 1; + ULP_SKB_CB(skb)->ulp.tls.type = csk->tlshws.type; + } + return skb; +} + +static void tx_skb_finalize(struct sk_buff *skb) +{ + struct ulp_skb_cb *cb = ULP_SKB_CB(skb); + + if (!(cb->flags & ULPCB_FLAG_NO_HDR)) + cb->flags = ULPCB_FLAG_NEED_HDR; + cb->flags |= ULPCB_FLAG_NO_APPEND; +} + +static void push_frames_if_head(struct sock *sk) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + + if (skb_queue_len(&csk->txq) == 1) + chtls_push_frames(csk, 1); +} + +static int chtls_skb_copy_to_page_nocache(struct sock *sk, + struct iov_iter *from, + struct sk_buff *skb, + struct page *page, + int off, int copy) +{ + int err; + + err = skb_do_copy_data_nocache(sk, skb, from, page_address(page) + + off, copy, skb->len); + if (err) + return err; + + skb->len += copy; + skb->data_len += copy; + skb->truesize += copy; + sk->sk_wmem_queued += copy; + return 0; +} + +/* Read TLS header to find content type and data length */ +static u16 tls_header_read(struct tls_hdr *thdr, struct iov_iter *from) +{ + if (copy_from_iter(thdr, sizeof(*thdr), from) != sizeof(*thdr)) + return -EFAULT; + return cpu_to_be16(thdr->length); +} + +int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct tcp_sock *tp = tcp_sk(sk); + struct chtls_dev *cdev = csk->cdev; + struct sk_buff *skb; + int mss, flags, err; + int copied = 0; + int hdrlen = 0; + int recordsz = 0; + long timeo; + + lock_sock(sk); + flags = msg->msg_flags; + timeo = sock_sndtimeo(sk, flags & MSG_DONTWAIT); + + if (!sk_in_state(sk, TCPF_ESTABLISHED | TCPF_CLOSE_WAIT)) { + err = sk_stream_wait_connect(sk, 
&timeo); + if (err) + goto out_err; + } + + if (sk->sk_prot->sendmsg != chtls_sendmsg) { + release_sock(sk); + if (sk->sk_prot->sendmsg) + return sk->sk_prot->sendmsg(sk, msg, size); + else + return sk->sk_socket->ops->sendmsg(sk->sk_socket, + msg, size); + } + + sk_clear_bit(SOCKWQ_ASYNC_NOSPACE, sk); + err = -EPIPE; + if (sk->sk_err || (sk->sk_shutdown & SEND_SHUTDOWN)) + goto out_err; + + mss = csk->mss; + csk_set_flag(csk, CSK_TX_MORE_DATA); + + while (msg_data_left(msg)) { + int copy = 0; + + skb = skb_peek_tail(&csk->txq); + if (skb) { + copy = mss - skb->len; + skb->ip_summed = CHECKSUM_UNNECESSARY; + } + + if (is_tls_tx(csk) && !csk->tlshws.txleft) { + struct tls_hdr hdr; + + recordsz = tls_header_read(&hdr, &msg->msg_iter); + size -= TLS_HEADER_LENGTH; + hdrlen += TLS_HEADER_LENGTH; + csk->tlshws.txleft = recordsz; + csk->tlshws.type = hdr.type; + if (skb) + ULP_SKB_CB(skb)->ulp.tls.type = hdr.type; + } + + if (!skb || (ULP_SKB_CB(skb)->flags & ULPCB_FLAG_NO_APPEND) || + copy <= 0) { +new_buf: + if (skb) { + tx_skb_finalize(skb); + push_frames_if_head(sk); + } + + if (is_tls_tx(csk)) { + skb = get_record_skb(sk, + select_size(sk, + recordsz, + flags, + TX_TLSHDR_LEN), + false); + } else { + skb = get_tx_skb(sk, + select_size(sk, size, flags, + TX_HEADER_LEN)); + } + if (unlikely(!skb)) + goto wait_for_memory; + + skb->ip_summed = CHECKSUM_UNNECESSARY; + copy = mss; + } + if (copy > size) + copy = size; + + if (skb_tailroom(skb) > 0) { + copy = min(copy, skb_tailroom(skb)); + if (is_tls_tx(csk)) + copy = min_t(int, copy, csk->tlshws.txleft); + err = skb_add_data_nocache(sk, skb, + &msg->msg_iter, copy); + if (err) + goto do_fault; + } else { + bool merge; + int off = TCP_OFF(sk); + int i = skb_shinfo(skb)->nr_frags; + struct page *page = TCP_PAGE(sk); + int pg_size = PAGE_SIZE; + + if (page) + pg_size <<= compound_order(page); + + if (off < pg_size && + skb_can_coalesce(skb, i, page, off)) { + merge = 1; + goto copy; + } + merge = 0; + if (i == (is_tls_tx(csk) ? (MAX_SKB_FRAGS - 1) : + MAX_SKB_FRAGS)) + goto new_buf; + + if (page && off == pg_size) { + put_page(page); + TCP_PAGE(sk) = page = NULL; + pg_size = PAGE_SIZE; + } + + if (!page) { + gfp_t gfp = sk->sk_allocation; + int order = cdev->send_page_order; + + if (order) { + page = alloc_pages(gfp | __GFP_COMP | + __GFP_NOWARN | + __GFP_NORETRY, + order); + if (page) + pg_size <<= + compound_order(page); + } + if (!page) { + page = alloc_page(gfp); + pg_size = PAGE_SIZE; + } + if (!page) + goto wait_for_memory; + off = 0; + } +copy: + if (copy > pg_size - off) + copy = pg_size - off; + if (is_tls_tx(csk)) + copy = min_t(int, copy, csk->tlshws.txleft); + + err = chtls_skb_copy_to_page_nocache(sk, &msg->msg_iter, + skb, page, + off, copy); + if (unlikely(err)) { + if (!TCP_PAGE(sk)) { + TCP_PAGE(sk) = page; + TCP_OFF(sk) = 0; + } + goto do_fault; + } + /* Update the skb. 
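+ * Either extend the last page fragment with the copied bytes or attach the page as a new fragment, keeping a reference to the page while space remains.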
*/ + if (merge) { + skb_shinfo(skb)->frags[i - 1].size += copy; + } else { + skb_fill_page_desc(skb, i, page, off, copy); + if (off + copy < pg_size) { + /* space left keep page */ + get_page(page); + TCP_PAGE(sk) = page; + } else { + TCP_PAGE(sk) = NULL; + } + } + TCP_OFF(sk) = off + copy; + } + if (unlikely(skb->len == mss)) + tx_skb_finalize(skb); + tp->write_seq += copy; + copied += copy; + size -= copy; + + if (is_tls_tx(csk)) + csk->tlshws.txleft -= copy; + + if (corked(tp, flags) && + (sk_stream_wspace(sk) < sk_stream_min_wspace(sk))) + ULP_SKB_CB(skb)->flags |= ULPCB_FLAG_NO_APPEND; + + if (size == 0) + goto out; + + if (ULP_SKB_CB(skb)->flags & ULPCB_FLAG_NO_APPEND) + push_frames_if_head(sk); + continue; +wait_for_memory: + err = sk_stream_wait_memory(sk, &timeo); + if (err) + goto do_error; + } +out: + csk_reset_flag(csk, CSK_TX_MORE_DATA); + if (copied) + chtls_tcp_push(sk, flags); +done: + release_sock(sk); + return copied + hdrlen; +do_fault: + if (!skb->len) { + __skb_unlink(skb, &csk->txq); + sk->sk_wmem_queued -= skb->truesize; + __kfree_skb(skb); + } +do_error: + if (copied) + goto out; +out_err: + if (sock_flag(sk, SOCK_INLINE)) + csk_reset_flag(csk, CSK_TX_MORE_DATA); + copied = sk_stream_error(sk, flags, err); + goto done; +} + +int chtls_sendpage(struct sock *sk, struct page *page, + int offset, size_t size, int flags) +{ + struct tcp_sock *tp = tcp_sk(sk); + struct chtls_sock *csk; + int mss, err, copied = 0; + long timeo; + + if (sk->sk_prot->sendpage != chtls_sendpage) { + release_sock(sk); + if (sk->sk_prot->sendpage) + return sk->sk_prot->sendpage(sk, page, offset, + size, flags); + else + return sk->sk_socket->ops->sendpage(sk->sk_socket, + page, offset, + size, flags); + } + + csk = rcu_dereference_sk_user_data(sk); + timeo = sock_sndtimeo(sk, flags & MSG_DONTWAIT); + + err = sk_stream_wait_connect(sk, &timeo); + if (!sk_in_state(sk, TCPF_ESTABLISHED | TCPF_CLOSE_WAIT) && + err != 0) + goto out_err; + + mss = csk->mss; + csk_set_flag(csk, CSK_TX_MORE_DATA); + + while (size > 0) { + int copy, i; + struct sk_buff *skb = skb_peek_tail(&csk->txq); + + copy = mss - skb->len; + if (!skb || (ULP_SKB_CB(skb)->flags & ULPCB_FLAG_NO_APPEND) || + copy <= 0) { +new_buf: + + if (is_tls_tx(csk)) { + skb = get_record_skb(sk, + select_size(sk, size, + flags, + TX_TLSHDR_LEN), + true); + } else { + skb = get_tx_skb(sk, 0); + } + if (!skb) + goto do_error; + copy = mss; + } + if (copy > size) + copy = size; + + i = skb_shinfo(skb)->nr_frags; + if (skb_can_coalesce(skb, i, page, offset)) { + skb_shinfo(skb)->frags[i - 1].size += copy; + } else if (i < MAX_SKB_FRAGS) { + get_page(page); + skb_fill_page_desc(skb, i, page, offset, copy); + } else { + tx_skb_finalize(skb); + push_frames_if_head(sk); + goto new_buf; + } + + skb->len += copy; + if (skb->len == mss) + tx_skb_finalize(skb); + skb->data_len += copy; + skb->truesize += copy; + sk->sk_wmem_queued += copy; + tp->write_seq += copy; + copied += copy; + offset += copy; + size -= copy; + + if (corked(tp, flags) && + (sk_stream_wspace(sk) < sk_stream_min_wspace(sk))) + ULP_SKB_CB(skb)->flags |= ULPCB_FLAG_NO_APPEND; + + if (!size) + break; + + if (unlikely(ULP_SKB_CB(skb)->flags & ULPCB_FLAG_NO_APPEND)) + push_frames_if_head(sk); + continue; + + set_bit(SOCK_NOSPACE, &sk->sk_socket->flags); + } +out: + csk_reset_flag(csk, CSK_TX_MORE_DATA); + if (copied) + chtls_tcp_push(sk, flags); +done: + release_sock(sk); + return copied; + +do_error: + if (copied) + goto out; + +out_err: + if (sock_flag(sk, SOCK_INLINE)) + csk_reset_flag(csk, 
CSK_TX_MORE_DATA); + copied = sk_stream_error(sk, flags, err); + goto done; +} + +/* + * Returns true if a socket cannot accept new Rx data. + */ +static int sk_no_receive(const struct sock *sk) +{ + return (sk->sk_shutdown & RCV_SHUTDOWN); +} + +static void chtls_select_window(struct sock *sk) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct tcp_sock *tp = tcp_sk(sk); + unsigned int wnd = tp->rcv_wnd; + + wnd = max_t(unsigned int, wnd, tcp_full_space(sk)); + wnd = max_t(unsigned int, MIN_RCV_WND, wnd); + + if (wnd > MAX_RCV_WND) + wnd = MAX_RCV_WND; + +/* + * Check if we need to grow the receive window in response to an increase in + * the socket's receive buffer size. Some applications increase the buffer + * size dynamically and rely on the window to grow accordingly. + */ + + if (wnd > tp->rcv_wnd) { + tp->rcv_wup -= wnd - tp->rcv_wnd; + tp->rcv_wnd = wnd; + /* Mark the receive window as updated */ + csk_reset_flag(csk, CSK_UPDATE_RCV_WND); + } +} + +static u32 send_rx_credits(struct chtls_sock *csk, u32 credits) +{ + struct cpl_rx_data_ack *req; + struct sk_buff *skb; + + skb = alloc_skb(sizeof(*req), GFP_ATOMIC); + if (!skb) + return 0; + __skb_put(skb, sizeof(*req)); + req = (struct cpl_rx_data_ack *)skb->head; + + set_wr_txq(skb, CPL_PRIORITY_ACK, csk->port_id); + INIT_TP_WR(req, csk->tid); + OPCODE_TID(req) = cpu_to_be32(MK_OPCODE_TID(CPL_RX_DATA_ACK, + csk->tid)); + req->credit_dack = cpu_to_be32(RX_CREDITS_V(credits) | + RX_FORCE_ACK_F); + cxgb4_ofld_send(csk->cdev->ports[csk->port_id], skb); + return credits; +} + +#define CREDIT_RETURN_STATE (TCPF_ESTABLISHED | \ + TCPF_FIN_WAIT1 | \ + TCPF_FIN_WAIT2) +/* + * Called after some received data has been read. It returns RX credits + * to the HW for the amount of data processed. + */ +static void chtls_cleanup_rbuf(struct sock *sk, int copied) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct tcp_sock *tp; + int must_send; + u32 credits; + u32 thres = 15 * 1024; + + if (!sk_in_state(sk, CREDIT_RETURN_STATE)) + return; + + chtls_select_window(sk); + tp = tcp_sk(sk); + credits = tp->copied_seq - tp->rcv_wup; + if (unlikely(!credits)) + return; + + /* + * For coalescing to work effectively ensure the receive window has + * at least 16KB left. + */ + must_send = credits + 16384 >= tp->rcv_wnd; + + if (must_send || credits >= thres) + tp->rcv_wup += send_rx_credits(csk, credits); +} + +static int chtls_pt_recvmsg(struct sock *sk, + struct msghdr *msg, size_t len, + int nonblock, int flags, int *addr_len) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct net_device *dev = csk->egress_dev; + struct adapter *adap = netdev2adap(dev); + struct tcp_sock *tp = tcp_sk(sk); + struct chtls_hws *hws = &csk->tlshws; + int copied = 0, buffers_freed = 0; + int target; + int request; + long timeo; + unsigned long avail; + + timeo = sock_rcvtimeo(sk, nonblock); + target = sock_rcvlowat(sk, flags & MSG_WAITALL, len); + request = len; + + if (unlikely(csk_flag(sk, CSK_UPDATE_RCV_WND))) + chtls_cleanup_rbuf(sk, copied); + + do { + struct sk_buff *skb; + u32 offset = 0; + + if (unlikely(tp->urg_data && + tp->urg_seq == tp->copied_seq)) { + if (copied) + break; + if (signal_pending(current)) { + copied = timeo ? 
sock_intr_errno(timeo) : + -EAGAIN; + break; + } + } + skb = skb_peek(&sk->sk_receive_queue); + if (skb) + goto found_ok_skb; + if (csk->wr_credits && + skb_queue_len(&csk->txq) && + chtls_push_frames(csk, csk->wr_credits == + csk->wr_max_credits)) + sk->sk_write_space(sk); + + if (copied >= target && !sk->sk_backlog.tail) + break; + + if (copied) { + if (sk->sk_err || sk->sk_state == TCP_CLOSE || + sk_no_receive(sk) || + signal_pending(current)) + break; + + if (!timeo) + break; + } else { + if (sock_flag(sk, SOCK_DONE)) + break; + if (sk->sk_err) { + copied = sock_error(sk); + break; + } + if (sk_no_receive(sk)) + break; + if (sk->sk_state == TCP_CLOSE) { + copied = -ENOTCONN; + break; + } + if (!timeo) { + copied = -EAGAIN; + break; + } + if (signal_pending(current)) { + copied = sock_intr_errno(timeo); + break; + } + } + if (sk->sk_backlog.tail) { + release_sock(sk); + lock_sock(sk); + chtls_cleanup_rbuf(sk, copied); + continue; + } + + if (copied >= target) + break; + chtls_cleanup_rbuf(sk, copied); + sk_wait_data(sk, &timeo, NULL); + continue; +found_ok_skb: + if (!skb->len) { + skb_dst_set(skb, NULL); + __skb_unlink(skb, &sk->sk_receive_queue); + kfree_skb(skb); + + if (!copied && !timeo) { + copied = -EAGAIN; + break; + } + + if (copied < target) { + release_sock(sk); + lock_sock(sk); + continue; + } + break; + } + offset = hws->copied_seq; + avail = skb->len - offset; + if (len < avail) + avail = len; + + if (unlikely(tp->urg_data)) { + u32 urg_offset = tp->urg_seq - tp->copied_seq; + + if (urg_offset < avail) { + if (urg_offset) { + avail = urg_offset; + } else if (!sock_flag(sk, SOCK_URGINLINE)) { + /* First byte is urgent, skip */ + tp->copied_seq++; + offset++; + avail--; + if (!avail) + goto skip_copy; + } + } + } + if (hws->rstate == TLS_RCV_ST_READ_BODY) { + if (skb_copy_datagram_msg(skb, offset, + msg, avail)) { + if (!copied) { + copied = -EFAULT; + break; + } + } + } else { + struct tlsrx_cmp_hdr *tls_hdr_pkt = + (struct tlsrx_cmp_hdr *)skb->data; + + if ((tls_hdr_pkt->res_to_mac_error & + TLSRX_HDR_PKT_ERROR_M)) + tls_hdr_pkt->type = 0x7F; + + /* CMP pld len is for recv seq */ + hws->rcvpld = skb->hdr_len; + if (skb_copy_datagram_msg(skb, offset, msg, avail)) { + if (!copied) { + copied = -EFAULT; + break; + } + } + } + copied += avail; + len -= avail; + hws->copied_seq += avail; +skip_copy: + if (tp->urg_data && after(tp->copied_seq, tp->urg_seq)) + tp->urg_data = 0; + + if (hws->rstate == TLS_RCV_ST_READ_BODY && + (avail + offset) >= skb->len) { + if (likely(skb)) + chtls_free_skb(sk, skb); + buffers_freed++; + hws->rstate = TLS_RCV_ST_READ_HEADER; + atomic_inc(&adap->chcr_stats.tls_pdu_rx); + tp->copied_seq += hws->rcvpld; + hws->copied_seq = 0; + if (copied >= target && + !skb_peek(&sk->sk_receive_queue)) + break; + } else { + if (likely(skb)) { + if (ULP_SKB_CB(skb)->flags & + ULPCB_FLAG_TLS_ND) + hws->rstate = + TLS_RCV_ST_READ_HEADER; + else + hws->rstate = + TLS_RCV_ST_READ_BODY; + chtls_free_skb(sk, skb); + } + buffers_freed++; + tp->copied_seq += avail; + hws->copied_seq = 0; + } + } while (len > 0); + + if (buffers_freed) + chtls_cleanup_rbuf(sk, copied); + release_sock(sk); + return copied; +} + +/* + * Peek at data in a socket's receive buffer. 
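+ * Used for MSG_PEEK: data is copied to the caller without advancing tp->copied_seq.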
+ */ +static int peekmsg(struct sock *sk, struct msghdr *msg, + size_t len, int nonblock, int flags) +{ + struct tcp_sock *tp = tcp_sk(sk); + struct sk_buff *skb; + u32 peek_seq, offset; + int copied = 0; + size_t avail; /* amount of available data in current skb */ + long timeo; + + lock_sock(sk); + timeo = sock_rcvtimeo(sk, nonblock); + peek_seq = tp->copied_seq; + + do { + if (unlikely(tp->urg_data && tp->urg_seq == peek_seq)) { + if (copied) + break; + if (signal_pending(current)) { + copied = timeo ? sock_intr_errno(timeo) : + -EAGAIN; + break; + } + } + + skb_queue_walk(&sk->sk_receive_queue, skb) { + offset = peek_seq - ULP_SKB_CB(skb)->seq; + if (offset < skb->len) + goto found_ok_skb; + } + + /* empty receive queue */ + if (copied) + break; + if (sock_flag(sk, SOCK_DONE)) + break; + if (sk->sk_err) { + copied = sock_error(sk); + break; + } + if (sk_no_receive(sk)) + break; + if (sk->sk_state == TCP_CLOSE) { + copied = -ENOTCONN; + break; + } + if (!timeo) { + copied = -EAGAIN; + break; + } + if (signal_pending(current)) { + copied = sock_intr_errno(timeo); + break; + } + + if (sk->sk_backlog.tail) { + /* Do not sleep, just process backlog. */ + release_sock(sk); + lock_sock(sk); + } else { + sk_wait_data(sk, &timeo, NULL); + } + + if (unlikely(peek_seq != tp->copied_seq)) { + if (net_ratelimit()) + pr_info("TCP(%s:%d), race in MSG_PEEK.\n", + current->comm, current->pid); + peek_seq = tp->copied_seq; + } + continue; + +found_ok_skb: + avail = skb->len - offset; + if (len < avail) + avail = len; + + /* + * Do we have urgent data here? We need to skip over the + * urgent byte. + */ + if (unlikely(tp->urg_data)) { + u32 urg_offset = tp->urg_seq - peek_seq; + + if (urg_offset < avail) { + /* + * The amount of data we are preparing to copy + * contains urgent data. + */ + if (!urg_offset) { /* First byte is urgent */ + if (!sock_flag(sk, SOCK_URGINLINE)) { + peek_seq++; + offset++; + avail--; + } + if (!avail) + continue; + } else { + /* stop short of the urgent data */ + avail = urg_offset; + } + } + } + + /* + * If MSG_TRUNC is specified the data is discarded. 
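+ * The skipped bytes still count towards the value returned to the caller.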
+ */ + if (likely(!(flags & MSG_TRUNC))) + if (skb_copy_datagram_msg(skb, offset, msg, len)) { + if (!copied) + copied = -EFAULT; + break; + } + peek_seq += avail; + copied += avail; + len -= avail; + } while (len > 0); + + release_sock(sk); + return copied; +} + +int chtls_recvmsg(struct sock *sk, struct msghdr *msg, + size_t len, int nonblock, + int flags, int *addr_len) +{ + struct tcp_sock *tp = tcp_sk(sk); + struct chtls_sock *csk; + struct chtls_hws *hws; + unsigned long avail; /* amount of available data in current skb */ + int buffers_freed = 0; + int copied = 0; + long timeo; + int target; /* Read at least this many bytes */ + int request; + + if (unlikely(flags & MSG_OOB)) + return tcp_prot.recvmsg(sk, msg, len, nonblock, flags, + addr_len); + + if (unlikely(flags & MSG_PEEK)) + return peekmsg(sk, msg, len, nonblock, flags); + + if (sk_can_busy_loop(sk) && + skb_queue_empty(&sk->sk_receive_queue) && + sk->sk_state == TCP_ESTABLISHED) + sk_busy_loop(sk, nonblock); + + lock_sock(sk); + + if (sk->sk_prot->recvmsg != chtls_recvmsg) { + release_sock(sk); + return sk->sk_prot->recvmsg(sk, msg, len, nonblock, + flags, addr_len); + } + + csk = rcu_dereference_sk_user_data(sk); + hws = &csk->tlshws; + + if (is_tls_rx(csk)) + return chtls_pt_recvmsg(sk, msg, len, nonblock, + flags, addr_len); + + timeo = sock_rcvtimeo(sk, nonblock); + target = sock_rcvlowat(sk, flags & MSG_WAITALL, len); + request = len; + + if (unlikely(csk_flag(sk, CSK_UPDATE_RCV_WND))) + chtls_cleanup_rbuf(sk, copied); + + do { + struct sk_buff *skb; + u32 offset; + + if (unlikely(tp->urg_data && tp->urg_seq == tp->copied_seq)) { + if (copied) + break; + if (signal_pending(current)) { + copied = timeo ? sock_intr_errno(timeo) : + -EAGAIN; + break; + } + } + + skb = skb_peek(&sk->sk_receive_queue); + if (skb) + goto found_ok_skb; + + if (csk->wr_credits && + skb_queue_len(&csk->txq) && + chtls_push_frames(csk, csk->wr_credits == + csk->wr_max_credits)) + sk->sk_write_space(sk); + + if (copied >= target && !sk->sk_backlog.tail) + break; + + if (copied) { + if (sk->sk_err || sk->sk_state == TCP_CLOSE || + sk_no_receive(sk) || + signal_pending(current)) + break; + } else { + if (sock_flag(sk, SOCK_DONE)) + break; + if (sk->sk_err) { + copied = sock_error(sk); + break; + } + if (sk_no_receive(sk)) + break; + if (sk->sk_state == TCP_CLOSE) { + copied = -ENOTCONN; + break; + } + if (!timeo) { + copied = -EAGAIN; + break; + } + if (signal_pending(current)) { + copied = sock_intr_errno(timeo); + break; + } + } + + if (sk->sk_backlog.tail) { + release_sock(sk); + lock_sock(sk); + chtls_cleanup_rbuf(sk, copied); + continue; + } + + if (copied >= target) + break; + chtls_cleanup_rbuf(sk, copied); + sk_wait_data(sk, &timeo, NULL); + continue; + +found_ok_skb: + if (!skb->len) { + chtls_kfree_skb(sk, skb); + if (!copied && !timeo) { + copied = -EAGAIN; + break; + } + + if (copied < target) + continue; + + break; + } + + offset = tp->copied_seq - ULP_SKB_CB(skb)->seq; + avail = skb->len - offset; + if (len < avail) + avail = len; + + if (unlikely(tp->urg_data)) { + u32 urg_offset = tp->urg_seq - tp->copied_seq; + + if (urg_offset < avail) { + if (urg_offset) { + avail = urg_offset; + } else if (!sock_flag(sk, SOCK_URGINLINE)) { + tp->copied_seq++; + offset++; + avail--; + if (!avail) + goto skip_copy; + } + } + } + + if (likely(!(flags & MSG_TRUNC))) { + if (skb_copy_datagram_msg(skb, offset, + msg, avail)) { + if (!copied) { + copied = -EFAULT; + break; + } + } + } + + tp->copied_seq += avail; + copied += avail; + len -= avail; + 
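+	/* Clear any consumed urgent-data marker and free the skb once it has been fully read. */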
+skip_copy: + if (tp->urg_data && after(tp->copied_seq, tp->urg_seq)) + tp->urg_data = 0; + + if (avail + offset >= skb->len) { + if (likely(skb)) + chtls_free_skb(sk, skb); + buffers_freed++; + + if (copied >= target && + !skb_peek(&sk->sk_receive_queue)) + break; + } + } while (len > 0); + + if (buffers_freed) + chtls_cleanup_rbuf(sk, copied); + + release_sock(sk); + return copied; +} From patchwork Thu Feb 22 17:52:07 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atul Gupta X-Patchwork-Id: 876788 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 3znMQC6Bnlz9ryG for ; Fri, 23 Feb 2018 04:52:51 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S933674AbeBVRws (ORCPT ); Thu, 22 Feb 2018 12:52:48 -0500 Received: from stargate.chelsio.com ([12.32.117.8]:41034 "EHLO stargate.chelsio.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S933670AbeBVRwq (ORCPT ); Thu, 22 Feb 2018 12:52:46 -0500 Received: from chumthang.asicdesigners.com (indra-ipmi.blr.asicdesigners.com [10.193.186.96] (may be forged)) by stargate.chelsio.com (8.13.8/8.13.8) with ESMTP id w1MHqA36014680; Thu, 22 Feb 2018 09:52:37 -0800 From: Atul Gupta To: davejwatson@fb.com, davem@davemloft.net, herbert@gondor.apana.org.au Cc: sd@queasysnail.net, linux-crypto@vger.kernel.org, netdev@vger.kernel.org, ganeshgr@chelsio.com Subject: [Crypto v7 11/12] chtls: Register chtls Inline TLS with net tls Date: Thu, 22 Feb 2018 23:22:07 +0530 Message-Id: <1519321928-22995-8-git-send-email-atul.gupta@chelsio.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1519321928-22995-1-git-send-email-atul.gupta@chelsio.com> References: <1519321928-22995-1-git-send-email-atul.gupta@chelsio.com> Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Register chtls as Inline TLS driver, chtls is ULD to cxgb4. Setsockopt to program (tx/rx) keys on chip. Support AES GCM of key size 128. Support both Inline Rx and Tx. Signed-off-by: Atul Gupta --- drivers/crypto/chelsio/chtls/chtls_main.c | 600 ++++++++++++++++++++++++++++++ include/uapi/linux/tls.h | 1 + 2 files changed, 601 insertions(+) create mode 100644 drivers/crypto/chelsio/chtls/chtls_main.c diff --git a/drivers/crypto/chelsio/chtls/chtls_main.c b/drivers/crypto/chelsio/chtls/chtls_main.c new file mode 100644 index 0000000..657c515 --- /dev/null +++ b/drivers/crypto/chelsio/chtls/chtls_main.c @@ -0,0 +1,600 @@ +/* + * Copyright (c) 2017 Chelsio Communications, Inc. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ * + * Written by: Atul Gupta (atul.gupta@chelsio.com) + */ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "chtls.h" +#include "chtls_cm.h" + +#define DRV_NAME "chtls" + +/* + * chtls device management + * maintains a list of the chtls devices + */ +static LIST_HEAD(cdev_list); +static DEFINE_MUTEX(cdev_mutex); +static DEFINE_MUTEX(cdev_list_lock); + +static struct proto chtls_cpl_prot; +static struct proto chtls_base_prot; +static DEFINE_MUTEX(notify_mutex); +static RAW_NOTIFIER_HEAD(listen_notify_list); +struct request_sock_ops chtls_rsk_ops; +static uint send_page_order = (14 - PAGE_SHIFT < 0) ? 0 : 14 - PAGE_SHIFT; + +static int register_listen_notifier(struct notifier_block *nb) +{ + int err; + + mutex_lock(¬ify_mutex); + err = raw_notifier_chain_register(&listen_notify_list, nb); + mutex_unlock(¬ify_mutex); + return err; +} + +static int unregister_listen_notifier(struct notifier_block *nb) +{ + int err; + + mutex_lock(¬ify_mutex); + err = raw_notifier_chain_unregister(&listen_notify_list, nb); + mutex_unlock(¬ify_mutex); + return err; +} + +static int listen_notify_handler(struct notifier_block *this, + unsigned long event, void *data) +{ + struct sock *sk = data; + struct chtls_dev *cdev; + int ret = NOTIFY_DONE; + + switch (event) { + case CHTLS_LISTEN_START: + case CHTLS_LISTEN_STOP: + mutex_lock(&cdev_list_lock); + list_for_each_entry(cdev, &cdev_list, list) { + if (event == CHTLS_LISTEN_START) + ret = chtls_listen_start(cdev, sk); + else + chtls_listen_stop(cdev, sk); + } + mutex_unlock(&cdev_list_lock); + break; + } + return ret; +} + +static struct notifier_block listen_notifier = { + .notifier_call = listen_notify_handler +}; + +static int listen_backlog_rcv(struct sock *sk, struct sk_buff *skb) +{ + if (likely(skb_transport_header(skb) != skb_network_header(skb))) + return tcp_v4_do_rcv(sk, skb); + BLOG_SKB_CB(skb)->backlog_rcv(sk, skb); + return 0; +} + +static int chtls_start_listen(struct sock *sk) +{ + int err; + + if (sk->sk_protocol != IPPROTO_TCP) + return -EPROTONOSUPPORT; + + if (sk->sk_family == PF_INET && + LOOPBACK(inet_sk(sk)->inet_rcv_saddr)) + return -EADDRNOTAVAIL; + + sk->sk_backlog_rcv = listen_backlog_rcv; + mutex_lock(¬ify_mutex); + err = raw_notifier_call_chain(&listen_notify_list, + CHTLS_LISTEN_START, sk); + mutex_unlock(¬ify_mutex); + return err; +} + +static int chtls_hash(struct sock *sk) +{ + int err; + + err = tcp_prot.hash(sk); + if (sk->sk_state == TCP_LISTEN) + err |= chtls_start_listen(sk); + + if (err) + tcp_prot.unhash(sk); + return err; +} + +static int chtls_stop_listen(struct sock *sk) +{ + if (sk->sk_protocol != IPPROTO_TCP) + return -EPROTONOSUPPORT; + + mutex_lock(¬ify_mutex); + raw_notifier_call_chain(&listen_notify_list, + CHTLS_LISTEN_STOP, sk); + mutex_unlock(¬ify_mutex); + return 0; +} + +static void chtls_unhash(struct sock *sk) +{ + if (sk->sk_state == TCP_LISTEN) + chtls_stop_listen(sk); + tcp_prot.unhash(sk); +} + +static int chtls_netdev(struct tls_device *dev, + struct net_device *netdev) +{ + struct chtls_dev *cdev = to_chtls_dev(dev); + int i; + + for (i = 0; i < cdev->lldi->nports; i++) + if (cdev->ports[i] == netdev) + return 1; + + return 0; +} + +static int chtls_inline_feature(struct tls_device *dev) +{ + struct chtls_dev *cdev = to_chtls_dev(dev); + struct net_device *netdev; + int i; + + for (i = 0; i < cdev->lldi->nports; i++) { + netdev = cdev->ports[i]; + if (netdev->features & NETIF_F_HW_TLS_INLINE) + return 1; + } + return 0; +} + 
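Note: the two callbacks above are the driver half of the tls_device contract. chtls_netdev() reports whether a given net_device is one of this adapter's ports, and chtls_inline_feature() reports whether any port advertises NETIF_F_HW_TLS_INLINE. A rough sketch of how a consumer on the net/tls side might use such callbacks to pick an inline-capable device follows; tls_find_inline_dev() and device_list are hypothetical names used only for illustration, not part of this series.

    /* Illustrative sketch, not part of the patch: walk registered
     * tls_device instances and return one whose callbacks accept the
     * given net_device.  "device_list" and tls_find_inline_dev() are
     * hypothetical.
     */
    static struct tls_device *tls_find_inline_dev(struct net_device *netdev)
    {
            struct tls_device *dev;

            list_for_each_entry(dev, &device_list, dev_list) {
                    /* e.g. chtls: port match plus NETIF_F_HW_TLS_INLINE */
                    if (dev->netdev(dev, netdev) && dev->feature(dev))
                            return dev;
            }
            return NULL;
    }
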
+static int chtls_create_hash(struct tls_device *dev, struct sock *sk) +{ + if (sk->sk_state == TCP_LISTEN) + return chtls_start_listen(sk); + return 0; +} + +static void chtls_destroy_hash(struct tls_device *dev, struct sock *sk) +{ + if (sk->sk_state == TCP_LISTEN) + chtls_stop_listen(sk); +} + +static void chtls_register_dev(struct chtls_dev *cdev) +{ + strlcpy(cdev->tlsdev.name, "chtls", TLS_DEVICE_NAME_MAX); + cdev->tlsdev.netdev = chtls_netdev; + cdev->tlsdev.feature = chtls_inline_feature; + cdev->tlsdev.hash = chtls_create_hash; + cdev->tlsdev.unhash = chtls_destroy_hash; + + tls_register_device(&cdev->tlsdev); +} + +static void chtls_unregister_dev(struct chtls_dev *cdev) +{ + tls_unregister_device(&cdev->tlsdev); +} + +static void process_deferq(struct work_struct *task_param) +{ + struct chtls_dev *cdev = container_of(task_param, + struct chtls_dev, deferq_task); + struct sk_buff *skb; + + spin_lock_bh(&cdev->deferq.lock); + while ((skb = __skb_dequeue(&cdev->deferq)) != NULL) { + spin_unlock_bh(&cdev->deferq.lock); + DEFERRED_SKB_CB(skb)->handler(cdev, skb); + spin_lock_bh(&cdev->deferq.lock); + } + spin_unlock_bh(&cdev->deferq.lock); +} + +static int chtls_get_skb(struct chtls_dev *cdev) +{ + cdev->askb = alloc_skb(sizeof(struct tcphdr), GFP_KERNEL); + if (!cdev->askb) + return 1; + + skb_put(cdev->askb, sizeof(struct tcphdr)); + skb_reset_transport_header(cdev->askb); + memset(cdev->askb->data, 0, cdev->askb->len); + return 0; +} + +static void *chtls_uld_add(const struct cxgb4_lld_info *info) +{ + struct cxgb4_lld_info *lldi; + struct chtls_dev *cdev; + int i; + + cdev = kzalloc(sizeof(*cdev) + info->nports * + (sizeof(struct net_device *)), GFP_KERNEL); + if (!cdev) + return NULL; + + lldi = kzalloc(sizeof(*lldi), GFP_KERNEL); + if (!lldi) { + kfree(cdev); + return NULL; + } + + if (chtls_get_skb(cdev)) { + kfree(lldi); + kfree(cdev); + return NULL; + } + *lldi = *info; + cdev->lldi = lldi; + cdev->pdev = lldi->pdev; + cdev->tids = lldi->tids; + cdev->ports = (struct net_device **)(cdev + 1); + cdev->ports = lldi->ports; + cdev->mtus = lldi->mtus; + cdev->tids = lldi->tids; + cdev->pfvf = FW_VIID_PFN_G(cxgb4_port_viid(lldi->ports[0])) + << FW_VIID_PFN_S; + + for (i = 0; i < (1 << RSPQ_HASH_BITS); i++) { + unsigned int size = 64 - sizeof(struct rsp_ctrl) - 8; + + cdev->rspq_skb_cache[i] = __alloc_skb(size, + gfp_any(), 0, + lldi->nodeid); + } + + idr_init(&cdev->aidr); + idr_init(&cdev->hwtid_idr); + INIT_WORK(&cdev->deferq_task, process_deferq); + spin_lock_init(&cdev->listen_lock); + spin_lock_init(&cdev->idr_lock); + spin_lock_init(&cdev->aidr_lock); + cdev->send_page_order = min_t(uint, get_order(32768), + send_page_order); + + if (lldi->vr->key.size) + chtls_init_kmap(cdev, lldi); + + mutex_lock(&cdev_mutex); + list_add_tail(&cdev->list, &cdev_list); + mutex_unlock(&cdev_mutex); + + return cdev; +} + +static void chtls_free_uld(struct chtls_dev *cdev) +{ + int i; + + mutex_lock(&cdev_mutex); + list_del(&cdev->list); + mutex_unlock(&cdev_mutex); + chtls_free_kmap(cdev); + idr_destroy(&cdev->hwtid_idr); + idr_destroy(&cdev->aidr); + for (i = 0; i < (1 << RSPQ_HASH_BITS); i++) { + kfree_skb(cdev->rspq_skb_cache[i]); + cdev->rspq_skb_cache[i] = NULL; + } + kfree(cdev->lldi); + if (cdev->askb) + kfree_skb(cdev->askb); + kfree(cdev); +} + +static void chtls_free_all_uld(void) +{ + struct chtls_dev *cdev, *tmp; + + list_for_each_entry_safe(cdev, tmp, &cdev_list, list) + chtls_free_uld(cdev); +} + +static int chtls_uld_state_change(void *handle, enum cxgb4_state new_state) +{ + 
struct chtls_dev *cdev = handle; + + switch (new_state) { + case CXGB4_STATE_UP: + chtls_register_dev(cdev); + break; + case CXGB4_STATE_DOWN: + break; + case CXGB4_STATE_START_RECOVERY: + break; + case CXGB4_STATE_DETACH: + chtls_unregister_dev(cdev); + chtls_free_uld(cdev); + break; + } + return 0; +} + +static struct sk_buff *copy_gl_to_skb_pkt(const struct pkt_gl *gl, + const __be64 *rsp, + u32 pktshift) +{ + struct sk_buff *skb; + + /* Allocate space for cpl_pass_accpet_req which will be synthesized by + * driver. Once driver synthesizes cpl_pass_accpet_req the skb will go + * through the regular cpl_pass_accept_req processing in TOM. + */ + skb = alloc_skb(gl->tot_len + sizeof(struct cpl_pass_accept_req) + - pktshift, GFP_ATOMIC); + if (unlikely(!skb)) + return NULL; + __skb_put(skb, gl->tot_len + sizeof(struct cpl_pass_accept_req) + - pktshift); + /* For now we will copy cpl_rx_pkt in the skb */ + skb_copy_to_linear_data(skb, rsp, sizeof(struct cpl_rx_pkt)); + skb_copy_to_linear_data_offset(skb, sizeof(struct cpl_pass_accept_req) + , gl->va + pktshift, + gl->tot_len - pktshift); + + return skb; +} + +static int chtls_recv_packet(struct chtls_dev *cdev, + const struct pkt_gl *gl, const __be64 *rsp) +{ + unsigned int opcode = *(u8 *)rsp; + struct sk_buff *skb; + int ret; + + skb = copy_gl_to_skb_pkt(gl, rsp, cdev->lldi->sge_pktshift); + if (!skb) + return -ENOMEM; + + ret = chtls_handlers[opcode](cdev, skb); + if (ret & CPL_RET_BUF_DONE) + kfree_skb(skb); + + return 0; +} + +static int chtls_recv_rsp(struct chtls_dev *cdev, const __be64 *rsp) +{ + unsigned int opcode = *(u8 *)rsp; + unsigned int len = 64 - sizeof(struct rsp_ctrl) - 8; + unsigned long rspq_bin; + struct sk_buff *skb; + int ret; + + rspq_bin = hash_ptr((void *)rsp, RSPQ_HASH_BITS); + skb = cdev->rspq_skb_cache[rspq_bin]; + if (skb && !skb_is_nonlinear(skb) && + !skb_shared(skb) && !skb_cloned(skb)) { + refcount_inc(&skb->users); + if (refcount_read(&skb->users) == 2) { + __skb_trim(skb, 0); + if (skb_tailroom(skb) >= len) + goto copy_out; + } + refcount_dec(&skb->users); + } + skb = alloc_skb(len, GFP_ATOMIC); + if (unlikely(!skb)) + return 1; + +copy_out: + __skb_put(skb, len); + skb_copy_to_linear_data(skb, rsp, len); + skb_reset_network_header(skb); + skb_reset_transport_header(skb); + ret = chtls_handlers[opcode](cdev, skb); + + if (ret & CPL_RET_BUF_DONE) + kfree_skb(skb); + return 0; +} + +static int chtls_recv(struct chtls_dev *cdev, + struct sk_buff **skbs, const __be64 *rsp) +{ + unsigned int opcode = *(u8 *)rsp; + struct sk_buff *skb = *skbs; + int ret; + + __skb_push(skb, sizeof(struct rss_header)); + skb_copy_to_linear_data(skb, rsp, sizeof(struct rss_header)); + + ret = chtls_handlers[opcode](cdev, skb); + if (ret & CPL_RET_BUF_DONE) + kfree_skb(skb); + + return 0; +} + +static int chtls_uld_rx_handler(void *handle, const __be64 *rsp, + const struct pkt_gl *gl) +{ + struct chtls_dev *cdev = handle; + unsigned int opcode = *(u8 *)rsp; + struct sk_buff *skb; + + if (unlikely(opcode == CPL_RX_PKT)) { + if (chtls_recv_packet(cdev, gl, rsp) < 0) + goto nomem; + return 0; + } + + if (!gl) + return chtls_recv_rsp(cdev, rsp); + +#define RX_PULL_LEN 128 + skb = cxgb4_pktgl_to_skb(gl, RX_PULL_LEN, RX_PULL_LEN); + if (unlikely(!skb)) + goto nomem; + chtls_recv(cdev, &skb, rsp); + return 0; + +nomem: + return 1; +} + +static int do_chtls_getsockopt(struct sock *sk, char __user *optval, + int __user *optlen) +{ + struct tls_crypto_info crypto_info; + + crypto_info.version = TLS_1_2_VERSION; + if (copy_to_user(optval, 
&crypto_info, sizeof(struct tls_crypto_info))) + return -EFAULT; + return 0; +} + +static int chtls_getsockopt(struct sock *sk, int level, int optname, + char __user *optval, int __user *optlen) +{ + struct tls_context *ctx = tls_get_ctx(sk); + + if (level != SOL_TLS) + return ctx->getsockopt(sk, level, optname, optval, optlen); + + return do_chtls_getsockopt(sk, optval, optlen); +} + +static int do_chtls_setsockopt(struct sock *sk, int optname, + char __user *optval, unsigned int optlen) +{ + struct chtls_sock *csk = rcu_dereference_sk_user_data(sk); + struct tls_crypto_info *crypto_info, tmp_crypto_info; + int keylen; + int rc = 0; + + if (!optval || optlen < sizeof(*crypto_info)) { + rc = -EINVAL; + goto out; + } + + rc = copy_from_user(&tmp_crypto_info, optval, sizeof(*crypto_info)); + if (rc) { + rc = -EFAULT; + goto out; + } + + /* check version */ + if (tmp_crypto_info.version != TLS_1_2_VERSION) { + rc = -ENOTSUPP; + goto out; + } + + crypto_info = (struct tls_crypto_info *)&csk->tlshws.crypto_info; + + switch (tmp_crypto_info.cipher_type) { + case TLS_CIPHER_AES_GCM_128: { + rc = copy_from_user(crypto_info, optval, + sizeof(struct + tls12_crypto_info_aes_gcm_128)); + + if (rc) { + rc = -EFAULT; + goto out; + } + + keylen = TLS_CIPHER_AES_GCM_128_KEY_SIZE; + chtls_setkey(csk, keylen, optname); + break; + } + default: + rc = -EINVAL; + goto out; + } +out: + return rc; +} + +static int chtls_setsockopt(struct sock *sk, int level, int optname, + char __user *optval, unsigned int optlen) +{ + struct tls_context *ctx = tls_get_ctx(sk); + + if (level != SOL_TLS) + return ctx->setsockopt(sk, level, optname, optval, optlen); + + return do_chtls_setsockopt(sk, optname, optval, optlen); +} + +static struct cxgb4_uld_info chtls_uld_info = { + .name = DRV_NAME, + .nrxq = MAX_ULD_QSETS, + .ntxq = MAX_ULD_QSETS, + .rxq_size = 1024, + .add = chtls_uld_add, + .state_change = chtls_uld_state_change, + .rx_handler = chtls_uld_rx_handler, +}; + +void chtls_install_cpl_ops(struct sock *sk) +{ + sk->sk_prot = &chtls_cpl_prot; +} + +static void __init chtls_init_ulp_ops(void) +{ + chtls_base_prot = tls_prots[TLS_FULL_HW]; + chtls_base_prot.hash = chtls_hash; + chtls_base_prot.unhash = chtls_unhash; + chtls_cpl_prot = chtls_base_prot; + chtls_init_rsk_ops(&chtls_cpl_prot, &chtls_rsk_ops, + &tcp_prot, PF_INET); + chtls_cpl_prot.close = chtls_close; + chtls_cpl_prot.disconnect = chtls_disconnect; + chtls_cpl_prot.destroy = chtls_destroy_sock; + chtls_cpl_prot.shutdown = chtls_shutdown; + chtls_cpl_prot.sendmsg = chtls_sendmsg; + chtls_cpl_prot.recvmsg = chtls_recvmsg; + chtls_cpl_prot.sendpage = chtls_sendpage; + chtls_cpl_prot.setsockopt = chtls_setsockopt; + chtls_cpl_prot.getsockopt = chtls_getsockopt; +} + +static int __init chtls_register(void) +{ + chtls_init_ulp_ops(); + register_listen_notifier(&listen_notifier); + cxgb4_register_uld(CXGB4_ULD_TLS, &chtls_uld_info); + return 0; +} + +static void __exit chtls_unregister(void) +{ + unregister_listen_notifier(&listen_notifier); + chtls_free_all_uld(); + cxgb4_unregister_uld(CXGB4_ULD_TLS); +} + +module_init(chtls_register); +module_exit(chtls_unregister); + +MODULE_DESCRIPTION("Chelsio TLS Inline driver"); +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Chelsio Communications"); +MODULE_VERSION(DRV_VERSION); diff --git a/include/uapi/linux/tls.h b/include/uapi/linux/tls.h index 293b2cd..6048b27 100644 --- a/include/uapi/linux/tls.h +++ b/include/uapi/linux/tls.h @@ -38,6 +38,7 @@ /* TLS socket options */ #define TLS_TX 1 /* Set transmit parameters */ +#define 
TLS_RX 2 /* Set receive parameters */ /* Supported versions */ #define TLS_VERSION_MINOR(ver) ((ver) & 0xFF) From patchwork Thu Feb 22 17:52:08 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atul Gupta X-Patchwork-Id: 876791 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 3znMQp0Xslz9ryG for ; Fri, 23 Feb 2018 04:53:22 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S933579AbeBVRxS (ORCPT ); Thu, 22 Feb 2018 12:53:18 -0500 Received: from stargate.chelsio.com ([12.32.117.8]:17161 "EHLO stargate.chelsio.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S933675AbeBVRwy (ORCPT ); Thu, 22 Feb 2018 12:52:54 -0500 Received: from chumthang.asicdesigners.com (indra-ipmi.blr.asicdesigners.com [10.193.186.96] (may be forged)) by stargate.chelsio.com (8.13.8/8.13.8) with ESMTP id w1MHqA37014680; Thu, 22 Feb 2018 09:52:40 -0800 From: Atul Gupta To: davejwatson@fb.com, davem@davemloft.net, herbert@gondor.apana.org.au Cc: sd@queasysnail.net, linux-crypto@vger.kernel.org, netdev@vger.kernel.org, ganeshgr@chelsio.com Subject: [Crypto v7 12/12] Makefile Kconfig Date: Thu, 22 Feb 2018 23:22:08 +0530 Message-Id: <1519321928-22995-9-git-send-email-atul.gupta@chelsio.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1519321928-22995-1-git-send-email-atul.gupta@chelsio.com> References: <1519321928-22995-1-git-send-email-atul.gupta@chelsio.com> Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Entry for Inline TLS as another driver dependent on cxgb4 and chcr Signed-off-by: Atul Gupta --- drivers/crypto/chelsio/Kconfig | 11 +++++++++++ drivers/crypto/chelsio/Makefile | 1 + drivers/crypto/chelsio/chtls/Makefile | 4 ++++ 3 files changed, 16 insertions(+) create mode 100644 drivers/crypto/chelsio/chtls/Makefile diff --git a/drivers/crypto/chelsio/Kconfig b/drivers/crypto/chelsio/Kconfig index 5ae9f87..930d82d 100644 --- a/drivers/crypto/chelsio/Kconfig +++ b/drivers/crypto/chelsio/Kconfig @@ -29,3 +29,14 @@ config CHELSIO_IPSEC_INLINE default n ---help--- Enable support for IPSec Tx Inline. + +config CRYPTO_DEV_CHELSIO_TLS + tristate "Chelsio Crypto Inline TLS Driver" + depends on CHELSIO_T4 + depends on TLS + select CRYPTO_DEV_CHELSIO + ---help--- + Support Chelsio Inline TLS with Chelsio crypto accelerator. + + To compile this driver as a module, choose M here: the module + will be called chtls. 
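Note: together with the dependencies above, actually building and loading the new driver needs the core TLS ULP and the cxgb4 pieces as well. A plausible configuration fragment and load order are sketched here; the option names are those referenced in this patch, while the explicit modprobe order is only an assumption based on chtls being a ULD of cxgb4.

    CONFIG_TLS=y
    CONFIG_CHELSIO_T4=m
    CONFIG_CRYPTO_DEV_CHELSIO=m
    CONFIG_CRYPTO_DEV_CHELSIO_TLS=m

    # modprobe cxgb4
    # modprobe chtls
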
diff --git a/drivers/crypto/chelsio/Makefile b/drivers/crypto/chelsio/Makefile
index eaecaf1..639e571 100644
--- a/drivers/crypto/chelsio/Makefile
+++ b/drivers/crypto/chelsio/Makefile
@@ -3,3 +3,4 @@ ccflags-y := -Idrivers/net/ethernet/chelsio/cxgb4
 obj-$(CONFIG_CRYPTO_DEV_CHELSIO) += chcr.o
 chcr-objs := chcr_core.o chcr_algo.o
 chcr-$(CONFIG_CHELSIO_IPSEC_INLINE) += chcr_ipsec.o
+obj-$(CONFIG_CRYPTO_DEV_CHELSIO_TLS) += chtls/
diff --git a/drivers/crypto/chelsio/chtls/Makefile b/drivers/crypto/chelsio/chtls/Makefile
new file mode 100644
index 0000000..df13795
--- /dev/null
+++ b/drivers/crypto/chelsio/chtls/Makefile
@@ -0,0 +1,4 @@
+ccflags-y := -Idrivers/net/ethernet/chelsio/cxgb4 -Idrivers/crypto/chelsio/
+
+obj-$(CONFIG_CRYPTO_DEV_CHELSIO_TLS) += chtls.o
+chtls-objs := chtls_main.o chtls_cm.o chtls_io.o chtls_hw.o
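
Note: the series documents the key-programming path only from the driver side (chtls_setsockopt()/chtls_getsockopt() in patch 11 accept TLS 1.2 with AES-GCM-128, and patch 11 also adds the TLS_RX socket option to the uapi header). For reviewers, a userspace sketch of the intended call sequence is included below. It follows the generic kTLS convention of attaching the "tls" ULP and then passing struct tls12_crypto_info_aes_gcm_128 via SOL_TLS; the fallback defines, the zeroed key material, and the exact flow are assumptions for illustration, not something this series specifies.

    /* Userspace sketch, not part of the series: program AES-GCM-128 keys
     * for both directions on a connected TCP socket.
     */
    #include <linux/tls.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>
    #include <string.h>

    #ifndef TCP_ULP
    #define TCP_ULP 31      /* assumption: may be missing from older libc headers */
    #endif
    #ifndef SOL_TLS
    #define SOL_TLS 282     /* assumption: may be missing from older libc headers */
    #endif

    static int program_tls_keys(int fd)
    {
            struct tls12_crypto_info_aes_gcm_128 ci;

            /* Attach the kernel TLS ULP so that SOL_TLS options reach
             * net/tls (and, through it, the driver's setsockopt handler).
             */
            if (setsockopt(fd, IPPROTO_TCP, TCP_ULP, "tls", sizeof("tls")))
                    return -1;

            memset(&ci, 0, sizeof(ci));
            ci.info.version = TLS_1_2_VERSION;
            ci.info.cipher_type = TLS_CIPHER_AES_GCM_128;
            /* ci.key, ci.iv, ci.salt and ci.rec_seq come from the TLS
             * handshake; they are left zeroed here as placeholders.
             */

            if (setsockopt(fd, SOL_TLS, TLS_TX, &ci, sizeof(ci)))
                    return -1;
            /* TLS_RX is the receive-side option added in this series. */
            return setsockopt(fd, SOL_TLS, TLS_RX, &ci, sizeof(ci));
    }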