From patchwork Mon Oct 19 23:50:27 2020 X-Patchwork-Submitter: "Kubalewski, Arkadiusz" X-Patchwork-Id: 1384548 X-Patchwork-Delegate: anthony.l.nguyen@intel.com From: Arkadiusz Kubalewski To: intel-wired-lan@lists.osuosl.org Date: Mon, 19 Oct 2020 23:50:27 +0000 Message-Id: <20201019235029.176050-2-arkadiusz.kubalewski@intel.com> X-Mailer: git-send-email 2.26.0 In-Reply-To: <20201019235029.176050-1-arkadiusz.kubalewski@intel.com> References: <20201019235029.176050-1-arkadiusz.kubalewski@intel.com> Subject: [Intel-wired-lan] [PATCH net-next 1/3] i40e: Add hardware
configuration for software based DCB X-BeenThere: intel-wired-lan@osuosl.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel Wired Ethernet Linux Kernel Driver Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Aleksandr Loktionov Errors-To: intel-wired-lan-bounces@osuosl.org Sender: "Intel-wired-lan" Add registers and definitions required for applying DCB related hardware configuration. Add functions responsible for calculating and setting proper hardware configuration values for software based DCB functionality. Add function responsible for invoking Admin Queue command, which results in applying new DCB configuration to the hardware. Update copyright dates as appropriate. Software based DCB is a brand-new feature in i40e driver. Before, DCB was implemented by Firmware LLDP agent only. The agent was responsible for handling incoming DCB-related LLDP frames and applying received DCB configuration to hardware. New communication channel between software and hardware is required for software driver. It must be able to calculate and configure all the registers related for DCB feature. Signed-off-by: Aleksandr Loktionov Signed-off-by: Arkadiusz Kubalewski --- .../net/ethernet/intel/i40e/i40e_adminq_cmd.h | 11 +- drivers/net/ethernet/intel/i40e/i40e_common.c | 65 +- drivers/net/ethernet/intel/i40e/i40e_dcb.c | 949 +++++++++++++++++- drivers/net/ethernet/intel/i40e/i40e_dcb.h | 169 +++- .../net/ethernet/intel/i40e/i40e_prototype.h | 9 +- .../net/ethernet/intel/i40e/i40e_register.h | 172 +++- drivers/net/ethernet/intel/i40e/i40e_type.h | 3 +- 7 files changed, 1365 insertions(+), 13 deletions(-) diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h index 1e960c3c7ef0..6d94172b1189 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h +++ b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright(c) 2013 - 2018 Intel Corporation. */ +/* Copyright(c) 2013 - 2020 Intel Corporation. */ #ifndef _I40E_ADMINQ_CMD_H_ #define _I40E_ADMINQ_CMD_H_ @@ -1080,6 +1080,7 @@ struct i40e_aqc_add_remove_control_packet_filter { #define I40E_AQC_ADD_CONTROL_PACKET_FLAGS_IGNORE_MAC 0x0001 #define I40E_AQC_ADD_CONTROL_PACKET_FLAGS_DROP 0x0002 #define I40E_AQC_ADD_CONTROL_PACKET_FLAGS_TX 0x0008 +#define I40E_AQC_ADD_CONTROL_PACKET_FLAGS_RX 0x0000 __le16 seid; __le16 queue; u8 reserved[2]; @@ -2184,6 +2185,14 @@ I40E_CHECK_STRUCT_LEN(0x20, i40e_aqc_get_cee_dcb_cfg_resp); * Used to replace the local MIB of a given LLDP agent. e.g. DCBx */ struct i40e_aqc_lldp_set_local_mib { +#define SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT 0 +#define SET_LOCAL_MIB_AC_TYPE_DCBX_MASK (1 << \ + SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT) +#define SET_LOCAL_MIB_AC_TYPE_LOCAL_MIB 0x0 +#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_SHIFT (1) +#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_MASK (1 << \ + SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_SHIFT) +#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS 0x1 u8 type; u8 reserved0; __le16 length; diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c index adc9e4fa4789..0342ea4131e8 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_common.c +++ b/drivers/net/ethernet/intel/i40e/i40e_common.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright(c) 2013 - 2018 Intel Corporation. */ +/* Copyright(c) 2013 - 2020 Intel Corporation. 
*/ #include "i40e.h" #include "i40e_type.h" @@ -3661,6 +3661,46 @@ i40e_status i40e_aq_get_lldp_mib(struct i40e_hw *hw, u8 bridge_type, return status; } +/** + * i40e_aq_set_lldp_mib - Set the LLDP MIB + * @hw: pointer to the hw struct + * @mib_type: Local, Remote or both Local and Remote MIBs + * @buff: pointer to a user supplied buffer to store the MIB block + * @buff_size: size of the buffer (in bytes) + * @cmd_details: pointer to command details structure or NULL + * + * Set the LLDP MIB. + **/ +enum i40e_status_code +i40e_aq_set_lldp_mib(struct i40e_hw *hw, + u8 mib_type, void *buff, u16 buff_size, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aqc_lldp_set_local_mib *cmd; + enum i40e_status_code status; + struct i40e_aq_desc desc; + + cmd = (struct i40e_aqc_lldp_set_local_mib *)&desc.params.raw; + if (buff_size == 0 || !buff) + return I40E_ERR_PARAM; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_lldp_set_local_mib); + /* Indirect Command */ + desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD)); + if (buff_size > I40E_AQ_LARGE_BUF) + desc.flags |= cpu_to_le16((u16)I40E_AQ_FLAG_LB); + desc.datalen = cpu_to_le16(buff_size); + + cmd->type = mib_type; + cmd->length = cpu_to_le16(buff_size); + cmd->address_high = cpu_to_le32(upper_32_bits((uintptr_t)buff)); + cmd->address_low = cpu_to_le32(lower_32_bits((uintptr_t)buff)); + + status = i40e_asq_send_command(hw, &desc, buff, buff_size, cmd_details); + return status; +} + /** * i40e_aq_cfg_lldp_mib_change_event * @hw: pointer to the hw struct @@ -4479,6 +4519,29 @@ static i40e_status i40e_aq_alternate_read(struct i40e_hw *hw, return status; } +/** + * i40e_aq_suspend_port_tx + * @hw: pointer to the hardware structure + * @seid: port seid + * @cmd_details: pointer to command details structure or NULL + * + * Suspend port's Tx traffic + **/ +i40e_status i40e_aq_suspend_port_tx(struct i40e_hw *hw, u16 seid, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aqc_tx_sched_ind *cmd; + struct i40e_aq_desc desc; + i40e_status status; + + cmd = (struct i40e_aqc_tx_sched_ind *)&desc.params.raw; + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_suspend_port_tx); + cmd->vsi_seid = cpu_to_le16(seid); + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + return status; +} + /** * i40e_aq_resume_port_tx * @hw: pointer to the hardware structure diff --git a/drivers/net/ethernet/intel/i40e/i40e_dcb.c b/drivers/net/ethernet/intel/i40e/i40e_dcb.c index 9de503c5f99b..0d482593aa1c 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_dcb.c +++ b/drivers/net/ethernet/intel/i40e/i40e_dcb.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright(c) 2013 - 2018 Intel Corporation. */ +/* Copyright(c) 2013 - 2020 Intel Corporation. */ #include "i40e_adminq.h" #include "i40e_prototype.h" @@ -932,6 +932,953 @@ i40e_status i40e_init_dcb(struct i40e_hw *hw, bool enable_mib_change) return ret; } +/** + * i40e_get_fw_lldp_status + * @hw: pointer to the hw struct + * @lldp_status: pointer to the status enum + * + * Get status of FW Link Layer Discovery Protocol (LLDP) Agent. + * Status of agent is reported via @lldp_status parameter. 
+ **/ +enum i40e_status_code +i40e_get_fw_lldp_status(struct i40e_hw *hw, + enum i40e_get_fw_lldp_status_resp *lldp_status) +{ + struct i40e_virt_mem mem; + i40e_status ret; + u8 *lldpmib; + + if (!lldp_status) + return I40E_ERR_PARAM; + + /* Allocate buffer for the LLDPDU */ + ret = i40e_allocate_virt_mem(hw, &mem, I40E_LLDPDU_SIZE); + if (ret) + return ret; + + lldpmib = (u8 *)mem.va; + ret = i40e_aq_get_lldp_mib(hw, 0, 0, (void *)lldpmib, + I40E_LLDPDU_SIZE, NULL, NULL, NULL); + + if (!ret) { + *lldp_status = I40E_GET_FW_LLDP_STATUS_ENABLED; + } else if (hw->aq.asq_last_status == I40E_AQ_RC_ENOENT) { + /* MIB is not available yet but the agent is running */ + *lldp_status = I40E_GET_FW_LLDP_STATUS_ENABLED; + ret = 0; + } else if (hw->aq.asq_last_status == I40E_AQ_RC_EPERM) { + *lldp_status = I40E_GET_FW_LLDP_STATUS_DISABLED; + ret = 0; + } + + i40e_free_virt_mem(hw, &mem); + return ret; +} + +/** + * i40e_add_ieee_ets_tlv - Prepare ETS TLV in IEEE format + * @tlv: Fill the ETS config data in IEEE format + * @dcbcfg: Local store which holds the DCB Config + * + * Prepare IEEE 802.1Qaz ETS CFG TLV + **/ +static void i40e_add_ieee_ets_tlv(struct i40e_lldp_org_tlv *tlv, + struct i40e_dcbx_config *dcbcfg) +{ + u8 priority0, priority1, maxtcwilling = 0; + struct i40e_dcb_ets_config *etscfg; + u16 offset = 0, typelength, i; + u8 *buf = tlv->tlvinfo; + u32 ouisubtype; + + typelength = (u16)((I40E_TLV_TYPE_ORG << I40E_LLDP_TLV_TYPE_SHIFT) | + I40E_IEEE_ETS_TLV_LENGTH); + tlv->typelength = htons(typelength); + + ouisubtype = (u32)((I40E_IEEE_8021QAZ_OUI << I40E_LLDP_TLV_OUI_SHIFT) | + I40E_IEEE_SUBTYPE_ETS_CFG); + tlv->ouisubtype = htonl(ouisubtype); + + /* First Octet post subtype + * -------------------------- + * |will-|CBS | Re- | Max | + * |ing | |served| TCs | + * -------------------------- + * |1bit | 1bit|3 bits|3bits| + */ + etscfg = &dcbcfg->etscfg; + if (etscfg->willing) + maxtcwilling = BIT(I40E_IEEE_ETS_WILLING_SHIFT); + maxtcwilling |= etscfg->maxtcs & I40E_IEEE_ETS_MAXTC_MASK; + buf[offset] = maxtcwilling; + + /* Move offset to Priority Assignment Table */ + offset++; + + /* Priority Assignment Table (4 octets) + * Octets:| 1 | 2 | 3 | 4 | + * ----------------------------------------- + * |pri0|pri1|pri2|pri3|pri4|pri5|pri6|pri7| + * ----------------------------------------- + * Bits:|7 4|3 0|7 4|3 0|7 4|3 0|7 4|3 0| + * ----------------------------------------- + */ + for (i = 0; i < 4; i++) { + priority0 = etscfg->prioritytable[i * 2] & 0xF; + priority1 = etscfg->prioritytable[i * 2 + 1] & 0xF; + buf[offset] = (priority0 << I40E_IEEE_ETS_PRIO_1_SHIFT) | + priority1; + offset++; + } + + /* TC Bandwidth Table (8 octets) + * Octets:| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | + * --------------------------------- + * |tc0|tc1|tc2|tc3|tc4|tc5|tc6|tc7| + * --------------------------------- + */ + for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) + buf[offset++] = etscfg->tcbwtable[i]; + + /* TSA Assignment Table (8 octets) + * Octets:| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | + * --------------------------------- + * |tc0|tc1|tc2|tc3|tc4|tc5|tc6|tc7| + * --------------------------------- + */ + for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) + buf[offset++] = etscfg->tsatable[i]; +} + +/** + * i40e_add_ieee_etsrec_tlv - Prepare ETS Recommended TLV in IEEE format + * @tlv: Fill ETS Recommended TLV in IEEE format + * @dcbcfg: Local store which holds the DCB Config + * + * Prepare IEEE 802.1Qaz ETS REC TLV + **/ +static void i40e_add_ieee_etsrec_tlv(struct i40e_lldp_org_tlv *tlv, + struct i40e_dcbx_config *dcbcfg) 
+{ + struct i40e_dcb_ets_config *etsrec; + u16 offset = 0, typelength, i; + u8 priority0, priority1; + u8 *buf = tlv->tlvinfo; + u32 ouisubtype; + + typelength = (u16)((I40E_TLV_TYPE_ORG << I40E_LLDP_TLV_TYPE_SHIFT) | + I40E_IEEE_ETS_TLV_LENGTH); + tlv->typelength = htons(typelength); + + ouisubtype = (u32)((I40E_IEEE_8021QAZ_OUI << I40E_LLDP_TLV_OUI_SHIFT) | + I40E_IEEE_SUBTYPE_ETS_REC); + tlv->ouisubtype = htonl(ouisubtype); + + etsrec = &dcbcfg->etsrec; + /* First Octet is reserved */ + /* Move offset to Priority Assignment Table */ + offset++; + + /* Priority Assignment Table (4 octets) + * Octets:| 1 | 2 | 3 | 4 | + * ----------------------------------------- + * |pri0|pri1|pri2|pri3|pri4|pri5|pri6|pri7| + * ----------------------------------------- + * Bits:|7 4|3 0|7 4|3 0|7 4|3 0|7 4|3 0| + * ----------------------------------------- + */ + for (i = 0; i < 4; i++) { + priority0 = etsrec->prioritytable[i * 2] & 0xF; + priority1 = etsrec->prioritytable[i * 2 + 1] & 0xF; + buf[offset] = (priority0 << I40E_IEEE_ETS_PRIO_1_SHIFT) | + priority1; + offset++; + } + + /* TC Bandwidth Table (8 octets) + * Octets:| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | + * --------------------------------- + * |tc0|tc1|tc2|tc3|tc4|tc5|tc6|tc7| + * --------------------------------- + */ + for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) + buf[offset++] = etsrec->tcbwtable[i]; + + /* TSA Assignment Table (8 octets) + * Octets:| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | + * --------------------------------- + * |tc0|tc1|tc2|tc3|tc4|tc5|tc6|tc7| + * --------------------------------- + */ + for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) + buf[offset++] = etsrec->tsatable[i]; +} + +/** + * i40e_add_ieee_pfc_tlv - Prepare PFC TLV in IEEE format + * @tlv: Fill PFC TLV in IEEE format + * @dcbcfg: Local store to get PFC CFG data + * + * Prepare IEEE 802.1Qaz PFC CFG TLV + **/ +static void i40e_add_ieee_pfc_tlv(struct i40e_lldp_org_tlv *tlv, + struct i40e_dcbx_config *dcbcfg) +{ + u8 *buf = tlv->tlvinfo; + u32 ouisubtype; + u16 typelength; + + typelength = (u16)((I40E_TLV_TYPE_ORG << I40E_LLDP_TLV_TYPE_SHIFT) | + I40E_IEEE_PFC_TLV_LENGTH); + tlv->typelength = htons(typelength); + + ouisubtype = (u32)((I40E_IEEE_8021QAZ_OUI << I40E_LLDP_TLV_OUI_SHIFT) | + I40E_IEEE_SUBTYPE_PFC_CFG); + tlv->ouisubtype = htonl(ouisubtype); + + /* ---------------------------------------- + * |will-|MBC | Re- | PFC | PFC Enable | + * |ing | |served| cap | | + * ----------------------------------------- + * |1bit | 1bit|2 bits|4bits| 1 octet | + */ + if (dcbcfg->pfc.willing) + buf[0] = BIT(I40E_IEEE_PFC_WILLING_SHIFT); + + if (dcbcfg->pfc.mbc) + buf[0] |= BIT(I40E_IEEE_PFC_MBC_SHIFT); + + buf[0] |= dcbcfg->pfc.pfccap & 0xF; + buf[1] = dcbcfg->pfc.pfcenable; +} + +/** + * i40e_add_ieee_app_pri_tlv - Prepare APP TLV in IEEE format + * @tlv: Fill APP TLV in IEEE format + * @dcbcfg: Local store to get APP CFG data + * + * Prepare IEEE 802.1Qaz APP CFG TLV + **/ +static void i40e_add_ieee_app_pri_tlv(struct i40e_lldp_org_tlv *tlv, + struct i40e_dcbx_config *dcbcfg) +{ + u16 typelength, length, offset = 0; + u8 priority, selector, i = 0; + u8 *buf = tlv->tlvinfo; + u32 ouisubtype; + + /* No APP TLVs then just return */ + if (dcbcfg->numapps == 0) + return; + ouisubtype = (u32)((I40E_IEEE_8021QAZ_OUI << I40E_LLDP_TLV_OUI_SHIFT) | + I40E_IEEE_SUBTYPE_APP_PRI); + tlv->ouisubtype = htonl(ouisubtype); + + /* Move offset to App Priority Table */ + offset++; + /* Application Priority Table (3 octets) + * Octets:| 1 | 2 | 3 | + * ----------------------------------------- + * 
|Priority|Rsrvd| Sel | Protocol ID | + * ----------------------------------------- + * Bits:|23 21|20 19|18 16|15 0| + * ----------------------------------------- + */ + while (i < dcbcfg->numapps) { + priority = dcbcfg->app[i].priority & 0x7; + selector = dcbcfg->app[i].selector & 0x7; + buf[offset] = (priority << I40E_IEEE_APP_PRIO_SHIFT) | selector; + buf[offset + 1] = (dcbcfg->app[i].protocolid >> 0x8) & 0xFF; + buf[offset + 2] = dcbcfg->app[i].protocolid & 0xFF; + /* Move to next app */ + offset += 3; + i++; + if (i >= I40E_DCBX_MAX_APPS) + break; + } + /* length includes size of ouisubtype + 1 reserved + 3*numapps */ + length = sizeof(tlv->ouisubtype) + 1 + (i * 3); + typelength = (u16)((I40E_TLV_TYPE_ORG << I40E_LLDP_TLV_TYPE_SHIFT) | + (length & 0x1FF)); + tlv->typelength = htons(typelength); +} + +/** + * i40e_add_dcb_tlv - Add all IEEE TLVs + * @tlv: pointer to org tlv + * @dcbcfg: pointer to modified dcbx config structure * + * @tlvid: tlv id to be added + * add tlv information + **/ +static void i40e_add_dcb_tlv(struct i40e_lldp_org_tlv *tlv, + struct i40e_dcbx_config *dcbcfg, + u16 tlvid) +{ + switch (tlvid) { + case I40E_IEEE_TLV_ID_ETS_CFG: + i40e_add_ieee_ets_tlv(tlv, dcbcfg); + break; + case I40E_IEEE_TLV_ID_ETS_REC: + i40e_add_ieee_etsrec_tlv(tlv, dcbcfg); + break; + case I40E_IEEE_TLV_ID_PFC_CFG: + i40e_add_ieee_pfc_tlv(tlv, dcbcfg); + break; + case I40E_IEEE_TLV_ID_APP_PRI: + i40e_add_ieee_app_pri_tlv(tlv, dcbcfg); + break; + default: + break; + } +} + +/** + * i40e_set_dcb_config - Set the local LLDP MIB to FW + * @hw: pointer to the hw struct + * + * Set DCB configuration to the Firmware + **/ +i40e_status i40e_set_dcb_config(struct i40e_hw *hw) +{ + struct i40e_dcbx_config *dcbcfg; + struct i40e_virt_mem mem; + u8 mib_type, *lldpmib; + i40e_status ret; + u16 miblen; + + /* update the hw local config */ + dcbcfg = &hw->local_dcbx_config; + /* Allocate the LLDPDU */ + ret = i40e_allocate_virt_mem(hw, &mem, I40E_LLDPDU_SIZE); + if (ret) + return ret; + + mib_type = SET_LOCAL_MIB_AC_TYPE_LOCAL_MIB; + if (dcbcfg->app_mode == I40E_DCBX_APPS_NON_WILLING) { + mib_type |= SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS << + SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_SHIFT; + } + lldpmib = (u8 *)mem.va; + i40e_dcb_config_to_lldp(lldpmib, &miblen, dcbcfg); + ret = i40e_aq_set_lldp_mib(hw, mib_type, (void *)lldpmib, miblen, NULL); + + i40e_free_virt_mem(hw, &mem); + return ret; +} + +/** + * i40e_dcb_config_to_lldp - Convert Dcbconfig to MIB format + * @lldpmib: pointer to mib to be output + * @miblen: pointer to u16 for length of lldpmib + * @dcbcfg: store for LLDPDU data + * + * send DCB configuration to FW + **/ +i40e_status i40e_dcb_config_to_lldp(u8 *lldpmib, u16 *miblen, + struct i40e_dcbx_config *dcbcfg) +{ + u16 length, offset = 0, tlvid, typelength; + struct i40e_lldp_org_tlv *tlv; + + tlv = (struct i40e_lldp_org_tlv *)lldpmib; + tlvid = I40E_TLV_ID_START; + do { + i40e_add_dcb_tlv(tlv, dcbcfg, tlvid++); + typelength = ntohs(tlv->typelength); + length = (u16)((typelength & I40E_LLDP_TLV_LEN_MASK) >> + I40E_LLDP_TLV_LEN_SHIFT); + if (length) + offset += length + I40E_IEEE_TLV_HEADER_LENGTH; + /* END TLV or beyond LLDPDU size */ + if (tlvid >= I40E_TLV_ID_END_OF_LLDPPDU || + offset >= I40E_LLDPDU_SIZE) + break; + /* Move to next TLV */ + if (length) + tlv = (struct i40e_lldp_org_tlv *)((char *)tlv + + sizeof(tlv->typelength) + length); + } while (tlvid < I40E_TLV_ID_END_OF_LLDPPDU); + *miblen = offset; + return I40E_SUCCESS; +} + +/** + * i40e_dcb_hw_rx_fifo_config + * @hw: pointer to 
the hw struct + * @ets_mode: Strict Priority or Round Robin mode + * @non_ets_mode: Strict Priority or Round Robin + * @max_exponent: Exponent to calculate max refill credits + * @lltc_map: Low latency TC bitmap + * + * Configure HW Rx FIFO as part of DCB configuration. + **/ +void i40e_dcb_hw_rx_fifo_config(struct i40e_hw *hw, + enum i40e_dcb_arbiter_mode ets_mode, + enum i40e_dcb_arbiter_mode non_ets_mode, + u32 max_exponent, + u8 lltc_map) +{ + u32 reg = rd32(hw, I40E_PRTDCB_RETSC); + + reg &= ~I40E_PRTDCB_RETSC_ETS_MODE_MASK; + reg |= ((u32)ets_mode << I40E_PRTDCB_RETSC_ETS_MODE_SHIFT) & + I40E_PRTDCB_RETSC_ETS_MODE_MASK; + + reg &= ~I40E_PRTDCB_RETSC_NON_ETS_MODE_MASK; + reg |= ((u32)non_ets_mode << I40E_PRTDCB_RETSC_NON_ETS_MODE_SHIFT) & + I40E_PRTDCB_RETSC_NON_ETS_MODE_MASK; + + reg &= ~I40E_PRTDCB_RETSC_ETS_MAX_EXP_MASK; + reg |= (max_exponent << I40E_PRTDCB_RETSC_ETS_MAX_EXP_SHIFT) & + I40E_PRTDCB_RETSC_ETS_MAX_EXP_MASK; + + reg &= ~I40E_PRTDCB_RETSC_LLTC_MASK; + reg |= (lltc_map << I40E_PRTDCB_RETSC_LLTC_SHIFT) & + I40E_PRTDCB_RETSC_LLTC_MASK; + wr32(hw, I40E_PRTDCB_RETSC, reg); +} + +/** + * i40e_dcb_hw_rx_cmd_monitor_config + * @hw: pointer to the hw struct + * @num_tc: Total number of traffic class + * @num_ports: Total number of ports on device + * + * Configure HW Rx command monitor as part of DCB configuration. + **/ +void i40e_dcb_hw_rx_cmd_monitor_config(struct i40e_hw *hw, + u8 num_tc, u8 num_ports) +{ + u32 threshold; + u32 fifo_size; + u32 reg; + + /* Set the threshold and fifo_size based on number of ports */ + switch (num_ports) { + case 1: + threshold = I40E_DCB_1_PORT_THRESHOLD; + fifo_size = I40E_DCB_1_PORT_FIFO_SIZE; + break; + case 2: + if (num_tc > 4) { + threshold = I40E_DCB_2_PORT_THRESHOLD_HIGH_NUM_TC; + fifo_size = I40E_DCB_2_PORT_FIFO_SIZE_HIGH_NUM_TC; + } else { + threshold = I40E_DCB_2_PORT_THRESHOLD_LOW_NUM_TC; + fifo_size = I40E_DCB_2_PORT_FIFO_SIZE_LOW_NUM_TC; + } + break; + case 4: + if (num_tc > 4) { + threshold = I40E_DCB_4_PORT_THRESHOLD_HIGH_NUM_TC; + fifo_size = I40E_DCB_4_PORT_FIFO_SIZE_HIGH_NUM_TC; + } else { + threshold = I40E_DCB_4_PORT_THRESHOLD_LOW_NUM_TC; + fifo_size = I40E_DCB_4_PORT_FIFO_SIZE_LOW_NUM_TC; + } + break; + default: + i40e_debug(hw, I40E_DEBUG_DCB, "Invalid num_ports %u.\n", + (u32)num_ports); + return; + } + + /* The hardware manual describes setting up of I40E_PRT_SWR_PM_THR + * based on the number of ports and traffic classes for a given port as + * part of DCB configuration. + */ + reg = rd32(hw, I40E_PRT_SWR_PM_THR); + reg &= ~I40E_PRT_SWR_PM_THR_THRESHOLD_MASK; + reg |= (threshold << I40E_PRT_SWR_PM_THR_THRESHOLD_SHIFT) & + I40E_PRT_SWR_PM_THR_THRESHOLD_MASK; + wr32(hw, I40E_PRT_SWR_PM_THR, reg); + + reg = rd32(hw, I40E_PRTDCB_RPPMC); + reg &= ~I40E_PRTDCB_RPPMC_RX_FIFO_SIZE_MASK; + reg |= (fifo_size << I40E_PRTDCB_RPPMC_RX_FIFO_SIZE_SHIFT) & + I40E_PRTDCB_RPPMC_RX_FIFO_SIZE_MASK; + wr32(hw, I40E_PRTDCB_RPPMC, reg); +} + +/** + * i40e_dcb_hw_pfc_config + * @hw: pointer to the hw struct + * @pfc_en: Bitmap of PFC enabled priorities + * @prio_tc: priority to tc assignment indexed by priority + * + * Configure HW Priority Flow Controller as part of DCB configuration. 
+ **/ +void i40e_dcb_hw_pfc_config(struct i40e_hw *hw, + u8 pfc_en, u8 *prio_tc) +{ + u16 refresh_time = (u16)I40E_DEFAULT_PAUSE_TIME / 2; + u32 link_speed = hw->phy.link_info.link_speed; + u8 first_pfc_prio = 0; + u8 num_pfc_tc = 0; + u8 tc2pfc = 0; + u32 reg; + u8 i; + + /* Get Number of PFC TCs and TC2PFC map */ + for (i = 0; i < I40E_MAX_USER_PRIORITY; i++) { + if (pfc_en & BIT(i)) { + if (!first_pfc_prio) + first_pfc_prio = i; + /* Set bit for the PFC TC */ + tc2pfc |= BIT(prio_tc[i]); + num_pfc_tc++; + } + } + + switch (link_speed) { + case I40E_LINK_SPEED_10GB: + reg = rd32(hw, I40E_PRTDCB_MFLCN); + reg |= BIT(I40E_PRTDCB_MFLCN_DPF_SHIFT) & + I40E_PRTDCB_MFLCN_DPF_MASK; + reg &= ~I40E_PRTDCB_MFLCN_RFCE_MASK; + reg &= ~I40E_PRTDCB_MFLCN_RPFCE_MASK; + if (pfc_en) { + reg |= BIT(I40E_PRTDCB_MFLCN_RPFCM_SHIFT) & + I40E_PRTDCB_MFLCN_RPFCM_MASK; + reg |= ((u32)pfc_en << I40E_PRTDCB_MFLCN_RPFCE_SHIFT) & + I40E_PRTDCB_MFLCN_RPFCE_MASK; + } + wr32(hw, I40E_PRTDCB_MFLCN, reg); + + reg = rd32(hw, I40E_PRTDCB_FCCFG); + reg &= ~I40E_PRTDCB_FCCFG_TFCE_MASK; + if (pfc_en) + reg |= (I40E_DCB_PFC_ENABLED << + I40E_PRTDCB_FCCFG_TFCE_SHIFT) & + I40E_PRTDCB_FCCFG_TFCE_MASK; + wr32(hw, I40E_PRTDCB_FCCFG, reg); + + /* FCTTV and FCRTV to be set by default */ + break; + case I40E_LINK_SPEED_40GB: + reg = rd32(hw, I40E_PRTMAC_HSEC_CTL_RX_ENABLE_GPP); + reg &= ~I40E_PRTMAC_HSEC_CTL_RX_ENABLE_GPP_MASK; + wr32(hw, I40E_PRTMAC_HSEC_CTL_RX_ENABLE_GPP, reg); + + reg = rd32(hw, I40E_PRTMAC_HSEC_CTL_RX_ENABLE_PPP); + reg &= ~I40E_PRTMAC_HSEC_CTL_RX_ENABLE_GPP_MASK; + reg |= BIT(I40E_PRTMAC_HSEC_CTL_RX_ENABLE_PPP_SHIFT) & + I40E_PRTMAC_HSEC_CTL_RX_ENABLE_PPP_MASK; + wr32(hw, I40E_PRTMAC_HSEC_CTL_RX_ENABLE_PPP, reg); + + reg = rd32(hw, I40E_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE); + reg &= ~I40E_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_MASK; + reg |= ((u32)pfc_en << + I40E_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_SHIFT) & + I40E_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_MASK; + wr32(hw, I40E_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE, reg); + + reg = rd32(hw, I40E_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE); + reg &= ~I40E_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_MASK; + reg |= ((u32)pfc_en << + I40E_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_SHIFT) & + I40E_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_MASK; + wr32(hw, I40E_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE, reg); + + for (i = 0; i < I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_MAX_INDEX; i++) { + reg = rd32(hw, I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER(i)); + reg &= ~I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_MASK; + if (pfc_en) { + reg |= ((u32)refresh_time << + I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_SHIFT) & + I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_MASK; + } + wr32(hw, I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER(i), reg); + } + /* PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA default value is 0xFFFF + * for all user priorities + */ + break; + } + + reg = rd32(hw, I40E_PRTDCB_TC2PFC); + reg &= ~I40E_PRTDCB_TC2PFC_TC2PFC_MASK; + reg |= ((u32)tc2pfc << I40E_PRTDCB_TC2PFC_TC2PFC_SHIFT) & + I40E_PRTDCB_TC2PFC_TC2PFC_MASK; + wr32(hw, I40E_PRTDCB_TC2PFC, reg); + + reg = rd32(hw, I40E_PRTDCB_RUP); + reg &= ~I40E_PRTDCB_RUP_NOVLANUP_MASK; + reg |= ((u32)first_pfc_prio << I40E_PRTDCB_RUP_NOVLANUP_SHIFT) & + I40E_PRTDCB_RUP_NOVLANUP_MASK; + wr32(hw, I40E_PRTDCB_RUP, reg); + + reg = rd32(hw, I40E_PRTDCB_TDPMC); + reg &= ~I40E_PRTDCB_TDPMC_TCPM_MODE_MASK; + if (num_pfc_tc > I40E_DCB_PFC_FORCED_NUM_TC) { + reg |= BIT(I40E_PRTDCB_TDPMC_TCPM_MODE_SHIFT) & + I40E_PRTDCB_TDPMC_TCPM_MODE_MASK; + } + wr32(hw, I40E_PRTDCB_TDPMC, reg); + + reg = rd32(hw, I40E_PRTDCB_TCPMC); + 
reg &= ~I40E_PRTDCB_TCPMC_TCPM_MODE_MASK; + if (num_pfc_tc > I40E_DCB_PFC_FORCED_NUM_TC) { + reg |= BIT(I40E_PRTDCB_TCPMC_TCPM_MODE_SHIFT) & + I40E_PRTDCB_TCPMC_TCPM_MODE_MASK; + } + wr32(hw, I40E_PRTDCB_TCPMC, reg); +} + +/** + * i40e_dcb_hw_set_num_tc + * @hw: pointer to the hw struct + * @num_tc: number of traffic classes + * + * Configure number of traffic classes in HW + **/ +void i40e_dcb_hw_set_num_tc(struct i40e_hw *hw, u8 num_tc) +{ + u32 reg = rd32(hw, I40E_PRTDCB_GENC); + + reg &= ~I40E_PRTDCB_GENC_NUMTC_MASK; + reg |= ((u32)num_tc << I40E_PRTDCB_GENC_NUMTC_SHIFT) & + I40E_PRTDCB_GENC_NUMTC_MASK; + wr32(hw, I40E_PRTDCB_GENC, reg); +} + +/** + * i40e_dcb_hw_get_num_tc + * @hw: pointer to the hw struct + * + * Returns number of traffic classes configured in HW + **/ +u8 i40e_dcb_hw_get_num_tc(struct i40e_hw *hw) +{ + u32 reg = rd32(hw, I40E_PRTDCB_GENC); + + return (u8)((reg & I40E_PRTDCB_GENC_NUMTC_MASK) >> + I40E_PRTDCB_GENC_NUMTC_SHIFT); +} + +/** + * i40e_dcb_hw_rx_ets_bw_config + * @hw: pointer to the hw struct + * @bw_share: Bandwidth share indexed per traffic class + * @mode: Strict Priority or Round Robin mode between UP sharing same + * traffic class + * @prio_type: TC is ETS enabled or strict priority + * + * Configure HW Rx ETS bandwidth as part of DCB configuration. + **/ +void i40e_dcb_hw_rx_ets_bw_config(struct i40e_hw *hw, u8 *bw_share, + u8 *mode, u8 *prio_type) +{ + u32 reg; + u8 i; + + for (i = 0; i <= I40E_PRTDCB_RETSTCC_MAX_INDEX; i++) { + reg = rd32(hw, I40E_PRTDCB_RETSTCC(i)); + reg &= ~(I40E_PRTDCB_RETSTCC_BWSHARE_MASK | + I40E_PRTDCB_RETSTCC_UPINTC_MODE_MASK | + I40E_PRTDCB_RETSTCC_ETSTC_SHIFT); + reg |= ((u32)bw_share[i] << I40E_PRTDCB_RETSTCC_BWSHARE_SHIFT) & + I40E_PRTDCB_RETSTCC_BWSHARE_MASK; + reg |= ((u32)mode[i] << I40E_PRTDCB_RETSTCC_UPINTC_MODE_SHIFT) & + I40E_PRTDCB_RETSTCC_UPINTC_MODE_MASK; + reg |= ((u32)prio_type[i] << I40E_PRTDCB_RETSTCC_ETSTC_SHIFT) & + I40E_PRTDCB_RETSTCC_ETSTC_MASK; + wr32(hw, I40E_PRTDCB_RETSTCC(i), reg); + } +} + +/** + * i40e_dcb_hw_rx_ets_bw_config + * @hw: pointer to the hw struct + * @prio_tc: priority to tc assignment indexed by priority + * + * Configure HW Rx UP2TC map as part of DCB configuration. + **/ +void i40e_dcb_hw_rx_up2tc_config(struct i40e_hw *hw, u8 *prio_tc) +{ + u32 reg = rd32(hw, I40E_PRTDCB_RUP2TC); +#define I40E_UP2TC_REG(val, i) \ + (((val) << I40E_PRTDCB_RUP2TC_UP##i##TC_SHIFT) & \ + I40E_PRTDCB_RUP2TC_UP##i##TC_MASK) + + reg |= I40E_UP2TC_REG(prio_tc[0], 0); + reg |= I40E_UP2TC_REG(prio_tc[1], 1); + reg |= I40E_UP2TC_REG(prio_tc[2], 2); + reg |= I40E_UP2TC_REG(prio_tc[3], 3); + reg |= I40E_UP2TC_REG(prio_tc[4], 4); + reg |= I40E_UP2TC_REG(prio_tc[5], 5); + reg |= I40E_UP2TC_REG(prio_tc[6], 6); + reg |= I40E_UP2TC_REG(prio_tc[7], 7); + + wr32(hw, I40E_PRTDCB_RUP2TC, reg); +} + +/** + * i40e_dcb_hw_calculate_pool_sizes - configure dcb pool sizes + * @hw: pointer to the hw struct + * @num_ports: Number of available ports on the device + * @eee_enabled: EEE enabled for the given port + * @pfc_en: Bit map of PFC enabled traffic classes + * @mfs_tc: Array of max frame size for each traffic class + * @pb_cfg: pointer to packet buffer configuration + * + * Calculate the shared and dedicated per TC pool sizes, + * watermarks and threshold values. 
+ **/ +void i40e_dcb_hw_calculate_pool_sizes(struct i40e_hw *hw, + u8 num_ports, bool eee_enabled, + u8 pfc_en, u32 *mfs_tc, + struct i40e_rx_pb_config *pb_cfg) +{ + u32 pool_size[I40E_MAX_TRAFFIC_CLASS]; + u32 high_wm[I40E_MAX_TRAFFIC_CLASS]; + u32 low_wm[I40E_MAX_TRAFFIC_CLASS]; + u32 total_pool_size = 0; + int shared_pool_size; /* Need signed variable */ + u32 port_pb_size; + u32 mfs_max; + u32 pcirtt; + u8 i; + + /* Get the MFS(max) for the port */ + for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) { + if (mfs_tc[i] > mfs_max) + mfs_max = mfs_tc[i]; + } + + pcirtt = I40E_BT2B(I40E_PCIRTT_LINK_SPEED_10G); + + /* Calculate effective Rx PB size per port */ + port_pb_size = I40E_DEVICE_RPB_SIZE / num_ports; + if (eee_enabled) + port_pb_size -= I40E_BT2B(I40E_EEE_TX_LPI_EXIT_TIME); + port_pb_size -= mfs_max; + + /* Step 1 Calculating tc pool/shared pool sizes and watermarks */ + for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) { + if (pfc_en & BIT(i)) { + low_wm[i] = (I40E_DCB_WATERMARK_START_FACTOR * + mfs_tc[i]) + pcirtt; + high_wm[i] = low_wm[i]; + high_wm[i] += ((mfs_max > I40E_MAX_FRAME_SIZE) + ? mfs_max : I40E_MAX_FRAME_SIZE); + pool_size[i] = high_wm[i]; + pool_size[i] += I40E_BT2B(I40E_STD_DV_TC(mfs_max, + mfs_tc[i])); + } else { + low_wm[i] = 0; + pool_size[i] = (I40E_DCB_WATERMARK_START_FACTOR * + mfs_tc[i]) + pcirtt; + high_wm[i] = pool_size[i]; + } + total_pool_size += pool_size[i]; + } + + shared_pool_size = port_pb_size - total_pool_size; + if (shared_pool_size > 0) { + pb_cfg->shared_pool_size = shared_pool_size; + pb_cfg->shared_pool_high_wm = shared_pool_size; + pb_cfg->shared_pool_low_wm = 0; + for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) { + pb_cfg->shared_pool_low_thresh[i] = 0; + pb_cfg->shared_pool_high_thresh[i] = shared_pool_size; + pb_cfg->tc_pool_size[i] = pool_size[i]; + pb_cfg->tc_pool_high_wm[i] = high_wm[i]; + pb_cfg->tc_pool_low_wm[i] = low_wm[i]; + } + + } else { + i40e_debug(hw, I40E_DEBUG_DCB, + "The shared pool size for the port is negative %d.\n", + shared_pool_size); + } +} + +/** + * i40e_dcb_hw_rx_pb_config + * @hw: pointer to the hw struct + * @old_pb_cfg: Existing Rx Packet buffer configuration + * @new_pb_cfg: New Rx Packet buffer configuration + * + * Program the Rx Packet Buffer registers. + **/ +void i40e_dcb_hw_rx_pb_config(struct i40e_hw *hw, + struct i40e_rx_pb_config *old_pb_cfg, + struct i40e_rx_pb_config *new_pb_cfg) +{ + u32 old_val; + u32 new_val; + u32 reg; + u8 i; + + /* The Rx Packet buffer register programming needs to be done in a + * certain order and the following code is based on that + * requirement. + */ + + /* Program the shared pool low water mark per port if decreasing */ + old_val = old_pb_cfg->shared_pool_low_wm; + new_val = new_pb_cfg->shared_pool_low_wm; + if (new_val < old_val) { + reg = rd32(hw, I40E_PRTRPB_SLW); + reg &= ~I40E_PRTRPB_SLW_SLW_MASK; + reg |= (new_val << I40E_PRTRPB_SLW_SLW_SHIFT) & + I40E_PRTRPB_SLW_SLW_MASK; + wr32(hw, I40E_PRTRPB_SLW, reg); + } + + /* Program the shared pool low threshold and tc pool + * low water mark per TC that are decreasing. 
+ */ + for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) { + old_val = old_pb_cfg->shared_pool_low_thresh[i]; + new_val = new_pb_cfg->shared_pool_low_thresh[i]; + if (new_val < old_val) { + reg = rd32(hw, I40E_PRTRPB_SLT(i)); + reg &= ~I40E_PRTRPB_SLT_SLT_TCN_MASK; + reg |= (new_val << I40E_PRTRPB_SLT_SLT_TCN_SHIFT) & + I40E_PRTRPB_SLT_SLT_TCN_MASK; + wr32(hw, I40E_PRTRPB_SLT(i), reg); + } + + old_val = old_pb_cfg->tc_pool_low_wm[i]; + new_val = new_pb_cfg->tc_pool_low_wm[i]; + if (new_val < old_val) { + reg = rd32(hw, I40E_PRTRPB_DLW(i)); + reg &= ~I40E_PRTRPB_DLW_DLW_TCN_MASK; + reg |= (new_val << I40E_PRTRPB_DLW_DLW_TCN_SHIFT) & + I40E_PRTRPB_DLW_DLW_TCN_MASK; + wr32(hw, I40E_PRTRPB_DLW(i), reg); + } + } + + /* Program the shared pool high water mark per port if decreasing */ + old_val = old_pb_cfg->shared_pool_high_wm; + new_val = new_pb_cfg->shared_pool_high_wm; + if (new_val < old_val) { + reg = rd32(hw, I40E_PRTRPB_SHW); + reg &= ~I40E_PRTRPB_SHW_SHW_MASK; + reg |= (new_val << I40E_PRTRPB_SHW_SHW_SHIFT) & + I40E_PRTRPB_SHW_SHW_MASK; + wr32(hw, I40E_PRTRPB_SHW, reg); + } + + /* Program the shared pool high threshold and tc pool + * high water mark per TC that are decreasing. + */ + for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) { + old_val = old_pb_cfg->shared_pool_high_thresh[i]; + new_val = new_pb_cfg->shared_pool_high_thresh[i]; + if (new_val < old_val) { + reg = rd32(hw, I40E_PRTRPB_SHT(i)); + reg &= ~I40E_PRTRPB_SHT_SHT_TCN_MASK; + reg |= (new_val << I40E_PRTRPB_SHT_SHT_TCN_SHIFT) & + I40E_PRTRPB_SHT_SHT_TCN_MASK; + wr32(hw, I40E_PRTRPB_SHT(i), reg); + } + + old_val = old_pb_cfg->tc_pool_high_wm[i]; + new_val = new_pb_cfg->tc_pool_high_wm[i]; + if (new_val < old_val) { + reg = rd32(hw, I40E_PRTRPB_DHW(i)); + reg &= ~I40E_PRTRPB_DHW_DHW_TCN_MASK; + reg |= (new_val << I40E_PRTRPB_DHW_DHW_TCN_SHIFT) & + I40E_PRTRPB_DHW_DHW_TCN_MASK; + wr32(hw, I40E_PRTRPB_DHW(i), reg); + } + } + + /* Write Dedicated Pool Sizes per TC */ + for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) { + new_val = new_pb_cfg->tc_pool_size[i]; + reg = rd32(hw, I40E_PRTRPB_DPS(i)); + reg &= ~I40E_PRTRPB_DPS_DPS_TCN_MASK; + reg |= (new_val << I40E_PRTRPB_DPS_DPS_TCN_SHIFT) & + I40E_PRTRPB_DPS_DPS_TCN_MASK; + wr32(hw, I40E_PRTRPB_DPS(i), reg); + } + + /* Write Shared Pool Size per port */ + new_val = new_pb_cfg->shared_pool_size; + reg = rd32(hw, I40E_PRTRPB_SPS); + reg &= ~I40E_PRTRPB_SPS_SPS_MASK; + reg |= (new_val << I40E_PRTRPB_SPS_SPS_SHIFT) & + I40E_PRTRPB_SPS_SPS_MASK; + wr32(hw, I40E_PRTRPB_SPS, reg); + + /* Program the shared pool low water mark per port if increasing */ + old_val = old_pb_cfg->shared_pool_low_wm; + new_val = new_pb_cfg->shared_pool_low_wm; + if (new_val > old_val) { + reg = rd32(hw, I40E_PRTRPB_SLW); + reg &= ~I40E_PRTRPB_SLW_SLW_MASK; + reg |= (new_val << I40E_PRTRPB_SLW_SLW_SHIFT) & + I40E_PRTRPB_SLW_SLW_MASK; + wr32(hw, I40E_PRTRPB_SLW, reg); + } + + /* Program the shared pool low threshold and tc pool + * low water mark per TC that are increasing. 
+ */ + for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) { + old_val = old_pb_cfg->shared_pool_low_thresh[i]; + new_val = new_pb_cfg->shared_pool_low_thresh[i]; + if (new_val > old_val) { + reg = rd32(hw, I40E_PRTRPB_SLT(i)); + reg &= ~I40E_PRTRPB_SLT_SLT_TCN_MASK; + reg |= (new_val << I40E_PRTRPB_SLT_SLT_TCN_SHIFT) & + I40E_PRTRPB_SLT_SLT_TCN_MASK; + wr32(hw, I40E_PRTRPB_SLT(i), reg); + } + + old_val = old_pb_cfg->tc_pool_low_wm[i]; + new_val = new_pb_cfg->tc_pool_low_wm[i]; + if (new_val > old_val) { + reg = rd32(hw, I40E_PRTRPB_DLW(i)); + reg &= ~I40E_PRTRPB_DLW_DLW_TCN_MASK; + reg |= (new_val << I40E_PRTRPB_DLW_DLW_TCN_SHIFT) & + I40E_PRTRPB_DLW_DLW_TCN_MASK; + wr32(hw, I40E_PRTRPB_DLW(i), reg); + } + } + + /* Program the shared pool high water mark per port if increasing */ + old_val = old_pb_cfg->shared_pool_high_wm; + new_val = new_pb_cfg->shared_pool_high_wm; + if (new_val > old_val) { + reg = rd32(hw, I40E_PRTRPB_SHW); + reg &= ~I40E_PRTRPB_SHW_SHW_MASK; + reg |= (new_val << I40E_PRTRPB_SHW_SHW_SHIFT) & + I40E_PRTRPB_SHW_SHW_MASK; + wr32(hw, I40E_PRTRPB_SHW, reg); + } + + /* Program the shared pool high threshold and tc pool + * high water mark per TC that are increasing. + */ + for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) { + old_val = old_pb_cfg->shared_pool_high_thresh[i]; + new_val = new_pb_cfg->shared_pool_high_thresh[i]; + if (new_val > old_val) { + reg = rd32(hw, I40E_PRTRPB_SHT(i)); + reg &= ~I40E_PRTRPB_SHT_SHT_TCN_MASK; + reg |= (new_val << I40E_PRTRPB_SHT_SHT_TCN_SHIFT) & + I40E_PRTRPB_SHT_SHT_TCN_MASK; + wr32(hw, I40E_PRTRPB_SHT(i), reg); + } + + old_val = old_pb_cfg->tc_pool_high_wm[i]; + new_val = new_pb_cfg->tc_pool_high_wm[i]; + if (new_val > old_val) { + reg = rd32(hw, I40E_PRTRPB_DHW(i)); + reg &= ~I40E_PRTRPB_DHW_DHW_TCN_MASK; + reg |= (new_val << I40E_PRTRPB_DHW_DHW_TCN_SHIFT) & + I40E_PRTRPB_DHW_DHW_TCN_MASK; + wr32(hw, I40E_PRTRPB_DHW(i), reg); + } + } +} + /** * _i40e_read_lldp_cfg - generic read of LLDP Configuration data from NVM * @hw: pointer to the HW structure diff --git a/drivers/net/ethernet/intel/i40e/i40e_dcb.h b/drivers/net/ethernet/intel/i40e/i40e_dcb.h index 2b1a2e81ac73..cabbc55d14a0 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_dcb.h +++ b/drivers/net/ethernet/intel/i40e/i40e_dcb.h @@ -1,13 +1,15 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright(c) 2013 - 2018 Intel Corporation. */ +/* Copyright(c) 2013 - 2020 Intel Corporation. 
*/ #ifndef _I40E_DCB_H_ #define _I40E_DCB_H_ #include "i40e_type.h" +#define I40E_DCBX_STATUS_NOT_STARTED 0 #define I40E_DCBX_STATUS_IN_PROGRESS 1 #define I40E_DCBX_STATUS_DONE 2 +#define I40E_DCBX_STATUS_MULTIPLE_PEERS 3 #define I40E_DCBX_STATUS_DISABLED 7 #define I40E_TLV_TYPE_END 0 @@ -22,6 +24,7 @@ #define I40E_CEE_DCBX_OUI 0x001b21 #define I40E_CEE_DCBX_TYPE 2 +#define I40E_CEE_SUBTYPE_CTRL 1 #define I40E_CEE_SUBTYPE_PG_CFG 2 #define I40E_CEE_SUBTYPE_PFC_CFG 3 #define I40E_CEE_SUBTYPE_APP_PRI 4 @@ -64,6 +67,8 @@ #define I40E_IEEE_TSA_ETS 2 /* Defines for IEEE PFC TLV */ +#define I40E_DCB_PFC_ENABLED 2 +#define I40E_DCB_PFC_FORCED_NUM_TC 2 #define I40E_IEEE_PFC_CAP_SHIFT 0 #define I40E_IEEE_PFC_CAP_MASK (0xF << I40E_IEEE_PFC_CAP_SHIFT) #define I40E_IEEE_PFC_MBC_SHIFT 6 @@ -77,9 +82,30 @@ #define I40E_IEEE_APP_PRIO_SHIFT 5 #define I40E_IEEE_APP_PRIO_MASK (0x7 << I40E_IEEE_APP_PRIO_SHIFT) +/* TLV definitions for preparing MIB */ +#define I40E_TLV_ID_CHASSIS_ID 0 +#define I40E_TLV_ID_PORT_ID 1 +#define I40E_TLV_ID_TIME_TO_LIVE 2 +#define I40E_IEEE_TLV_ID_ETS_CFG 3 +#define I40E_IEEE_TLV_ID_ETS_REC 4 +#define I40E_IEEE_TLV_ID_PFC_CFG 5 +#define I40E_IEEE_TLV_ID_APP_PRI 6 +#define I40E_TLV_ID_END_OF_LLDPPDU 7 +#define I40E_TLV_ID_START I40E_IEEE_TLV_ID_ETS_CFG -#pragma pack(1) +#define I40E_IEEE_TLV_HEADER_LENGTH 2 +#define I40E_IEEE_ETS_TLV_LENGTH 25 +#define I40E_IEEE_PFC_TLV_LENGTH 6 +#define I40E_IEEE_APP_TLV_LENGTH 11 + +/* Defines for default SW DCB config */ +#define I40E_IEEE_DEFAULT_ETS_TCBW 100 +#define I40E_IEEE_DEFAULT_ETS_WILLING 1 +#define I40E_IEEE_DEFAULT_PFC_WILLING 1 +#define I40E_IEEE_DEFAULT_NUM_APPS 1 +#define I40E_IEEE_DEFAULT_APP_PRIO 3 +#pragma pack(1) /* IEEE 802.1AB LLDP Organization specific TLV */ struct i40e_lldp_org_tlv { __be16 typelength; @@ -102,7 +128,9 @@ struct i40e_cee_ctrl_tlv { struct i40e_cee_feat_tlv { struct i40e_cee_tlv_hdr hdr; u8 en_will_err; /* Bits: |En|Will|Err|Reserved(5)| */ +#define I40E_CEE_FEAT_TLV_ENABLE_MASK 0x80 #define I40E_CEE_FEAT_TLV_WILLING_MASK 0x40 +#define I40E_CEE_FEAT_TLV_ERR_MASK 0x20 u8 subtype; u8 tlvinfo[1]; }; @@ -116,13 +144,140 @@ struct i40e_cee_app_prio { }; #pragma pack() +enum i40e_get_fw_lldp_status_resp { + I40E_GET_FW_LLDP_STATUS_DISABLED = 0, + I40E_GET_FW_LLDP_STATUS_ENABLED = 1 +}; + +/* Data structures to pass for SW DCBX */ +struct i40e_rx_pb_config { + u32 shared_pool_size; + u32 shared_pool_high_wm; + u32 shared_pool_low_wm; + u32 shared_pool_high_thresh[I40E_MAX_TRAFFIC_CLASS]; + u32 shared_pool_low_thresh[I40E_MAX_TRAFFIC_CLASS]; + u32 tc_pool_size[I40E_MAX_TRAFFIC_CLASS]; + u32 tc_pool_high_wm[I40E_MAX_TRAFFIC_CLASS]; + u32 tc_pool_low_wm[I40E_MAX_TRAFFIC_CLASS]; +}; + +enum i40e_dcb_arbiter_mode { + I40E_DCB_ARB_MODE_STRICT_PRIORITY = 0, + I40E_DCB_ARB_MODE_ROUND_ROBIN = 1 +}; + +#define I40E_DCB_DEFAULT_MAX_EXPONENT 0xB +#define I40E_DEFAULT_PAUSE_TIME 0xffff +#define I40E_MAX_FRAME_SIZE 4608 /* 4.5 KB */ + +#define I40E_DEVICE_RPB_SIZE 968000 /* 968 KB */ + +/* BitTimes (BT) conversion */ +#define I40E_BT2KB(BT) (((BT) + (8 * 1024 - 1)) / (8 * 1024)) +#define I40E_B2BT(BT) ((BT) * 8) +#define I40E_BT2B(BT) (((BT) + (8 - 1)) / 8) + +/* Max Frame(TC) = MFS(max) + MFS(TC) */ +#define I40E_MAX_FRAME_TC(mfs_max, mfs_tc) I40E_B2BT((mfs_max) + (mfs_tc)) + +/* EEE Tx LPI Exit time in Bit Times */ +#define I40E_EEE_TX_LPI_EXIT_TIME 142500 + +/* PCI Round Trip Time in Bit Times */ +#define I40E_PCIRTT_LINK_SPEED_10G 20000 +#define I40E_PCIRTT_BYTE_LINK_SPEED_20G 40000 +#define I40E_PCIRTT_BYTE_LINK_SPEED_40G 
80000 + +/* PFC Frame Delay Bit Times */ +#define I40E_PFC_FRAME_DELAY 672 + +/* Worst case Cable (10GBase-T) Delay Bit Times */ +#define I40E_CABLE_DELAY 5556 + +/* Higher Layer Delay @10G Bit Times */ +#define I40E_HIGHER_LAYER_DELAY_10G 6144 + +/* Interface Delays in Bit Times */ +/* TODO: Add for other link speeds 20G/40G/etc. */ +#define I40E_INTERFACE_DELAY_10G_MAC_CONTROL 8192 +#define I40E_INTERFACE_DELAY_10G_MAC 8192 +#define I40E_INTERFACE_DELAY_10G_RS 8192 + +#define I40E_INTERFACE_DELAY_XGXS 2048 +#define I40E_INTERFACE_DELAY_XAUI 2048 + +#define I40E_INTERFACE_DELAY_10G_BASEX_PCS 2048 +#define I40E_INTERFACE_DELAY_10G_BASER_PCS 3584 +#define I40E_INTERFACE_DELAY_LX4_PMD 512 +#define I40E_INTERFACE_DELAY_CX4_PMD 512 +#define I40E_INTERFACE_DELAY_SERIAL_PMA 512 +#define I40E_INTERFACE_DELAY_PMD 512 + +#define I40E_INTERFACE_DELAY_10G_BASET 25600 + +/* Hardware RX DCB config related defines */ +#define I40E_DCB_1_PORT_THRESHOLD 0xF +#define I40E_DCB_1_PORT_FIFO_SIZE 0x10 +#define I40E_DCB_2_PORT_THRESHOLD_LOW_NUM_TC 0xF +#define I40E_DCB_2_PORT_FIFO_SIZE_LOW_NUM_TC 0x10 +#define I40E_DCB_2_PORT_THRESHOLD_HIGH_NUM_TC 0xC +#define I40E_DCB_2_PORT_FIFO_SIZE_HIGH_NUM_TC 0x8 +#define I40E_DCB_4_PORT_THRESHOLD_LOW_NUM_TC 0x9 +#define I40E_DCB_4_PORT_FIFO_SIZE_LOW_NUM_TC 0x8 +#define I40E_DCB_4_PORT_THRESHOLD_HIGH_NUM_TC 0x6 +#define I40E_DCB_4_PORT_FIFO_SIZE_HIGH_NUM_TC 0x4 +#define I40E_DCB_WATERMARK_START_FACTOR 0x2 + +/* delay values for with 10G BaseT in Bit Times */ +#define I40E_INTERFACE_DELAY_10G_COPPER \ + (I40E_INTERFACE_DELAY_10G_MAC + (2 * I40E_INTERFACE_DELAY_XAUI) \ + + I40E_INTERFACE_DELAY_10G_BASET) +#define I40E_DV_TC(mfs_max, mfs_tc) \ + ((2 * I40E_MAX_FRAME_TC(mfs_max, mfs_tc)) \ + + I40E_PFC_FRAME_DELAY \ + + (2 * I40E_CABLE_DELAY) \ + + (2 * I40E_INTERFACE_DELAY_10G_COPPER) \ + + I40E_HIGHER_LAYER_DELAY_10G) +static inline u32 I40E_STD_DV_TC(u32 mfs_max, u32 mfs_tc) +{ + return I40E_DV_TC(mfs_max, mfs_tc) + I40E_B2BT(mfs_max); +} + +/* APIs for SW DCBX */ +void i40e_dcb_hw_rx_fifo_config(struct i40e_hw *hw, + enum i40e_dcb_arbiter_mode ets_mode, + enum i40e_dcb_arbiter_mode non_ets_mode, + u32 max_exponent, u8 lltc_map); +void i40e_dcb_hw_rx_cmd_monitor_config(struct i40e_hw *hw, + u8 num_tc, u8 num_ports); +void i40e_dcb_hw_pfc_config(struct i40e_hw *hw, + u8 pfc_en, u8 *prio_tc); +void i40e_dcb_hw_set_num_tc(struct i40e_hw *hw, u8 num_tc); +u8 i40e_dcb_hw_get_num_tc(struct i40e_hw *hw); +void i40e_dcb_hw_rx_ets_bw_config(struct i40e_hw *hw, u8 *bw_share, + u8 *mode, u8 *prio_type); +void i40e_dcb_hw_rx_up2tc_config(struct i40e_hw *hw, u8 *prio_tc); +void i40e_dcb_hw_calculate_pool_sizes(struct i40e_hw *hw, + u8 num_ports, bool eee_enabled, + u8 pfc_en, u32 *mfs_tc, + struct i40e_rx_pb_config *pb_cfg); +void i40e_dcb_hw_rx_pb_config(struct i40e_hw *hw, + struct i40e_rx_pb_config *old_pb_cfg, + struct i40e_rx_pb_config *new_pb_cfg); i40e_status i40e_get_dcbx_status(struct i40e_hw *hw, - u16 *status); + u16 *status); i40e_status i40e_lldp_to_dcb_config(u8 *lldpmib, - struct i40e_dcbx_config *dcbcfg); + struct i40e_dcbx_config *dcbcfg); i40e_status i40e_aq_get_dcb_config(struct i40e_hw *hw, u8 mib_type, - u8 bridgetype, - struct i40e_dcbx_config *dcbcfg); + u8 bridgetype, + struct i40e_dcbx_config *dcbcfg); i40e_status i40e_get_dcb_config(struct i40e_hw *hw); -i40e_status i40e_init_dcb(struct i40e_hw *hw, bool enable_mib_change); +i40e_status i40e_init_dcb(struct i40e_hw *hw, + bool enable_mib_change); +enum i40e_status_code +i40e_get_fw_lldp_status(struct i40e_hw 
*hw, + enum i40e_get_fw_lldp_status_resp *lldp_status); +i40e_status i40e_set_dcb_config(struct i40e_hw *hw); +i40e_status i40e_dcb_config_to_lldp(u8 *lldpmib, u16 *miblen, + struct i40e_dcbx_config *dcbcfg); #endif /* _I40E_DCB_H_ */ diff --git a/drivers/net/ethernet/intel/i40e/i40e_prototype.h b/drivers/net/ethernet/intel/i40e/i40e_prototype.h index 5c1378641b3b..ec8c4f7c9ff0 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_prototype.h +++ b/drivers/net/ethernet/intel/i40e/i40e_prototype.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright(c) 2013 - 2018 Intel Corporation. */ +/* Copyright(c) 2013 - 2020 Intel Corporation. */ #ifndef _I40E_PROTOTYPE_H_ #define _I40E_PROTOTYPE_H_ @@ -200,6 +200,10 @@ i40e_status i40e_aq_get_lldp_mib(struct i40e_hw *hw, u8 bridge_type, u8 mib_type, void *buff, u16 buff_size, u16 *local_len, u16 *remote_len, struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code +i40e_aq_set_lldp_mib(struct i40e_hw *hw, + u8 mib_type, void *buff, u16 buff_size, + struct i40e_asq_cmd_details *cmd_details); i40e_status i40e_aq_cfg_lldp_mib_change_event(struct i40e_hw *hw, bool enable_update, struct i40e_asq_cmd_details *cmd_details); @@ -289,6 +293,9 @@ i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid, u8 filter_count); i40e_status i40e_read_lldp_cfg(struct i40e_hw *hw, struct i40e_lldp_variables *lldp_cfg); +enum i40e_status_code +i40e_aq_suspend_port_tx(struct i40e_hw *hw, u16 seid, + struct i40e_asq_cmd_details *cmd_details); /* i40e_common */ i40e_status i40e_init_shared_code(struct i40e_hw *hw); i40e_status i40e_pf_reset(struct i40e_hw *hw); diff --git a/drivers/net/ethernet/intel/i40e/i40e_register.h b/drivers/net/ethernet/intel/i40e/i40e_register.h index 8d5c65b350fc..a68fad48c349 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_register.h +++ b/drivers/net/ethernet/intel/i40e/i40e_register.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright(c) 2013 - 2018 Intel Corporation. */ +/* Copyright(c) 2013 - 2020 Intel Corporation. 
*/ #ifndef _I40E_REGISTER_H_ #define _I40E_REGISTER_H_ @@ -34,12 +34,137 @@ #define I40E_PF_ATQLEN_ATQENABLE_SHIFT 31 #define I40E_PF_ATQLEN_ATQENABLE_MASK I40E_MASK(0x1u, I40E_PF_ATQLEN_ATQENABLE_SHIFT) #define I40E_PF_ATQT 0x00080400 /* Reset: EMPR */ +#define I40E_PRT_SWR_PM_THR 0x0026CD00 /* Reset: CORER */ +#define I40E_PRT_SWR_PM_THR_THRESHOLD_SHIFT 0 +#define I40E_PRT_SWR_PM_THR_THRESHOLD_MASK I40E_MASK(0xFF, I40E_PRT_SWR_PM_THR_THRESHOLD_SHIFT) +#define I40E_PRTDCB_FCCFG 0x001E4640 /* Reset: GLOBR */ +#define I40E_PRTDCB_FCCFG_TFCE_SHIFT 3 +#define I40E_PRTDCB_FCCFG_TFCE_MASK I40E_MASK(0x3, I40E_PRTDCB_FCCFG_TFCE_SHIFT) #define I40E_PRTDCB_GENC 0x00083000 /* Reset: CORER */ +#define I40E_PRTDCB_GENC_NUMTC_SHIFT 2 +#define I40E_PRTDCB_GENC_NUMTC_MASK I40E_MASK(0xF, I40E_PRTDCB_GENC_NUMTC_SHIFT) #define I40E_PRTDCB_GENC_PFCLDA_SHIFT 16 #define I40E_PRTDCB_GENC_PFCLDA_MASK I40E_MASK(0xFFFF, I40E_PRTDCB_GENC_PFCLDA_SHIFT) #define I40E_PRTDCB_GENS 0x00083020 /* Reset: CORER */ #define I40E_PRTDCB_GENS_DCBX_STATUS_SHIFT 0 #define I40E_PRTDCB_GENS_DCBX_STATUS_MASK I40E_MASK(0x7, I40E_PRTDCB_GENS_DCBX_STATUS_SHIFT) +#define I40E_PRTDCB_MFLCN 0x001E2400 /* Reset: GLOBR */ +#define I40E_PRTDCB_MFLCN_PMCF_SHIFT 0 +#define I40E_PRTDCB_MFLCN_PMCF_MASK I40E_MASK(0x1, I40E_PRTDCB_MFLCN_PMCF_SHIFT) +#define I40E_PRTDCB_MFLCN_DPF_SHIFT 1 +#define I40E_PRTDCB_MFLCN_DPF_MASK I40E_MASK(0x1, I40E_PRTDCB_MFLCN_DPF_SHIFT) +#define I40E_PRTDCB_MFLCN_RPFCM_SHIFT 2 +#define I40E_PRTDCB_MFLCN_RPFCM_MASK I40E_MASK(0x1, I40E_PRTDCB_MFLCN_RPFCM_SHIFT) +#define I40E_PRTDCB_MFLCN_RFCE_SHIFT 3 +#define I40E_PRTDCB_MFLCN_RFCE_MASK I40E_MASK(0x1, I40E_PRTDCB_MFLCN_RFCE_SHIFT) +#define I40E_PRTDCB_MFLCN_RPFCE_SHIFT 4 +#define I40E_PRTDCB_MFLCN_RPFCE_MASK I40E_MASK(0xFF, I40E_PRTDCB_MFLCN_RPFCE_SHIFT) +#define I40E_PRTDCB_RETSC 0x001223E0 /* Reset: CORER */ +#define I40E_PRTDCB_RETSC_ETS_MODE_SHIFT 0 +#define I40E_PRTDCB_RETSC_ETS_MODE_MASK I40E_MASK(0x1, I40E_PRTDCB_RETSC_ETS_MODE_SHIFT) +#define I40E_PRTDCB_RETSC_NON_ETS_MODE_SHIFT 1 +#define I40E_PRTDCB_RETSC_NON_ETS_MODE_MASK I40E_MASK(0x1, I40E_PRTDCB_RETSC_NON_ETS_MODE_SHIFT) +#define I40E_PRTDCB_RETSC_ETS_MAX_EXP_SHIFT 2 +#define I40E_PRTDCB_RETSC_ETS_MAX_EXP_MASK I40E_MASK(0xF, I40E_PRTDCB_RETSC_ETS_MAX_EXP_SHIFT) +#define I40E_PRTDCB_RETSC_LLTC_SHIFT 8 +#define I40E_PRTDCB_RETSC_LLTC_MASK I40E_MASK(0xFF, I40E_PRTDCB_RETSC_LLTC_SHIFT) +#define I40E_PRTDCB_RETSTCC(_i) (0x00122180 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_PRTDCB_RETSTCC_MAX_INDEX 7 +#define I40E_PRTDCB_RETSTCC_BWSHARE_SHIFT 0 +#define I40E_PRTDCB_RETSTCC_BWSHARE_MASK I40E_MASK(0x7F, I40E_PRTDCB_RETSTCC_BWSHARE_SHIFT) +#define I40E_PRTDCB_RETSTCC_UPINTC_MODE_SHIFT 30 +#define I40E_PRTDCB_RETSTCC_UPINTC_MODE_MASK I40E_MASK(0x1, I40E_PRTDCB_RETSTCC_UPINTC_MODE_SHIFT) +#define I40E_PRTDCB_RETSTCC_ETSTC_SHIFT 31 +#define I40E_PRTDCB_RETSTCC_ETSTC_MASK I40E_MASK(0x1u, I40E_PRTDCB_RETSTCC_ETSTC_SHIFT) +#define I40E_PRTDCB_RPPMC 0x001223A0 /* Reset: CORER */ +#define I40E_PRTDCB_RPPMC_LANRPPM_SHIFT 0 +#define I40E_PRTDCB_RPPMC_LANRPPM_MASK I40E_MASK(0xFF, I40E_PRTDCB_RPPMC_LANRPPM_SHIFT) +#define I40E_PRTDCB_RPPMC_RDMARPPM_SHIFT 8 +#define I40E_PRTDCB_RPPMC_RDMARPPM_MASK I40E_MASK(0xFF, I40E_PRTDCB_RPPMC_RDMARPPM_SHIFT) +#define I40E_PRTDCB_RPPMC_RX_FIFO_SIZE_SHIFT 16 +#define I40E_PRTDCB_RPPMC_RX_FIFO_SIZE_MASK I40E_MASK(0xFF, I40E_PRTDCB_RPPMC_RX_FIFO_SIZE_SHIFT) +#define I40E_PRTDCB_RUP 0x001C0B00 /* Reset: CORER */ +#define I40E_PRTDCB_RUP_NOVLANUP_SHIFT 0 +#define 
I40E_PRTDCB_RUP_NOVLANUP_MASK I40E_MASK(0x7, I40E_PRTDCB_RUP_NOVLANUP_SHIFT) +#define I40E_PRTDCB_RUP2TC 0x001C09A0 /* Reset: CORER */ +#define I40E_PRTDCB_RUP2TC_UP0TC_SHIFT 0 +#define I40E_PRTDCB_RUP2TC_UP0TC_MASK I40E_MASK(0x7, I40E_PRTDCB_RUP2TC_UP0TC_SHIFT) +#define I40E_PRTDCB_RUP2TC_UP1TC_SHIFT 3 +#define I40E_PRTDCB_RUP2TC_UP1TC_MASK I40E_MASK(0x7, I40E_PRTDCB_RUP2TC_UP1TC_SHIFT) +#define I40E_PRTDCB_RUP2TC_UP2TC_SHIFT 6 +#define I40E_PRTDCB_RUP2TC_UP2TC_MASK I40E_MASK(0x7, I40E_PRTDCB_RUP2TC_UP2TC_SHIFT) +#define I40E_PRTDCB_RUP2TC_UP3TC_SHIFT 9 +#define I40E_PRTDCB_RUP2TC_UP3TC_MASK I40E_MASK(0x7, I40E_PRTDCB_RUP2TC_UP3TC_SHIFT) +#define I40E_PRTDCB_RUP2TC_UP4TC_SHIFT 12 +#define I40E_PRTDCB_RUP2TC_UP4TC_MASK I40E_MASK(0x7, I40E_PRTDCB_RUP2TC_UP4TC_SHIFT) +#define I40E_PRTDCB_RUP2TC_UP5TC_SHIFT 15 +#define I40E_PRTDCB_RUP2TC_UP5TC_MASK I40E_MASK(0x7, I40E_PRTDCB_RUP2TC_UP5TC_SHIFT) +#define I40E_PRTDCB_RUP2TC_UP6TC_SHIFT 18 +#define I40E_PRTDCB_RUP2TC_UP6TC_MASK I40E_MASK(0x7, I40E_PRTDCB_RUP2TC_UP6TC_SHIFT) +#define I40E_PRTDCB_RUP2TC_UP7TC_SHIFT 21 +#define I40E_PRTDCB_RUP2TC_UP7TC_MASK I40E_MASK(0x7, I40E_PRTDCB_RUP2TC_UP7TC_SHIFT) +#define I40E_PRTDCB_RUPTQ(_i) (0x00122400 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_PRTDCB_RUPTQ_MAX_INDEX 7 +#define I40E_PRTDCB_RUPTQ_RXQNUM_SHIFT 0 +#define I40E_PRTDCB_RUPTQ_RXQNUM_MASK I40E_MASK(0x3FFF, I40E_PRTDCB_RUPTQ_RXQNUM_SHIFT) +#define I40E_PRTDCB_TC2PFC 0x001C0980 /* Reset: CORER */ +#define I40E_PRTDCB_TC2PFC_TC2PFC_SHIFT 0 +#define I40E_PRTDCB_TC2PFC_TC2PFC_MASK I40E_MASK(0xFF, I40E_PRTDCB_TC2PFC_TC2PFC_SHIFT) +#define I40E_PRTDCB_TCMSTC(_i) (0x000A0040 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_PRTDCB_TCMSTC_MAX_INDEX 7 +#define I40E_PRTDCB_TCMSTC_MSTC_SHIFT 0 +#define I40E_PRTDCB_TCMSTC_MSTC_MASK I40E_MASK(0xFFFFF, I40E_PRTDCB_TCMSTC_MSTC_SHIFT) +#define I40E_PRTDCB_TCPMC 0x000A21A0 /* Reset: CORER */ +#define I40E_PRTDCB_TCPMC_CPM_SHIFT 0 +#define I40E_PRTDCB_TCPMC_CPM_MASK I40E_MASK(0x1FFF, I40E_PRTDCB_TCPMC_CPM_SHIFT) +#define I40E_PRTDCB_TCPMC_LLTC_SHIFT 13 +#define I40E_PRTDCB_TCPMC_LLTC_MASK I40E_MASK(0xFF, I40E_PRTDCB_TCPMC_LLTC_SHIFT) +#define I40E_PRTDCB_TCPMC_TCPM_MODE_SHIFT 30 +#define I40E_PRTDCB_TCPMC_TCPM_MODE_MASK I40E_MASK(0x1, I40E_PRTDCB_TCPMC_TCPM_MODE_SHIFT) +#define I40E_PRTDCB_TCWSTC(_i) (0x000A2040 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_PRTDCB_TCWSTC_MAX_INDEX 7 +#define I40E_PRTDCB_TCWSTC_MSTC_SHIFT 0 +#define I40E_PRTDCB_TCWSTC_MSTC_MASK I40E_MASK(0xFFFFF, I40E_PRTDCB_TCWSTC_MSTC_SHIFT) +#define I40E_PRTDCB_TDPMC 0x000A0180 /* Reset: CORER */ +#define I40E_PRTDCB_TDPMC_DPM_SHIFT 0 +#define I40E_PRTDCB_TDPMC_DPM_MASK I40E_MASK(0xFF, I40E_PRTDCB_TDPMC_DPM_SHIFT) +#define I40E_PRTDCB_TDPMC_TCPM_MODE_SHIFT 30 +#define I40E_PRTDCB_TDPMC_TCPM_MODE_MASK I40E_MASK(0x1, I40E_PRTDCB_TDPMC_TCPM_MODE_SHIFT) +#define I40E_PRTDCB_TETSC_TCB 0x000AE060 /* Reset: CORER */ +#define I40E_PRTDCB_TETSC_TCB_EN_LL_STRICT_PRIORITY_SHIFT 0 +#define I40E_PRTDCB_TETSC_TCB_EN_LL_STRICT_PRIORITY_MASK I40E_MASK(0x1, \ + I40E_PRTDCB_TETSC_TCB_EN_LL_STRICT_PRIORITY_SHIFT) +#define I40E_PRTDCB_TETSC_TCB_LLTC_SHIFT 8 +#define I40E_PRTDCB_TETSC_TCB_LLTC_MASK I40E_MASK(0xFF, I40E_PRTDCB_TETSC_TCB_LLTC_SHIFT) +#define I40E_PRTDCB_TETSC_TPB 0x00098060 /* Reset: CORER */ +#define I40E_PRTDCB_TETSC_TPB_EN_LL_STRICT_PRIORITY_SHIFT 0 +#define I40E_PRTDCB_TETSC_TPB_EN_LL_STRICT_PRIORITY_MASK I40E_MASK(0x1, \ + I40E_PRTDCB_TETSC_TPB_EN_LL_STRICT_PRIORITY_SHIFT) +#define 
I40E_PRTDCB_TETSC_TPB_LLTC_SHIFT 8 +#define I40E_PRTDCB_TETSC_TPB_LLTC_MASK I40E_MASK(0xFF, I40E_PRTDCB_TETSC_TPB_LLTC_SHIFT) +#define I40E_PRTDCB_TFCS 0x001E4560 /* Reset: GLOBR */ +#define I40E_PRTDCB_TFCS_TXOFF_SHIFT 0 +#define I40E_PRTDCB_TFCS_TXOFF_MASK I40E_MASK(0x1, I40E_PRTDCB_TFCS_TXOFF_SHIFT) +#define I40E_PRTDCB_TFCS_TXOFF0_SHIFT 8 +#define I40E_PRTDCB_TFCS_TXOFF0_MASK I40E_MASK(0x1, I40E_PRTDCB_TFCS_TXOFF0_SHIFT) +#define I40E_PRTDCB_TFCS_TXOFF1_SHIFT 9 +#define I40E_PRTDCB_TFCS_TXOFF1_MASK I40E_MASK(0x1, I40E_PRTDCB_TFCS_TXOFF1_SHIFT) +#define I40E_PRTDCB_TFCS_TXOFF2_SHIFT 10 +#define I40E_PRTDCB_TFCS_TXOFF2_MASK I40E_MASK(0x1, I40E_PRTDCB_TFCS_TXOFF2_SHIFT) +#define I40E_PRTDCB_TFCS_TXOFF3_SHIFT 11 +#define I40E_PRTDCB_TFCS_TXOFF3_MASK I40E_MASK(0x1, I40E_PRTDCB_TFCS_TXOFF3_SHIFT) +#define I40E_PRTDCB_TFCS_TXOFF4_SHIFT 12 +#define I40E_PRTDCB_TFCS_TXOFF4_MASK I40E_MASK(0x1, I40E_PRTDCB_TFCS_TXOFF4_SHIFT) +#define I40E_PRTDCB_TFCS_TXOFF5_SHIFT 13 +#define I40E_PRTDCB_TFCS_TXOFF5_MASK I40E_MASK(0x1, I40E_PRTDCB_TFCS_TXOFF5_SHIFT) +#define I40E_PRTDCB_TFCS_TXOFF6_SHIFT 14 +#define I40E_PRTDCB_TFCS_TXOFF6_MASK I40E_MASK(0x1, I40E_PRTDCB_TFCS_TXOFF6_SHIFT) +#define I40E_PRTDCB_TFCS_TXOFF7_SHIFT 15 +#define I40E_PRTDCB_TFCS_TXOFF7_MASK I40E_MASK(0x1, I40E_PRTDCB_TFCS_TXOFF7_SHIFT) +#define I40E_PRTDCB_TPFCTS(_i) (0x001E4660 + ((_i) * 32)) /* _i=0...7 */ /* Reset: GLOBR */ +#define I40E_PRTDCB_TPFCTS_MAX_INDEX 7 +#define I40E_PRTDCB_TPFCTS_PFCTIMER_SHIFT 0 +#define I40E_PRTDCB_TPFCTS_PFCTIMER_MASK I40E_MASK(0x3FFF, I40E_PRTDCB_TPFCTS_PFCTIMER_SHIFT) #define I40E_GL_FWSTS 0x00083048 /* Reset: POR */ #define I40E_GL_FWSTS_FWS1B_SHIFT 16 #define I40E_GL_FWSTS_FWS1B_MASK I40E_MASK(0xFF, I40E_GL_FWSTS_FWS1B_SHIFT) @@ -366,6 +491,27 @@ #define I40E_PRTGL_SAL 0x001E2120 /* Reset: GLOBR */ #define I40E_PRTGL_SAL_FC_SAL_SHIFT 0 #define I40E_PRTGL_SAL_FC_SAL_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTGL_SAL_FC_SAL_SHIFT) +#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_GPP 0x001E3260 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_GPP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_GPP_MASK I40E_MASK(0x1, \ + I40E_PRTMAC_HSEC_CTL_RX_ENABLE_GPP_SHIFT) +#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_PPP 0x001E32E0 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_PPP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_PPP_MASK I40E_MASK(0x1, \ + I40E_PRTMAC_HSEC_CTL_RX_ENABLE_PPP_SHIFT) +#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE 0x001E30C0 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_MASK I40E_MASK(0x1FF, \ + I40E_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_SHIFT) +#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE 0x001E30D0 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_MASK I40E_MASK(0x1FF, \ + I40E_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_SHIFT) +#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER(_i) (0x001E3400 + ((_i) * 16)) /* _i=0...8 */ +#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_MAX_INDEX 8 +#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_MASK I40E_MASK(0xFFFF, \ + I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_SHIFT) #define I40E_GLNVM_FLA 0x000B6108 /* Reset: POR */ #define I40E_GLNVM_FLA_LOCKED_SHIFT 6 #define I40E_GLNVM_FLA_LOCKED_MASK I40E_MASK(0x1, I40E_GLNVM_FLA_LOCKED_SHIFT) @@ -409,6 +555,30 @@ #define I40E_PRTPM_EEER_TX_LPI_EN_MASK I40E_MASK(0x1, I40E_PRTPM_EEER_TX_LPI_EN_SHIFT) #define 
I40E_PRTPM_RLPIC 0x001E43A0 /* Reset: GLOBR */ #define I40E_PRTPM_TLPIC 0x001E43C0 /* Reset: GLOBR */ +#define I40E_PRTRPB_DHW(_i) (0x000AC100 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_PRTRPB_DHW_DHW_TCN_SHIFT 0 +#define I40E_PRTRPB_DHW_DHW_TCN_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_DHW_DHW_TCN_SHIFT) +#define I40E_PRTRPB_DLW(_i) (0x000AC220 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_PRTRPB_DLW_DLW_TCN_SHIFT 0 +#define I40E_PRTRPB_DLW_DLW_TCN_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_DLW_DLW_TCN_SHIFT) +#define I40E_PRTRPB_DPS(_i) (0x000AC320 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_PRTRPB_DPS_DPS_TCN_SHIFT 0 +#define I40E_PRTRPB_DPS_DPS_TCN_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_DPS_DPS_TCN_SHIFT) +#define I40E_PRTRPB_SHT(_i) (0x000AC480 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_PRTRPB_SHT_SHT_TCN_SHIFT 0 +#define I40E_PRTRPB_SHT_SHT_TCN_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_SHT_SHT_TCN_SHIFT) +#define I40E_PRTRPB_SHW 0x000AC580 /* Reset: CORER */ +#define I40E_PRTRPB_SHW_SHW_SHIFT 0 +#define I40E_PRTRPB_SHW_SHW_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_SHW_SHW_SHIFT) +#define I40E_PRTRPB_SLT(_i) (0x000AC5A0 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_PRTRPB_SLT_SLT_TCN_SHIFT 0 +#define I40E_PRTRPB_SLT_SLT_TCN_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_SLT_SLT_TCN_SHIFT) +#define I40E_PRTRPB_SLW 0x000AC6A0 /* Reset: CORER */ +#define I40E_PRTRPB_SLW_SLW_SHIFT 0 +#define I40E_PRTRPB_SLW_SLW_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_SLW_SLW_SHIFT) +#define I40E_PRTRPB_SPS 0x000AC7C0 /* Reset: CORER */ +#define I40E_PRTRPB_SPS_SPS_SHIFT 0 +#define I40E_PRTRPB_SPS_SPS_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_SPS_SPS_SHIFT) #define I40E_GLQF_FDCNT_0 0x00269BAC /* Reset: CORER */ #define I40E_GLQF_FDCNT_0_GUARANT_CNT_SHIFT 0 #define I40E_GLQF_FDCNT_0_GUARANT_CNT_MASK I40E_MASK(0x1FFF, I40E_GLQF_FDCNT_0_GUARANT_CNT_SHIFT) diff --git a/drivers/net/ethernet/intel/i40e/i40e_type.h b/drivers/net/ethernet/intel/i40e/i40e_type.h index c0bdc666f557..16332f04a5d3 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_type.h +++ b/drivers/net/ethernet/intel/i40e/i40e_type.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright(c) 2013 - 2018 Intel Corporation. */ +/* Copyright(c) 2013 - 2020 Intel Corporation. 
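For reference on the register additions earlier in this i40e_register.h hunk: the PRTDCB_RUP2TC fields carry one 3-bit traffic-class index per user priority, at shifts 0, 3, 6, ... 21. The short sketch below (not part of the patch; the helper name and the 3-bit stride read off the SHIFT values are illustrative assumptions) shows how such a priority table could be folded into the register value using those SHIFT/MASK pairs.

#include "i40e_register.h"   /* assumed location of the SHIFT/MASK macros */

/* Illustrative only: pack an 8-entry user-priority-to-TC table into RUP2TC */
static u32 i40e_example_pack_up2tc(const u8 prio_table[8])
{
        u32 reg = 0;
        int up;

        for (up = 0; up < 8; up++)
                /* each user priority owns a 3-bit TC field, 3 bits apart */
                reg |= (u32)(prio_table[up] & 0x7) <<
                       (I40E_PRTDCB_RUP2TC_UP0TC_SHIFT + up * 3);

        return reg;
}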
*/ #ifndef _I40E_TYPE_H_ #define _I40E_TYPE_H_ @@ -517,6 +517,7 @@ struct i40e_dcbx_config { #define I40E_DCBX_MODE_CEE 0x1 #define I40E_DCBX_MODE_IEEE 0x2 u8 app_mode; +#define I40E_DCBX_APPS_NON_WILLING 0x1 u32 numapps; u32 tlv_status; /* CEE mode TLV status */ struct i40e_dcb_ets_config etscfg; From patchwork Mon Oct 19 23:50:28 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kubalewski, Arkadiusz" X-Patchwork-Id: 1384545 X-Patchwork-Delegate: anthony.l.nguyen@intel.com Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=osuosl.org (client-ip=140.211.166.137; helo=fraxinus.osuosl.org; envelope-from=intel-wired-lan-bounces@osuosl.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=intel.com Received: from fraxinus.osuosl.org (smtp4.osuosl.org [140.211.166.137]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 4CFYYj4z6Xz9sSn for ; Tue, 20 Oct 2020 10:56:09 +1100 (AEDT) Received: from localhost (localhost [127.0.0.1]) by fraxinus.osuosl.org (Postfix) with ESMTP id DBB60869F4; Mon, 19 Oct 2020 23:56:07 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from fraxinus.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id gc4Onl-xDtKK; Mon, 19 Oct 2020 23:56:05 +0000 (UTC) Received: from ash.osuosl.org (ash.osuosl.org [140.211.166.34]) by fraxinus.osuosl.org (Postfix) with ESMTP id 87BFA869DE; Mon, 19 Oct 2020 23:56:05 +0000 (UTC) X-Original-To: intel-wired-lan@lists.osuosl.org Delivered-To: intel-wired-lan@lists.osuosl.org Received: from hemlock.osuosl.org (smtp2.osuosl.org [140.211.166.133]) by ash.osuosl.org (Postfix) with ESMTP id DFEDD1BF2FC for ; Mon, 19 Oct 2020 23:56:03 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by hemlock.osuosl.org (Postfix) with ESMTP id D1BA68732A for ; Mon, 19 Oct 2020 23:56:03 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from hemlock.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id st4tbFSQpAiw for ; Mon, 19 Oct 2020 23:56:01 +0000 (UTC) X-Greylist: domain auto-whitelisted by SQLgrey-1.7.6 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by hemlock.osuosl.org (Postfix) with ESMTPS id 7BCFC872CB for ; Mon, 19 Oct 2020 23:56:01 +0000 (UTC) IronPort-SDR: Y/q9lmWCv44fQdbPLjKlZEX1Uktsq/+T5A87VdzLn5Q+h1hxtvPiPWISkrf1oHtGjxQFU218pl AlGgeF+HyCfQ== X-IronPort-AV: E=McAfee;i="6000,8403,9779"; a="166351758" X-IronPort-AV: E=Sophos;i="5.77,395,1596524400"; d="scan'208";a="166351758" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga004.jf.intel.com ([10.7.209.38]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Oct 2020 16:56:00 -0700 IronPort-SDR: MGTkBS/HCtkzRh/yYW1T3LKPmIEYiLEs1P6fdiwIeY+ePcpXQhOsKarf8PTCR9WHMjBMpwO0RT NfrYvYXUNnwg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,395,1596524400"; d="scan'208";a="465712632" Received: from amlin-018-053.igk.intel.com ([10.102.18.53]) by orsmga004.jf.intel.com with ESMTP; 19 Oct 2020 16:55:59 -0700 From: Arkadiusz Kubalewski To: intel-wired-lan@lists.osuosl.org Date: Mon, 19 Oct 2020 23:50:28 +0000 Message-Id: 
<20201019235029.176050-3-arkadiusz.kubalewski@intel.com> X-Mailer: git-send-email 2.26.0 In-Reply-To: <20201019235029.176050-1-arkadiusz.kubalewski@intel.com> References: <20201019235029.176050-1-arkadiusz.kubalewski@intel.com> MIME-Version: 1.0 Subject: [Intel-wired-lan] [PATCH net-next 2/3] i40e: Add init and default config of software based DCB X-BeenThere: intel-wired-lan@osuosl.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel Wired Ethernet Linux Kernel Driver Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Aleksandr Loktionov Errors-To: intel-wired-lan-bounces@osuosl.org Sender: "Intel-wired-lan" Add extra handling on changing the "disable-fw-lldp" private flag to properly initialize software based DCB feature. Add default configuration of DCB functionality when Firmware LLDP agent is turned off, in case of driver probe and device reset on reconfiguration. Update copyright dates as appropriate. Software based DCB is a brand-new feature in i40e driver. Before, DCB was implemented by Firmware LLDP agent only. The agent was responsible for handling incoming DCB-related LLDP frames and applying received DCB configuration to hardware. Default configuration and new initialization flow for software based DCB is required. If LLDP agent is turned off in BIOS, or after setting private flag ("disable-fw-lldp on"). The driver initializes DCB functionality with default values, one traffic class with 100% bandwidth allocated. Signed-off-by: Aleksandr Loktionov Signed-off-by: Arkadiusz Kubalewski --- drivers/net/ethernet/intel/i40e/i40e.h | 15 +- .../net/ethernet/intel/i40e/i40e_ethtool.c | 20 +- drivers/net/ethernet/intel/i40e/i40e_main.c | 520 ++++++++++++++++-- 3 files changed, 496 insertions(+), 59 deletions(-) diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h index c517c47..3d5a46a 100644 --- a/drivers/net/ethernet/intel/i40e/i40e.h +++ b/drivers/net/ethernet/intel/i40e/i40e.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright(c) 2013 - 2018 Intel Corporation. */ +/* Copyright(c) 2013 - 2020 Intel Corporation. 
*/ #ifndef _I40E_H_ #define _I40E_H_ @@ -292,6 +292,9 @@ struct i40e_cloud_filter { u8 tunnel_type; }; +#define I40E_DCB_PRIO_TYPE_STRICT 0 +#define I40E_DCB_PRIO_TYPE_ETS 1 +#define I40E_DCB_STRICT_PRIO_CREDITS 127 /* DCB per TC information data structure */ struct i40e_tc_info { u16 qoffset; /* Queue offset from base queue */ @@ -635,6 +638,8 @@ struct i40e_pf { u16 dcbx_cap; struct i40e_filter_control_settings filter_settings; + struct i40e_rx_pb_config pb_cfg; /* Current Rx packet buffer config */ + struct i40e_dcbx_config tmp_cfg; /* GPIO defines used by PTP */ #define I40E_SDP3_2 18 @@ -1194,6 +1199,12 @@ bool i40e_is_vsi_in_vlan(struct i40e_vsi *vsi); int i40e_count_filters(struct i40e_vsi *vsi); struct i40e_mac_filter *i40e_find_mac(struct i40e_vsi *vsi, const u8 *macaddr); void i40e_vlan_stripping_enable(struct i40e_vsi *vsi); +static inline bool i40e_is_sw_dcb(struct i40e_pf *pf) +{ + return !!(pf->flags & I40E_FLAG_DISABLE_FW_LLDP); +} + +void i40e_set_lldp_forwarding(struct i40e_pf *pf, bool enable); #ifdef CONFIG_I40E_DCB void i40e_dcbnl_flush_apps(struct i40e_pf *pf, struct i40e_dcbx_config *old_cfg, @@ -1203,6 +1214,8 @@ void i40e_dcbnl_setup(struct i40e_vsi *vsi); bool i40e_dcb_need_reconfig(struct i40e_pf *pf, struct i40e_dcbx_config *old_cfg, struct i40e_dcbx_config *new_cfg); +int i40e_hw_dcb_config(struct i40e_pf *pf, struct i40e_dcbx_config *new_cfg); +int i40e_dcb_sw_default_config(struct i40e_pf *pf); #endif /* CONFIG_I40E_DCB */ void i40e_ptp_rx_hang(struct i40e_pf *pf); void i40e_ptp_tx_hang(struct i40e_pf *pf); diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c index ea3388f..f9d76f2 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c +++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c @@ -5217,23 +5217,13 @@ static int i40e_set_priv_flags(struct net_device *dev, u32 flags) if (changed_flags & I40E_FLAG_DISABLE_FW_LLDP) { if (new_flags & I40E_FLAG_DISABLE_FW_LLDP) { - struct i40e_dcbx_config *dcbcfg; - +#ifdef CONFIG_I40E_DCB + i40e_dcb_sw_default_config(pf); +#endif /* CONFIG_I40E_DCB */ + i40e_aq_cfg_lldp_mib_change_event(&pf->hw, false, NULL); i40e_aq_stop_lldp(&pf->hw, true, false, NULL); - i40e_aq_set_dcb_parameters(&pf->hw, true, NULL); - /* reset local_dcbx_config to default */ - dcbcfg = &pf->hw.local_dcbx_config; - dcbcfg->etscfg.willing = 1; - dcbcfg->etscfg.maxtcs = 0; - dcbcfg->etscfg.tcbwtable[0] = 100; - for (i = 1; i < I40E_MAX_TRAFFIC_CLASS; i++) - dcbcfg->etscfg.tcbwtable[i] = 0; - for (i = 0; i < I40E_MAX_USER_PRIORITY; i++) - dcbcfg->etscfg.prioritytable[i] = 0; - dcbcfg->etscfg.tsatable[0] = I40E_IEEE_TSA_ETS; - dcbcfg->pfc.willing = 1; - dcbcfg->pfc.pfccap = I40E_MAX_TRAFFIC_CLASS; } else { + i40e_set_lldp_forwarding(pf, false); status = i40e_aq_start_lldp(&pf->hw, false, NULL); if (status) { adq_err = pf->hw.aq.asq_last_status; diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c index 8dd8d7b..41850fb 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_main.c +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright(c) 2013 - 2018 Intel Corporation. */ +/* Copyright(c) 2013 - 2020 Intel Corporation. 
*/ #include #include @@ -35,7 +35,7 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit); static int i40e_setup_misc_vector(struct i40e_pf *pf); static void i40e_determine_queue_usage(struct i40e_pf *pf); static int i40e_setup_pf_filter_control(struct i40e_pf *pf); -static void i40e_prep_for_reset(struct i40e_pf *pf, bool lock_acquired); +static void i40e_prep_for_reset(struct i40e_pf *pf); static int i40e_reset(struct i40e_pf *pf); static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired); static int i40e_setup_misc_vector_for_recovery_mode(struct i40e_pf *pf); @@ -5307,6 +5307,7 @@ static int i40e_vsi_configure_bw_alloc(struct i40e_vsi *vsi, u8 enabled_tc, vsi->seid); return ret; } + memset(&bw_data, 0, sizeof(bw_data)); bw_data.tc_valid_bits = enabled_tc; for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) bw_data.tc_bw_credits[i] = bw_share[i]; @@ -5959,6 +5960,7 @@ static int i40e_channel_config_bw(struct i40e_vsi *vsi, struct i40e_channel *ch, i40e_status ret; int i; + memset(&bw_data, 0, sizeof(bw_data)); bw_data.tc_valid_bits = ch->enabled_tc; for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) bw_data.tc_bw_credits[i] = bw_share[i]; @@ -6414,6 +6416,9 @@ static void i40e_dcb_reconfigure(struct i40e_pf *pf) /* Enable the TCs available on PF to all VEBs */ tc_map = i40e_pf_get_tc_map(pf); + if (tc_map == I40E_DEFAULT_TRAFFIC_CLASS) + return; + for (v = 0; v < I40E_MAX_VEB; v++) { if (!pf->veb[v]) continue; @@ -6480,6 +6485,316 @@ static int i40e_resume_port_tx(struct i40e_pf *pf) return ret; } +/** + * i40e_suspend_port_tx - Suspend port Tx + * @pf: PF struct + * + * Suspend a port's Tx and issue a PF reset in case of failure. + **/ +static int i40e_suspend_port_tx(struct i40e_pf *pf) +{ + struct i40e_hw *hw = &pf->hw; + int ret; + + ret = i40e_aq_suspend_port_tx(hw, pf->mac_seid, NULL); + if (ret) { + dev_info(&pf->pdev->dev, + "Suspend Port Tx failed, err %s aq_err %s\n", + i40e_stat_str(&pf->hw, ret), + i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); + /* Schedule PF reset to recover */ + set_bit(__I40E_PF_RESET_REQUESTED, pf->state); + i40e_service_event_schedule(pf); + } + + return ret; +} + +/** + * i40e_hw_set_dcb_config - Program new DCBX settings into HW + * @pf: PF being configured + * @new_cfg: New DCBX configuration + * + * Program DCB settings into HW and reconfigure VEB/VSIs on + * given PF. Uses "Set LLDP MIB" AQC to program the hardware. 
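The function below first checks whether the requested configuration actually differs from the cached local MIB before quiescing the VSIs and reprogramming the hardware. Purely for illustration (this helper is not in the driver), a minimal sketch of that guard; the key point is that the two i40e_dcbx_config structures have to be compared by value, i.e. memcmp() over sizeof(*new_cfg) bytes of the pointed-to structures rather than over the pointer variables themselves:

#include "i40e.h"   /* assumed to provide struct i40e_dcbx_config */

/* Illustrative helper: true if the hardware needs to be reprogrammed */
static bool i40e_example_dcb_cfg_changed(const struct i40e_dcbx_config *old_cfg,
                                         const struct i40e_dcbx_config *new_cfg)
{
        /* compare the structures pointed to, not the pointers */
        return memcmp(old_cfg, new_cfg, sizeof(*new_cfg)) != 0;
}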
+ **/ +static int i40e_hw_set_dcb_config(struct i40e_pf *pf, + struct i40e_dcbx_config *new_cfg) +{ + struct i40e_dcbx_config *old_cfg = &pf->hw.local_dcbx_config; + int ret; + + /* Check if need reconfiguration */ + if (!memcmp(&new_cfg, &old_cfg, sizeof(new_cfg))) { + dev_dbg(&pf->pdev->dev, "No Change in DCB Config required.\n"); + return 0; + } + + /* Config change disable all VSIs */ + i40e_pf_quiesce_all_vsi(pf); + + /* Copy the new config to the current config */ + *old_cfg = *new_cfg; + old_cfg->etsrec = old_cfg->etscfg; + ret = i40e_set_dcb_config(&pf->hw); + if (ret) { + dev_info(&pf->pdev->dev, + "Set DCB Config failed, err %s aq_err %s\n", + i40e_stat_str(&pf->hw, ret), + i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); + goto out; + } + + /* Changes in configuration update VEB/VSI */ + i40e_dcb_reconfigure(pf); +out: + /* In case of reset do not try to resume anything */ + if (!test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state)) { + /* Re-start the VSIs if disabled */ + ret = i40e_resume_port_tx(pf); + /* In case of error no point in resuming VSIs */ + if (ret) + goto err; + i40e_pf_unquiesce_all_vsi(pf); + } +err: + return ret; +} + +/** + * i40e_hw_dcb_config - Program new DCBX settings into HW + * @pf: PF being configured + * @new_cfg: New DCBX configuration + * + * Program DCB settings into HW and reconfigure VEB/VSIs on + * given PF + **/ +int i40e_hw_dcb_config(struct i40e_pf *pf, struct i40e_dcbx_config *new_cfg) +{ + struct i40e_aqc_configure_switching_comp_ets_data ets_data; + u8 prio_type[I40E_MAX_TRAFFIC_CLASS] = {0}; + u32 mfs_tc[I40E_MAX_TRAFFIC_CLASS]; + struct i40e_dcbx_config *old_cfg; + u8 mode[I40E_MAX_TRAFFIC_CLASS]; + struct i40e_rx_pb_config pb_cfg; + struct i40e_hw *hw = &pf->hw; + u8 num_ports = hw->num_ports; + bool need_reconfig; + int ret = -EINVAL; + u8 lltc_map = 0; + u8 tc_map = 0; + u8 new_numtc; + u8 i; + + dev_dbg(&pf->pdev->dev, "Configuring DCB registers directly\n"); + /* Un-pack information to Program ETS HW via shared API + * numtc, tcmap + * LLTC map + * ETS/NON-ETS arbiter mode + * max exponent (credit refills) + * Total number of ports + * PFC priority bit-map + * Priority Table + * BW % per TC + * Arbiter mode between UPs sharing same TC + * TSA table (ETS or non-ETS) + * EEE enabled or not + * MFS TC table + */ + + new_numtc = i40e_dcb_get_num_tc(new_cfg); + + memset(&ets_data, 0, sizeof(ets_data)); + for (i = 0; i < new_numtc; i++) { + tc_map |= BIT(i); + switch (new_cfg->etscfg.tsatable[i]) { + case I40E_IEEE_TSA_ETS: + prio_type[i] = I40E_DCB_PRIO_TYPE_ETS; + ets_data.tc_bw_share_credits[i] = + new_cfg->etscfg.tcbwtable[i]; + break; + case I40E_IEEE_TSA_STRICT: + prio_type[i] = I40E_DCB_PRIO_TYPE_STRICT; + lltc_map |= BIT(i); + ets_data.tc_bw_share_credits[i] = + I40E_DCB_STRICT_PRIO_CREDITS; + break; + default: + /* Invalid TSA type */ + need_reconfig = false; + goto out; + } + } + + old_cfg = &hw->local_dcbx_config; + /* Check if need reconfiguration */ + need_reconfig = i40e_dcb_need_reconfig(pf, old_cfg, new_cfg); + + /* If needed, enable/disable frame tagging, disable all VSIs + * and suspend port tx + */ + if (need_reconfig) { + /* Enable DCB tagging only when more than one TC */ + if (new_numtc > 1) + pf->flags |= I40E_FLAG_DCB_ENABLED; + else + pf->flags &= ~I40E_FLAG_DCB_ENABLED; + + set_bit(__I40E_PORT_SUSPENDED, pf->state); + /* Reconfiguration needed quiesce all VSIs */ + i40e_pf_quiesce_all_vsi(pf); + ret = i40e_suspend_port_tx(pf); + if (ret) + goto err; + } + + /* Configure Port ETS Tx Scheduler */ + 
ets_data.tc_valid_bits = tc_map; + ets_data.tc_strict_priority_flags = lltc_map; + ret = i40e_aq_config_switch_comp_ets + (hw, pf->mac_seid, &ets_data, + i40e_aqc_opc_modify_switching_comp_ets, NULL); + if (ret) { + dev_info(&pf->pdev->dev, + "Modify Port ETS failed, err %s aq_err %s\n", + i40e_stat_str(&pf->hw, ret), + i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); + goto out; + } + + /* Configure Rx ETS HW */ + memset(&mode, I40E_DCB_ARB_MODE_ROUND_ROBIN, sizeof(mode)); + i40e_dcb_hw_set_num_tc(hw, new_numtc); + i40e_dcb_hw_rx_fifo_config(hw, I40E_DCB_ARB_MODE_ROUND_ROBIN, + I40E_DCB_ARB_MODE_STRICT_PRIORITY, + I40E_DCB_DEFAULT_MAX_EXPONENT, + lltc_map); + i40e_dcb_hw_rx_cmd_monitor_config(hw, new_numtc, num_ports); + i40e_dcb_hw_rx_ets_bw_config(hw, new_cfg->etscfg.tcbwtable, mode, + prio_type); + i40e_dcb_hw_pfc_config(hw, new_cfg->pfc.pfcenable, + new_cfg->etscfg.prioritytable); + i40e_dcb_hw_rx_up2tc_config(hw, new_cfg->etscfg.prioritytable); + + /* Configure Rx Packet Buffers in HW */ + for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) { + mfs_tc[i] = pf->vsi[pf->lan_vsi]->netdev->mtu; + mfs_tc[i] += I40E_PACKET_HDR_PAD; + } + + i40e_dcb_hw_calculate_pool_sizes(hw, num_ports, + false, new_cfg->pfc.pfcenable, + mfs_tc, &pb_cfg); + i40e_dcb_hw_rx_pb_config(hw, &pf->pb_cfg, &pb_cfg); + + /* Update the local Rx Packet buffer config */ + pf->pb_cfg = pb_cfg; + + /* Inform the FW about changes to DCB configuration */ + ret = i40e_aq_dcb_updated(&pf->hw, NULL); + if (ret) { + dev_info(&pf->pdev->dev, + "DCB Updated failed, err %s aq_err %s\n", + i40e_stat_str(&pf->hw, ret), + i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); + goto out; + } + + /* Update the port DCBx configuration */ + *old_cfg = *new_cfg; + + /* Changes in configuration update VEB/VSI */ + i40e_dcb_reconfigure(pf); +out: + /* Re-start the VSIs if disabled */ + if (need_reconfig) { + ret = i40e_resume_port_tx(pf); + + clear_bit(__I40E_PORT_SUSPENDED, pf->state); + /* In case of error no point in resuming VSIs */ + if (ret) + goto err; + + /* Wait for the PF's queues to be disabled */ + ret = i40e_pf_wait_queues_disabled(pf); + if (ret) { + /* Schedule PF reset to recover */ + set_bit(__I40E_PF_RESET_REQUESTED, pf->state); + i40e_service_event_schedule(pf); + goto err; + } else { + i40e_pf_unquiesce_all_vsi(pf); + set_bit(__I40E_CLIENT_SERVICE_REQUESTED, pf->state); + set_bit(__I40E_CLIENT_L2_CHANGE, pf->state); + } + /* registers are set, lets apply */ + if (pf->hw_features & I40E_HW_USE_SET_LLDP_MIB) + ret = i40e_hw_set_dcb_config(pf, new_cfg); + } + +err: + return ret; +} + +/** + * i40e_dcb_sw_default_config - Set default DCB configuration when DCB in SW + * @pf: PF being queried + * + * Set default DCB configuration in case DCB is to be done in SW. 
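The default programmed below is a single traffic class, TC0, owning 100% of the bandwidth, with ETS and PFC left willing. As an illustration only (this is not code from the patch, and the exact values of the I40E_IEEE_DEFAULT_* constants are assumed), the same default expressed as the struct ieee_ets that a dcbnl ieee_getets query would roughly report:

#include <linux/string.h>
#include <linux/dcbnl.h>   /* struct ieee_ets, IEEE_8021QAZ_TSA_ETS */

/* Illustrative only: the SW DCB default seen through the dcbnl IEEE API */
static void i40e_example_default_ieee_ets(struct ieee_ets *ets)
{
        memset(ets, 0, sizeof(*ets));
        ets->willing = 1;                       /* accept peer configuration */
        ets->ets_cap = 8;                       /* up to 8 traffic classes */
        ets->tc_tx_bw[0] = 100;                 /* all bandwidth on TC0 */
        ets->tc_tsa[0] = IEEE_8021QAZ_TSA_ETS;  /* TC0 arbitrated as ETS */
        /* prio_tc[] stays zeroed: every user priority maps to TC0 */
}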
+ **/ +int i40e_dcb_sw_default_config(struct i40e_pf *pf) +{ + struct i40e_dcbx_config *dcb_cfg = &pf->hw.local_dcbx_config; + struct i40e_aqc_configure_switching_comp_ets_data ets_data; + struct i40e_hw *hw = &pf->hw; + int err; + + if (pf->hw_features & I40E_HW_USE_SET_LLDP_MIB) { + /* Update the local cached instance with TC0 ETS */ + memset(&pf->tmp_cfg, 0, sizeof(struct i40e_dcbx_config)); + pf->tmp_cfg.etscfg.willing = I40E_IEEE_DEFAULT_ETS_WILLING; + pf->tmp_cfg.etscfg.maxtcs = 0; + pf->tmp_cfg.etscfg.tcbwtable[0] = I40E_IEEE_DEFAULT_ETS_TCBW; + pf->tmp_cfg.etscfg.tsatable[0] = I40E_IEEE_TSA_ETS; + pf->tmp_cfg.pfc.willing = I40E_IEEE_DEFAULT_PFC_WILLING; + pf->tmp_cfg.pfc.pfccap = I40E_MAX_TRAFFIC_CLASS; + /* FW needs one App to configure HW */ + pf->tmp_cfg.numapps = I40E_IEEE_DEFAULT_NUM_APPS; + pf->tmp_cfg.app[0].selector = I40E_APP_SEL_ETHTYPE; + pf->tmp_cfg.app[0].priority = I40E_IEEE_DEFAULT_APP_PRIO; + pf->tmp_cfg.app[0].protocolid = I40E_APP_PROTOID_FCOE; + + return i40e_hw_set_dcb_config(pf, &pf->tmp_cfg); + } + + memset(&ets_data, 0, sizeof(ets_data)); + ets_data.tc_valid_bits = I40E_DEFAULT_TRAFFIC_CLASS; /* TC0 only */ + ets_data.tc_strict_priority_flags = 0; /* ETS */ + ets_data.tc_bw_share_credits[0] = I40E_IEEE_DEFAULT_ETS_TCBW; /* 100% to TC0 */ + + /* Enable ETS on the Physical port */ + err = i40e_aq_config_switch_comp_ets + (hw, pf->mac_seid, &ets_data, + i40e_aqc_opc_enable_switching_comp_ets, NULL); + if (err) { + dev_info(&pf->pdev->dev, + "Enable Port ETS failed, err %s aq_err %s\n", + i40e_stat_str(&pf->hw, err), + i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); + err = -ENOENT; + goto out; + } + + /* Update the local cached instance with TC0 ETS */ + dcb_cfg->etscfg.willing = I40E_IEEE_DEFAULT_ETS_WILLING; + dcb_cfg->etscfg.cbs = 0; + dcb_cfg->etscfg.maxtcs = I40E_MAX_TRAFFIC_CLASS; + dcb_cfg->etscfg.tcbwtable[0] = I40E_IEEE_DEFAULT_ETS_TCBW; + +out: + return err; +} + /** * i40e_init_pf_dcb - Initialize DCB configuration * @pf: PF being configured @@ -6490,18 +6805,31 @@ static int i40e_resume_port_tx(struct i40e_pf *pf) static int i40e_init_pf_dcb(struct i40e_pf *pf) { struct i40e_hw *hw = &pf->hw; - int err = 0; + int err; /* Do not enable DCB for SW1 and SW2 images even if the FW is capable * Also do not enable DCBx if FW LLDP agent is disabled */ - if ((pf->hw_features & I40E_HW_NO_DCB_SUPPORT) || - (pf->flags & I40E_FLAG_DISABLE_FW_LLDP)) { - dev_info(&pf->pdev->dev, "DCB is not supported or FW LLDP is disabled\n"); + if (pf->hw_features & I40E_HW_NO_DCB_SUPPORT) { + dev_info(&pf->pdev->dev, "DCB is not supported.\n"); err = I40E_NOT_SUPPORTED; goto out; } - + if (pf->flags & I40E_FLAG_DISABLE_FW_LLDP) { + dev_info(&pf->pdev->dev, "FW LLDP is disabled, attempting SW DCB\n"); + err = i40e_dcb_sw_default_config(pf); + if (err) { + dev_info(&pf->pdev->dev, "Could not initialize SW DCB\n"); + goto out; + } + dev_info(&pf->pdev->dev, "SW DCB initialization succeeded.\n"); + pf->dcbx_cap = DCB_CAP_DCBX_HOST | + DCB_CAP_DCBX_VER_IEEE; + /* at init capable but disabled */ + pf->flags |= I40E_FLAG_DCB_CAPABLE; + pf->flags &= ~I40E_FLAG_DCB_ENABLED; + goto out; + } err = i40e_init_dcb(hw, true); if (!err) { /* Device/Function is not DCBX capable */ @@ -6540,6 +6868,40 @@ static int i40e_init_pf_dcb(struct i40e_pf *pf) } #endif /* CONFIG_I40E_DCB */ +/** + * i40e_set_lldp_forwarding - set forwarding of lldp frames + * @pf: PF being configured + * @enable: if forwarding to OS shall be enabled + * + * Toggle forwarding of lldp frames behavior, + * When passing 
DCB control from firmware to software + * lldp frames must be forwarded to the software based + * lldp agent. + */ +void i40e_set_lldp_forwarding(struct i40e_pf *pf, bool enable) +{ + if (pf->lan_vsi == I40E_NO_VSI) + return; + + if (!pf->vsi[pf->lan_vsi]) + return; + + /* No need to check the outcome, commands may fail + * if desired value is already set + */ + i40e_aq_add_rem_control_packet_filter(&pf->hw, NULL, ETH_P_LLDP, + I40E_AQC_ADD_CONTROL_PACKET_FLAGS_TX | + I40E_AQC_ADD_CONTROL_PACKET_FLAGS_IGNORE_MAC, + pf->vsi[pf->lan_vsi]->seid, 0, + enable, NULL, NULL); + + i40e_aq_add_rem_control_packet_filter(&pf->hw, NULL, ETH_P_LLDP, + I40E_AQC_ADD_CONTROL_PACKET_FLAGS_RX | + I40E_AQC_ADD_CONTROL_PACKET_FLAGS_IGNORE_MAC, + pf->vsi[pf->lan_vsi]->seid, 0, + enable, NULL, NULL); +} + /** * i40e_print_link_message - print link up or down * @vsi: the VSI for which link needs a message @@ -8304,7 +8666,6 @@ int i40e_open(struct net_device *netdev) TCP_FLAG_FIN | TCP_FLAG_CWR) >> 16); wr32(&pf->hw, I40E_GLLAN_TSOMSK_L, be32_to_cpu(TCP_FLAG_CWR) >> 16); - udp_tunnel_get_rx_info(netdev); return 0; @@ -8684,6 +9045,14 @@ static int i40e_handle_lldp_event(struct i40e_pf *pf, int ret = 0; u8 type; + /* X710-T*L 2.5G and 5G speeds don't support DCB */ + if (I40E_IS_X710TL_DEVICE(hw->device_id) && + (hw->phy.link_info.link_speed & + ~(I40E_LINK_SPEED_2_5GB | I40E_LINK_SPEED_5GB)) && + !(pf->flags & I40E_FLAG_DCB_CAPABLE)) + /* let firmware decide if the DCB should be disabled */ + pf->flags |= I40E_FLAG_DCB_CAPABLE; + /* Not DCB capable or capability disabled */ if (!(pf->flags & I40E_FLAG_DCB_CAPABLE)) return ret; @@ -8715,10 +9084,20 @@ static int i40e_handle_lldp_event(struct i40e_pf *pf, /* Get updated DCBX data from firmware */ ret = i40e_get_dcb_config(&pf->hw); if (ret) { - dev_info(&pf->pdev->dev, - "Failed querying DCB configuration data from firmware, err %s aq_err %s\n", - i40e_stat_str(&pf->hw, ret), - i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); + /* X710-T*L 2.5G and 5G speeds don't support DCB */ + if (I40E_IS_X710TL_DEVICE(hw->device_id) && + (hw->phy.link_info.link_speed & + (I40E_LINK_SPEED_2_5GB | I40E_LINK_SPEED_5GB))) { + dev_warn(&pf->pdev->dev, + "DCB is not supported for X710-T*L 2.5/5G speeds\n"); + pf->flags &= ~I40E_FLAG_DCB_CAPABLE; + } else { + dev_info(&pf->pdev->dev, + "Failed querying DCB configuration data from firmware, err %s aq_err %s\n", + i40e_stat_str(&pf->hw, ret), + i40e_aq_str(&pf->hw, + pf->hw.aq.asq_last_status)); + } goto exit; } @@ -9165,6 +9544,9 @@ static void i40e_link_event(struct i40e_pf *pf) u8 new_link_speed, old_link_speed; i40e_status status; bool new_link, old_link; +#ifdef CONFIG_I40E_DCB + int err; +#endif /* CONFIG_I40E_DCB */ /* set this to force the get_link_status call to refresh state */ pf->hw.phy.get_link_info = true; @@ -9208,6 +9590,31 @@ static void i40e_link_event(struct i40e_pf *pf) if (pf->flags & I40E_FLAG_PTP) i40e_ptp_set_increment(pf); +#ifdef CONFIG_I40E_DCB + if (new_link == old_link) + return; + /* Not SW DCB so firmware will take care of default settings */ + if (pf->dcbx_cap & DCB_CAP_DCBX_LLD_MANAGED) + return; + + /* We cover here only link down, as after link up in case of SW DCB + * SW LLDP agent will take care of setting it up + */ + if (!new_link) { + dev_dbg(&pf->pdev->dev, "Reconfig DCB to single TC as result of Link Down\n"); + memset(&pf->tmp_cfg, 0, sizeof(pf->tmp_cfg)); + err = i40e_dcb_sw_default_config(pf); + if (err) { + pf->flags &= ~(I40E_FLAG_DCB_CAPABLE | + I40E_FLAG_DCB_ENABLED); + } else { + 
pf->dcbx_cap = DCB_CAP_DCBX_HOST | + DCB_CAP_DCBX_VER_IEEE; + pf->flags |= I40E_FLAG_DCB_CAPABLE; + pf->flags &= ~I40E_FLAG_DCB_ENABLED; + } + } +#endif /* CONFIG_I40E_DCB */ } /** @@ -9284,7 +9691,7 @@ static void i40e_reset_subtask(struct i40e_pf *pf) * precedence before starting a new reset sequence. */ if (test_bit(__I40E_RESET_INTR_RECEIVED, pf->state)) { - i40e_prep_for_reset(pf, false); + i40e_prep_for_reset(pf); i40e_reset(pf); i40e_rebuild(pf, false, false); } @@ -9416,7 +9823,9 @@ static void i40e_clean_adminq_subtask(struct i40e_pf *pf) switch (opcode) { case i40e_aqc_opc_get_link_status: + rtnl_lock(); i40e_handle_link_event(pf, &event); + rtnl_unlock(); break; case i40e_aqc_opc_send_msg_to_pf: ret = i40e_vc_process_vf_msg(pf, @@ -9916,12 +10325,10 @@ static int i40e_rebuild_channels(struct i40e_vsi *vsi) /** * i40e_prep_for_reset - prep for the core to reset * @pf: board private structure - * @lock_acquired: indicates whether or not the lock has been acquired - * before this function was called. * * Close up the VFs and other things in prep for PF Reset. **/ -static void i40e_prep_for_reset(struct i40e_pf *pf, bool lock_acquired) +static void i40e_prep_for_reset(struct i40e_pf *pf) { struct i40e_hw *hw = &pf->hw; i40e_status ret = 0; @@ -9936,12 +10343,7 @@ static void i40e_prep_for_reset(struct i40e_pf *pf, bool lock_acquired) dev_dbg(&pf->pdev->dev, "Tearing down internal switch for reset\n"); /* quiesce the VSIs and their queues that are not already DOWN */ - /* pf_quiesce_all_vsi modifies netdev structures -rtnl_lock needed */ - if (!lock_acquired) - rtnl_lock(); i40e_pf_quiesce_all_vsi(pf); - if (!lock_acquired) - rtnl_unlock(); for (v = 0; v < pf->num_alloc_vsi; v++) { if (pf->vsi[v]) @@ -10153,24 +10555,41 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired) goto end_core_reset; } - /* Enable FW to write a default DCB config on link-up */ - i40e_aq_set_dcb_parameters(hw, true, NULL); - -#ifdef CONFIG_I40E_DCB - ret = i40e_init_pf_dcb(pf); - if (ret) { - dev_info(&pf->pdev->dev, "DCB init failed %d, disabled\n", ret); - pf->flags &= ~I40E_FLAG_DCB_CAPABLE; - /* Continue without DCB enabled */ - } -#endif /* CONFIG_I40E_DCB */ - /* do basic switch setup */ if (!lock_acquired) rtnl_lock(); ret = i40e_setup_pf_switch(pf, reinit); if (ret) goto end_unlock; +#ifdef CONFIG_I40E_DCB + /* Enable FW to write a default DCB config on link-up + * unless I40E_FLAG_TC_MQPRIO was enabled or DCB + * is not supported with new link speed + */ + if (pf->flags & I40E_FLAG_TC_MQPRIO) { + i40e_aq_set_dcb_parameters(hw, false, NULL); + } else { + if (I40E_IS_X710TL_DEVICE(hw->device_id) && + (hw->phy.link_info.link_speed & + (I40E_LINK_SPEED_2_5GB | I40E_LINK_SPEED_5GB))) { + i40e_aq_set_dcb_parameters(hw, false, NULL); + dev_warn(&pf->pdev->dev, + "DCB is not supported for X710-T*L 2.5/5G speeds\n"); + pf->flags &= ~I40E_FLAG_DCB_CAPABLE; + } else { + i40e_aq_set_dcb_parameters(hw, true, NULL); + ret = i40e_init_pf_dcb(pf); + if (ret) { + dev_info(&pf->pdev->dev, "DCB init failed %d, disabled\n", + ret); + pf->flags &= ~I40E_FLAG_DCB_CAPABLE; + /* Continue without DCB enabled */ + } + } + } + +#endif /* CONFIG_I40E_DCB */ + /* The driver only wants link up/down and module qualification * reports from firmware. Note the negative logic. 
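The X710-T*L special casing above reflects that those ports cannot do DCB while linked at 2.5G or 5G, so the LLDP event handler and the rebuild path either clear I40E_FLAG_DCB_CAPABLE or restore it once the link runs at another speed. A sketch of that repeated condition as a predicate (the helper does not exist in the driver and assumes the i40e headers for the device and link-speed macros):

#include "i40e.h"   /* assumed to provide I40E_IS_X710TL_DEVICE and speed flags */

/* Illustrative helper: DCB cannot be offered on X710-T*L at 2.5G/5G */
static bool i40e_example_dcb_blocked_by_speed(struct i40e_hw *hw)
{
        return I40E_IS_X710TL_DEVICE(hw->device_id) &&
               (hw->phy.link_info.link_speed &
                (I40E_LINK_SPEED_2_5GB | I40E_LINK_SPEED_5GB));
}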
*/ @@ -10307,6 +10726,10 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired) */ i40e_add_filter_to_drop_tx_flow_control_frames(&pf->hw, pf->main_vsi_seid); +#ifdef CONFIG_I40E_DCB + if (pf->flags & I40E_FLAG_DISABLE_FW_LLDP) + i40e_set_lldp_forwarding(pf, true); +#endif /* CONFIG_I40E_DCB */ /* restart the VSIs that were rebuilt and running before the reset */ i40e_pf_unquiesce_all_vsi(pf); @@ -10373,7 +10796,7 @@ static void i40e_reset_and_rebuild(struct i40e_pf *pf, bool reinit, **/ static void i40e_handle_reset_warning(struct i40e_pf *pf, bool lock_acquired) { - i40e_prep_for_reset(pf, lock_acquired); + i40e_prep_for_reset(pf); i40e_reset_and_rebuild(pf, false, lock_acquired); } @@ -11707,7 +12130,7 @@ int i40e_reconfig_rss_queues(struct i40e_pf *pf, int queue_count) u16 qcount; vsi->req_queue_pairs = queue_count; - i40e_prep_for_reset(pf, true); + i40e_prep_for_reset(pf); pf->alloc_rss_size = new_rss_size; @@ -12525,7 +12948,7 @@ static int i40e_xdp_setup(struct i40e_vsi *vsi, need_reset = (i40e_enabled_xdp_vsi(vsi) != !!prog); if (need_reset) - i40e_prep_for_reset(pf, true); + i40e_prep_for_reset(pf); old_prog = xchg(&vsi->xdp_prog, prog); @@ -14765,6 +15188,10 @@ static int i40e_init_recovery_mode(struct i40e_pf *pf, struct i40e_hw *hw) static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent) { struct i40e_aq_get_phy_abilities_resp abilities; +#ifdef CONFIG_I40E_DCB + enum i40e_get_fw_lldp_status_resp lldp_status; + i40e_status status; +#endif /* CONFIG_I40E_DCB */ struct i40e_pf *pf; struct i40e_hw *hw; static u16 pfs_found; @@ -15017,11 +15444,15 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent) i40e_get_port_mac_addr(hw, hw->mac.port_addr); if (is_valid_ether_addr(hw->mac.port_addr)) pf->hw_features |= I40E_HW_PORT_ID_VALID; - i40e_ptp_alloc_pins(pf); pci_set_drvdata(pdev, pf); pci_save_state(pdev); - +#ifdef CONFIG_I40E_DCB + status = i40e_get_fw_lldp_status(&pf->hw, &lldp_status); + (!status && + lldp_status == I40E_GET_FW_LLDP_STATUS_ENABLED) ? + (pf->flags &= ~I40E_FLAG_DISABLE_FW_LLDP) : + (pf->flags |= I40E_FLAG_DISABLE_FW_LLDP); dev_info(&pdev->dev, (pf->flags & I40E_FLAG_DISABLE_FW_LLDP) ? 
"FW LLDP is disabled\n" : @@ -15030,7 +15461,6 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent) /* Enable FW to write default DCB config on link-up */ i40e_aq_set_dcb_parameters(hw, true, NULL); -#ifdef CONFIG_I40E_DCB err = i40e_init_pf_dcb(pf); if (err) { dev_info(&pdev->dev, "DCB init failed %d, disabled\n", err); @@ -15323,6 +15753,10 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent) */ i40e_add_filter_to_drop_tx_flow_control_frames(&pf->hw, pf->main_vsi_seid); +#ifdef CONFIG_I40E_DCB + if (pf->flags & I40E_FLAG_DISABLE_FW_LLDP) + i40e_set_lldp_forwarding(pf, true); +#endif /* CONFIG_I40E_DCB */ if ((pf->hw.device_id == I40E_DEV_ID_10G_BASE_T) || (pf->hw.device_id == I40E_DEV_ID_10G_BASE_T4)) @@ -15525,7 +15959,7 @@ static pci_ers_result_t i40e_pci_error_detected(struct pci_dev *pdev, /* shutdown all operations */ if (!test_bit(__I40E_SUSPENDED, pf->state)) - i40e_prep_for_reset(pf, false); + i40e_prep_for_reset(pf); /* Request a slot reset */ return PCI_ERS_RESULT_NEED_RESET; @@ -15575,7 +16009,7 @@ static void i40e_pci_error_reset_prepare(struct pci_dev *pdev) { struct i40e_pf *pf = pci_get_drvdata(pdev); - i40e_prep_for_reset(pf, false); + i40e_prep_for_reset(pf); } /** @@ -15679,7 +16113,7 @@ static void i40e_shutdown(struct pci_dev *pdev) if (pf->wol_en && (pf->hw_features & I40E_HW_WOL_MC_MAGIC_PKT_WAKE)) i40e_enable_mc_magic_wake(pf); - i40e_prep_for_reset(pf, false); + i40e_prep_for_reset(pf); wr32(hw, I40E_PFPM_APM, (pf->wol_en ? I40E_PFPM_APM_APME_MASK : 0)); @@ -15738,7 +16172,7 @@ static int __maybe_unused i40e_suspend(struct device *dev) */ rtnl_lock(); - i40e_prep_for_reset(pf, true); + i40e_prep_for_reset(pf); wr32(hw, I40E_PFPM_APM, (pf->wol_en ? I40E_PFPM_APM_APME_MASK : 0)); wr32(hw, I40E_PFPM_WUFC, (pf->wol_en ? 
I40E_PFPM_WUFC_MAG_MASK : 0)); From patchwork Mon Oct 19 23:50:29 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kubalewski, Arkadiusz" X-Patchwork-Id: 1384547 X-Patchwork-Delegate: anthony.l.nguyen@intel.com Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=osuosl.org (client-ip=140.211.166.137; helo=fraxinus.osuosl.org; envelope-from=intel-wired-lan-bounces@osuosl.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=intel.com Received: from fraxinus.osuosl.org (smtp4.osuosl.org [140.211.166.137]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 4CFYYn2G2jz9sSC for ; Tue, 20 Oct 2020 10:56:13 +1100 (AEDT) Received: from localhost (localhost [127.0.0.1]) by fraxinus.osuosl.org (Postfix) with ESMTP id EBE2A869EA; Mon, 19 Oct 2020 23:56:11 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from fraxinus.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id dLnbLNuxwXng; Mon, 19 Oct 2020 23:56:09 +0000 (UTC) Received: from ash.osuosl.org (ash.osuosl.org [140.211.166.34]) by fraxinus.osuosl.org (Postfix) with ESMTP id 58862869F7; Mon, 19 Oct 2020 23:56:09 +0000 (UTC) X-Original-To: intel-wired-lan@lists.osuosl.org Delivered-To: intel-wired-lan@lists.osuosl.org Received: from hemlock.osuosl.org (smtp2.osuosl.org [140.211.166.133]) by ash.osuosl.org (Postfix) with ESMTP id 731B71BF969 for ; Mon, 19 Oct 2020 23:56:07 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by hemlock.osuosl.org (Postfix) with ESMTP id 2B76484371 for ; Mon, 19 Oct 2020 23:56:06 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from hemlock.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id UfKrRlrBjgxD for ; Mon, 19 Oct 2020 23:56:03 +0000 (UTC) X-Greylist: domain auto-whitelisted by SQLgrey-1.7.6 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by hemlock.osuosl.org (Postfix) with ESMTPS id 0A78187298 for ; Mon, 19 Oct 2020 23:56:03 +0000 (UTC) IronPort-SDR: oIDCldpxpNZ8MjCXOkWHFKE9ME/mZ/EwRMzL+e3aQiq9KHfEOM2DX2fQQow3NmVmFNNBYg9+Li T6fNu9qkC8dw== X-IronPort-AV: E=McAfee;i="6000,8403,9779"; a="166351761" X-IronPort-AV: E=Sophos;i="5.77,395,1596524400"; d="scan'208";a="166351761" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga004.jf.intel.com ([10.7.209.38]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Oct 2020 16:56:02 -0700 IronPort-SDR: OlZdwy5SNRzs6H1ijqEl3efuMigGJrOFCsLCkg1pK51NWDYog/s4P3BMr0Zpfmr8zS4gcnu1J3 FExKwkRVOh9A== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,395,1596524400"; d="scan'208";a="465712647" Received: from amlin-018-053.igk.intel.com ([10.102.18.53]) by orsmga004.jf.intel.com with ESMTP; 19 Oct 2020 16:56:01 -0700 From: Arkadiusz Kubalewski To: intel-wired-lan@lists.osuosl.org Date: Mon, 19 Oct 2020 23:50:29 +0000 Message-Id: <20201019235029.176050-4-arkadiusz.kubalewski@intel.com> X-Mailer: git-send-email 2.26.0 In-Reply-To: <20201019235029.176050-1-arkadiusz.kubalewski@intel.com> References: <20201019235029.176050-1-arkadiusz.kubalewski@intel.com> MIME-Version: 1.0 Subject: [Intel-wired-lan] [PATCH net-next 3/3] i40e: 
Add netlink callbacks support for software based DCB X-BeenThere: intel-wired-lan@osuosl.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel Wired Ethernet Linux Kernel Driver Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Aleksandr Loktionov Errors-To: intel-wired-lan-bounces@osuosl.org Sender: "Intel-wired-lan" Add callbacks used by software based LLDP agent, which allows to configure DCB feature from userspace. Update copyright dates as appropriate. If LLDP agent is turned off in BIOS, or after setting private flag ("disable-fw-lldp on"). The driver initialized DCB functionality with default values, one traffic class with 100% bandwidth allocated. The new netlink callbacks are required for software LLDP agent, it must be able to acquire current DCB configuration of a network port and apply DCB configuration changes, if required. Signed-off-by: Aleksandr Loktionov Signed-off-by: Arkadiusz Kubalewski --- drivers/net/ethernet/intel/i40e/i40e_dcb_nl.c | 752 +++++++++++++++++- 1 file changed, 745 insertions(+), 7 deletions(-) diff --git a/drivers/net/ethernet/intel/i40e/i40e_dcb_nl.c b/drivers/net/ethernet/intel/i40e/i40e_dcb_nl.c index 9deae9a35423..137ec47968c4 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_dcb_nl.c +++ b/drivers/net/ethernet/intel/i40e/i40e_dcb_nl.c @@ -1,10 +1,14 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright(c) 2013 - 2018 Intel Corporation. */ +/* Copyright(c) 2013 - 2020 Intel Corporation. */ #ifdef CONFIG_I40E_DCB #include "i40e.h" #include +#define I40E_DCBNL_STATUS_SUCCESS 0 +#define I40E_DCBNL_STATUS_ERROR 1 +static bool i40e_dcbnl_find_app(struct i40e_dcbx_config *cfg, + struct i40e_dcb_app_priority_table *app); /** * i40e_get_pfc_delay - retrieve PFC Link Delay * @hw: pointer to hardware struct @@ -33,14 +37,13 @@ static int i40e_dcbnl_ieee_getets(struct net_device *dev, { struct i40e_pf *pf = i40e_netdev_to_pf(dev); struct i40e_dcbx_config *dcbxcfg; - struct i40e_hw *hw = &pf->hw; if (!(pf->dcbx_cap & DCB_CAP_DCBX_VER_IEEE)) return -EINVAL; - dcbxcfg = &hw->local_dcbx_config; + dcbxcfg = &pf->hw.local_dcbx_config; ets->willing = dcbxcfg->etscfg.willing; - ets->ets_cap = dcbxcfg->etscfg.maxtcs; + ets->ets_cap = I40E_MAX_TRAFFIC_CLASS; ets->cbs = dcbxcfg->etscfg.cbs; memcpy(ets->tc_tx_bw, dcbxcfg->etscfg.tcbwtable, sizeof(ets->tc_tx_bw)); @@ -84,7 +87,7 @@ static int i40e_dcbnl_ieee_getpfc(struct net_device *dev, pfc->mbc = dcbxcfg->pfc.mbc; i40e_get_pfc_delay(hw, &pfc->delay); - /* Get Requests/Indicatiosn */ + /* Get Requests/Indications */ for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) { pfc->requests[i] = pf->stats.priority_xoff_tx[i]; pfc->indications[i] = pf->stats.priority_xoff_rx[i]; @@ -93,6 +96,713 @@ static int i40e_dcbnl_ieee_getpfc(struct net_device *dev, return 0; } +/** + * i40e_dcbnl_ieee_setets - set IEEE ETS configuration + * @netdev: the corresponding netdev + * @ets: structure to hold the ETS information + * + * Set IEEE ETS configuration + **/ +static int i40e_dcbnl_ieee_setets(struct net_device *netdev, + struct ieee_ets *ets) +{ + struct i40e_pf *pf = i40e_netdev_to_pf(netdev); + struct i40e_dcbx_config *old_cfg; + int i, ret; + + if (!(pf->dcbx_cap & DCB_CAP_DCBX_VER_IEEE) || + (pf->dcbx_cap & DCB_CAP_DCBX_LLD_MANAGED)) + return -EINVAL; + + old_cfg = &pf->hw.local_dcbx_config; + /* Copy current config into temp */ + pf->tmp_cfg = *old_cfg; + + /* Update the ETS configuration for temp */ + pf->tmp_cfg.etscfg.willing = ets->willing; + pf->tmp_cfg.etscfg.maxtcs = 
I40E_MAX_TRAFFIC_CLASS; + pf->tmp_cfg.etscfg.cbs = ets->cbs; + for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) { + pf->tmp_cfg.etscfg.tcbwtable[i] = ets->tc_tx_bw[i]; + pf->tmp_cfg.etscfg.tsatable[i] = ets->tc_tsa[i]; + pf->tmp_cfg.etscfg.prioritytable[i] = ets->prio_tc[i]; + pf->tmp_cfg.etsrec.tcbwtable[i] = ets->tc_reco_bw[i]; + pf->tmp_cfg.etsrec.tsatable[i] = ets->tc_reco_tsa[i]; + pf->tmp_cfg.etsrec.prioritytable[i] = ets->reco_prio_tc[i]; + } + + /* Commit changes to HW */ + ret = i40e_hw_dcb_config(pf, &pf->tmp_cfg); + if (ret) { + dev_info(&pf->pdev->dev, + "Failed setting DCB ETS configuration err %s aq_err %s\n", + i40e_stat_str(&pf->hw, ret), + i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); + return -EINVAL; + } + + return 0; +} + +/** + * i40e_dcbnl_ieee_setpfc - set local IEEE PFC configuration + * @netdev: the corresponding netdev + * @pfc: structure to hold the PFC information + * + * Sets local IEEE PFC configuration + **/ +static int i40e_dcbnl_ieee_setpfc(struct net_device *netdev, + struct ieee_pfc *pfc) +{ + struct i40e_pf *pf = i40e_netdev_to_pf(netdev); + struct i40e_dcbx_config *old_cfg; + int ret; + + if (!(pf->dcbx_cap & DCB_CAP_DCBX_VER_IEEE) || + (pf->dcbx_cap & DCB_CAP_DCBX_LLD_MANAGED)) + return -EINVAL; + + old_cfg = &pf->hw.local_dcbx_config; + /* Copy current config into temp */ + pf->tmp_cfg = *old_cfg; + if (pfc->pfc_cap) + pf->tmp_cfg.pfc.pfccap = pfc->pfc_cap; + else + pf->tmp_cfg.pfc.pfccap = I40E_MAX_TRAFFIC_CLASS; + pf->tmp_cfg.pfc.pfcenable = pfc->pfc_en; + + ret = i40e_hw_dcb_config(pf, &pf->tmp_cfg); + if (ret) { + dev_info(&pf->pdev->dev, + "Failed setting DCB PFC configuration err %s aq_err %s\n", + i40e_stat_str(&pf->hw, ret), + i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); + return -EINVAL; + } + + return 0; +} + +/** + * i40e_dcbnl_ieee_setapp - set local IEEE App configuration + * @netdev: the corresponding netdev + * @app: structure to hold the Application information + * + * Sets local IEEE App configuration + **/ +static int i40e_dcbnl_ieee_setapp(struct net_device *netdev, + struct dcb_app *app) +{ + struct i40e_pf *pf = i40e_netdev_to_pf(netdev); + struct i40e_dcb_app_priority_table new_app; + struct i40e_dcbx_config *old_cfg; + int ret; + + if (!(pf->dcbx_cap & DCB_CAP_DCBX_VER_IEEE) || + (pf->dcbx_cap & DCB_CAP_DCBX_LLD_MANAGED)) + return -EINVAL; + + old_cfg = &pf->hw.local_dcbx_config; + if (old_cfg->numapps == I40E_DCBX_MAX_APPS) + return -EINVAL; + + ret = dcb_ieee_setapp(netdev, app); + if (ret) + return ret; + + new_app.selector = app->selector; + new_app.protocolid = app->protocol; + new_app.priority = app->priority; + /* Already internally available */ + if (i40e_dcbnl_find_app(old_cfg, &new_app)) + return 0; + + /* Copy current config into temp */ + pf->tmp_cfg = *old_cfg; + /* Add the app */ + pf->tmp_cfg.app[pf->tmp_cfg.numapps++] = new_app; + + ret = i40e_hw_dcb_config(pf, &pf->tmp_cfg); + if (ret) { + dev_info(&pf->pdev->dev, + "Failed setting DCB configuration err %s aq_err %s\n", + i40e_stat_str(&pf->hw, ret), + i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); + return -EINVAL; + } + + return 0; +} + +/** + * i40e_dcbnl_ieee_delapp - delete local IEEE App configuration + * @netdev: the corresponding netdev + * @app: structure to hold the Application information + * + * Deletes local IEEE App configuration other than the first application + * required by firmware + **/ +static int i40e_dcbnl_ieee_delapp(struct net_device *netdev, + struct dcb_app *app) +{ + struct i40e_pf *pf = i40e_netdev_to_pf(netdev); + struct 
i40e_dcbx_config *old_cfg; + int i, j, ret; + + if (!(pf->dcbx_cap & DCB_CAP_DCBX_VER_IEEE) || + (pf->dcbx_cap & DCB_CAP_DCBX_LLD_MANAGED)) + return -EINVAL; + + ret = dcb_ieee_delapp(netdev, app); + if (ret) + return ret; + + old_cfg = &pf->hw.local_dcbx_config; + /* Need one app for FW so keep it */ + if (old_cfg->numapps == 1) + return 0; + + /* Copy current config into temp */ + pf->tmp_cfg = *old_cfg; + + /* Find and reset the app */ + for (i = 1; i < pf->tmp_cfg.numapps; i++) { + if (app->selector == pf->tmp_cfg.app[i].selector && + app->protocol == pf->tmp_cfg.app[i].protocolid && + app->priority == pf->tmp_cfg.app[i].priority) { + /* Reset the app data */ + pf->tmp_cfg.app[i].selector = 0; + pf->tmp_cfg.app[i].protocolid = 0; + pf->tmp_cfg.app[i].priority = 0; + break; + } + } + + /* If the specific DCB app not found */ + if (i == pf->tmp_cfg.numapps) + return -EINVAL; + + pf->tmp_cfg.numapps--; + /* Overwrite the tmp_cfg app */ + for (j = i; j < pf->tmp_cfg.numapps; j++) + pf->tmp_cfg.app[j] = old_cfg->app[j + 1]; + + ret = i40e_hw_dcb_config(pf, &pf->tmp_cfg); + if (ret) { + dev_info(&pf->pdev->dev, + "Failed setting DCB configuration err %s aq_err %s\n", + i40e_stat_str(&pf->hw, ret), + i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); + return -EINVAL; + } + + return 0; +} + +/** + * i40e_dcbnl_getstate - Get DCB enabled state + * @netdev: the corresponding netdev + * + * Get the current DCB enabled state + **/ +static u8 i40e_dcbnl_getstate(struct net_device *netdev) +{ + struct i40e_pf *pf = i40e_netdev_to_pf(netdev); + + dev_dbg(&pf->pdev->dev, "DCB state=%d\n", + !!(pf->flags & I40E_FLAG_DCB_ENABLED)); + return !!(pf->flags & I40E_FLAG_DCB_ENABLED); +} + +/** + * i40e_dcbnl_setstate - Set DCB state + * @netdev: the corresponding netdev + * @state: enable or disable + * + * Set the DCB state + **/ +static u8 i40e_dcbnl_setstate(struct net_device *netdev, u8 state) +{ + struct i40e_pf *pf = i40e_netdev_to_pf(netdev); + int ret = I40E_DCBNL_STATUS_SUCCESS; + + if (!(pf->dcbx_cap & DCB_CAP_DCBX_VER_CEE) || + (pf->dcbx_cap & DCB_CAP_DCBX_LLD_MANAGED)) + return ret; + + dev_dbg(&pf->pdev->dev, "new state=%d current state=%d\n", + state, (pf->flags & I40E_FLAG_DCB_ENABLED) ? 
1 : 0); + /* Nothing to do */ + if (!state == !(pf->flags & I40E_FLAG_DCB_ENABLED)) + return ret; + + if (i40e_is_sw_dcb(pf)) { + if (state) { + pf->flags |= I40E_FLAG_DCB_ENABLED; + memcpy(&pf->hw.desired_dcbx_config, + &pf->hw.local_dcbx_config, + sizeof(struct i40e_dcbx_config)); + } else { + pf->flags &= ~I40E_FLAG_DCB_ENABLED; + } + } else { + /* Cannot directly manipulate FW LLDP Agent */ + ret = I40E_DCBNL_STATUS_ERROR; + } + return ret; +} + +/** + * i40e_dcbnl_set_pg_tc_cfg_tx - Set CEE PG Tx config + * @netdev: the corresponding netdev + * @tc: the corresponding traffic class + * @prio_type: the traffic priority type + * @bwg_id: the BW group id the traffic class belongs to + * @bw_pct: the BW percentage for the corresponding BWG + * @up_map: prio mapped to corresponding tc + * + * Set Tx PG settings for CEE mode + **/ +static void i40e_dcbnl_set_pg_tc_cfg_tx(struct net_device *netdev, int tc, + u8 prio_type, u8 bwg_id, u8 bw_pct, + u8 up_map) +{ + struct i40e_pf *pf = i40e_netdev_to_pf(netdev); + int i; + + if (!(pf->dcbx_cap & DCB_CAP_DCBX_VER_CEE) || + (pf->dcbx_cap & DCB_CAP_DCBX_LLD_MANAGED)) + return; + + /* LLTC not supported yet */ + if (tc >= I40E_MAX_TRAFFIC_CLASS) + return; + + /* prio_type, bwg_id and bw_pct per UP are not supported */ + + /* Use only up_map to map tc */ + for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) { + if (up_map & BIT(i)) + pf->tmp_cfg.etscfg.prioritytable[i] = tc; + } + pf->tmp_cfg.etscfg.tsatable[tc] = I40E_IEEE_TSA_ETS; + dev_dbg(&pf->pdev->dev, + "Set PG config tc=%d bwg_id=%d prio_type=%d bw_pct=%d up_map=%d\n", + tc, bwg_id, prio_type, bw_pct, up_map); +} + +/** + * i40e_dcbnl_set_pg_tc_cfg_tx - Set CEE PG Tx BW config + * @netdev: the corresponding netdev + * @pgid: the corresponding traffic class + * @bw_pct: the BW percentage for the specified traffic class + * + * Set Tx BW settings for CEE mode + **/ +static void i40e_dcbnl_set_pg_bwg_cfg_tx(struct net_device *netdev, int pgid, + u8 bw_pct) +{ + struct i40e_pf *pf = i40e_netdev_to_pf(netdev); + + if (!(pf->dcbx_cap & DCB_CAP_DCBX_VER_CEE) || + (pf->dcbx_cap & DCB_CAP_DCBX_LLD_MANAGED)) + return; + + /* LLTC not supported yet */ + if (pgid >= I40E_MAX_TRAFFIC_CLASS) + return; + + pf->tmp_cfg.etscfg.tcbwtable[pgid] = bw_pct; + dev_dbg(&pf->pdev->dev, "Set PG BW config tc=%d bw_pct=%d\n", + pgid, bw_pct); +} + +/** + * i40e_dcbnl_set_pg_tc_cfg_rx - Set CEE PG Rx config + * @netdev: the corresponding netdev + * @prio: the corresponding traffic class + * @prio_type: the traffic priority type + * @pgid: the BW group id the traffic class belongs to + * @bw_pct: the BW percentage for the corresponding BWG + * @up_map: prio mapped to corresponding tc + * + * Set Rx BW settings for CEE mode. The hardware does not support this + * so we won't allow setting of this parameter. + **/ +static void i40e_dcbnl_set_pg_tc_cfg_rx(struct net_device *netdev, + int __always_unused prio, + u8 __always_unused prio_type, + u8 __always_unused pgid, + u8 __always_unused bw_pct, + u8 __always_unused up_map) +{ + struct i40e_pf *pf = i40e_netdev_to_pf(netdev); + + dev_dbg(&pf->pdev->dev, "Rx TC PG Config Not Supported.\n"); +} + +/** + * i40e_dcbnl_set_pg_bwg_cfg_rx - Set CEE PG Rx config + * @netdev: the corresponding netdev + * @pgid: the corresponding traffic class + * @bw_pct: the BW percentage for the specified traffic class + * + * Set Rx BW settings for CEE mode. The hardware does not support this + * so we won't allow setting of this parameter. 
+ **/ +static void i40e_dcbnl_set_pg_bwg_cfg_rx(struct net_device *netdev, int pgid, + u8 bw_pct) +{ + struct i40e_pf *pf = i40e_netdev_to_pf(netdev); + + dev_dbg(&pf->pdev->dev, "Rx BWG PG Config Not Supported.\n"); +} + +/** + * i40e_dcbnl_get_pg_tc_cfg_tx - Get CEE PG Tx config + * @netdev: the corresponding netdev + * @prio: the corresponding user priority + * @prio_type: traffic priority type + * @pgid: the BW group ID the traffic class belongs to + * @bw_pct: BW percentage for the corresponding BWG + * @up_map: prio mapped to corresponding TC + * + * Get Tx PG settings for CEE mode + **/ +static void i40e_dcbnl_get_pg_tc_cfg_tx(struct net_device *netdev, int prio, + u8 __always_unused *prio_type, + u8 *pgid, + u8 __always_unused *bw_pct, + u8 __always_unused *up_map) +{ + struct i40e_pf *pf = i40e_netdev_to_pf(netdev); + + if (!(pf->dcbx_cap & DCB_CAP_DCBX_VER_CEE) || + (pf->dcbx_cap & DCB_CAP_DCBX_LLD_MANAGED)) + return; + + if (prio >= I40E_MAX_USER_PRIORITY) + return; + + *pgid = pf->hw.local_dcbx_config.etscfg.prioritytable[prio]; + dev_dbg(&pf->pdev->dev, "Get PG config prio=%d tc=%d\n", + prio, *pgid); +} + +/** + * i40e_dcbnl_get_pg_bwg_cfg_tx - Get CEE PG BW config + * @netdev: the corresponding netdev + * @pgid: the corresponding traffic class + * @bw_pct: the BW percentage for the corresponding TC + * + * Get Tx BW settings for given TC in CEE mode + **/ +static void i40e_dcbnl_get_pg_bwg_cfg_tx(struct net_device *netdev, int pgid, + u8 *bw_pct) +{ + struct i40e_pf *pf = i40e_netdev_to_pf(netdev); + + if (!(pf->dcbx_cap & DCB_CAP_DCBX_VER_CEE) || + (pf->dcbx_cap & DCB_CAP_DCBX_LLD_MANAGED)) + return; + + if (pgid >= I40E_MAX_TRAFFIC_CLASS) + return; + + *bw_pct = pf->hw.local_dcbx_config.etscfg.tcbwtable[pgid]; + dev_dbg(&pf->pdev->dev, "Get PG BW config tc=%d bw_pct=%d\n", + pgid, *bw_pct); +} + +/** + * i40e_dcbnl_get_pg_tc_cfg_rx - Get CEE PG Rx config + * @netdev: the corresponding netdev + * @prio: the corresponding user priority + * @prio_type: the traffic priority type + * @pgid: the PG ID + * @bw_pct: the BW percentage for the corresponding BWG + * @up_map: prio mapped to corresponding TC + * + * Get Rx PG settings for CEE mode. The UP2TC map is applied in same + * manner for Tx and Rx (symmetrical) so return the TC information for + * given priority accordingly. + **/ +static void i40e_dcbnl_get_pg_tc_cfg_rx(struct net_device *netdev, int prio, + u8 *prio_type, u8 *pgid, u8 *bw_pct, + u8 *up_map) +{ + struct i40e_pf *pf = i40e_netdev_to_pf(netdev); + + if (!(pf->dcbx_cap & DCB_CAP_DCBX_VER_CEE) || + (pf->dcbx_cap & DCB_CAP_DCBX_LLD_MANAGED)) + return; + + if (prio >= I40E_MAX_USER_PRIORITY) + return; + + *pgid = pf->hw.local_dcbx_config.etscfg.prioritytable[prio]; +} + +/** + * i40e_dcbnl_get_pg_bwg_cfg_rx - Get CEE PG BW Rx config + * @netdev: the corresponding netdev + * @pgid: the corresponding traffic class + * @bw_pct: the BW percentage for the corresponding TC + * + * Get Rx BW settings for given TC in CEE mode + * The adapter doesn't support Rx ETS and runs in strict priority + * mode in Rx path and hence just return 0. 
+ **/ +static void i40e_dcbnl_get_pg_bwg_cfg_rx(struct net_device *netdev, int pgid, + u8 *bw_pct) +{ + struct i40e_pf *pf = i40e_netdev_to_pf(netdev); + + if (!(pf->dcbx_cap & DCB_CAP_DCBX_VER_CEE) || + (pf->dcbx_cap & DCB_CAP_DCBX_LLD_MANAGED)) + return; + *bw_pct = 0; +} + +/** + * i40e_dcbnl_set_pfc_cfg - Set CEE PFC configuration + * @netdev: the corresponding netdev + * @prio: the corresponding user priority + * @setting: the PFC setting for given priority + * + * Set the PFC enabled/disabled setting for given user priority + **/ +static void i40e_dcbnl_set_pfc_cfg(struct net_device *netdev, int prio, + u8 setting) +{ + struct i40e_pf *pf = i40e_netdev_to_pf(netdev); + + if (!(pf->dcbx_cap & DCB_CAP_DCBX_VER_CEE) || + (pf->dcbx_cap & DCB_CAP_DCBX_LLD_MANAGED)) + return; + + if (prio >= I40E_MAX_USER_PRIORITY) + return; + + pf->tmp_cfg.pfc.pfccap = I40E_MAX_TRAFFIC_CLASS; + if (setting) + pf->tmp_cfg.pfc.pfcenable |= BIT(prio); + else + pf->tmp_cfg.pfc.pfcenable &= ~BIT(prio); + dev_dbg(&pf->pdev->dev, + "Set PFC Config up=%d setting=%d pfcenable=0x%x\n", + prio, setting, pf->tmp_cfg.pfc.pfcenable); +} + +/** + * i40e_dcbnl_get_pfc_cfg - Get CEE PFC configuration + * @netdev: the corresponding netdev + * @prio: the corresponding user priority + * @setting: the PFC setting for given priority + * + * Get the PFC enabled/disabled setting for given user priority + **/ +static void i40e_dcbnl_get_pfc_cfg(struct net_device *netdev, int prio, + u8 *setting) +{ + struct i40e_pf *pf = i40e_netdev_to_pf(netdev); + + if (!(pf->dcbx_cap & DCB_CAP_DCBX_VER_CEE) || + (pf->dcbx_cap & DCB_CAP_DCBX_LLD_MANAGED)) + return; + + if (prio >= I40E_MAX_USER_PRIORITY) + return; + + *setting = (pf->hw.local_dcbx_config.pfc.pfcenable >> prio) & 0x1; + dev_dbg(&pf->pdev->dev, + "Get PFC Config up=%d setting=%d pfcenable=0x%x\n", + prio, *setting, pf->hw.local_dcbx_config.pfc.pfcenable); +} + +/** + * i40e_dcbnl_cee_set_all - Commit CEE DCB settings to hardware + * @netdev: the corresponding netdev + * + * Commit the current DCB configuration to hardware + **/ +static u8 i40e_dcbnl_cee_set_all(struct net_device *netdev) +{ + struct i40e_pf *pf = i40e_netdev_to_pf(netdev); + int err; + + if (!(pf->dcbx_cap & DCB_CAP_DCBX_VER_CEE) || + (pf->dcbx_cap & DCB_CAP_DCBX_LLD_MANAGED)) + return I40E_DCBNL_STATUS_ERROR; + + dev_dbg(&pf->pdev->dev, "Commit DCB Configuration to the hardware\n"); + err = i40e_hw_dcb_config(pf, &pf->tmp_cfg); + + return err ? 
I40E_DCBNL_STATUS_ERROR : I40E_DCBNL_STATUS_SUCCESS; +} + +/** + * i40e_dcbnl_get_cap - Get DCBX capabilities of adapter + * @netdev: the corresponding netdev + * @capid: the capability type + * @cap: the capability value + * + * Return the capability value for a given capability type + **/ +static u8 i40e_dcbnl_get_cap(struct net_device *netdev, int capid, u8 *cap) +{ + struct i40e_pf *pf = i40e_netdev_to_pf(netdev); + + if (!(pf->flags & I40E_FLAG_DCB_CAPABLE)) + return I40E_DCBNL_STATUS_ERROR; + + switch (capid) { + case DCB_CAP_ATTR_PG: + case DCB_CAP_ATTR_PFC: + *cap = true; + break; + case DCB_CAP_ATTR_PG_TCS: + case DCB_CAP_ATTR_PFC_TCS: + *cap = 0x80; + break; + case DCB_CAP_ATTR_DCBX: + *cap = pf->dcbx_cap; + break; + case DCB_CAP_ATTR_UP2TC: + case DCB_CAP_ATTR_GSP: + case DCB_CAP_ATTR_BCN: + default: + *cap = false; + break; + } + + dev_dbg(&pf->pdev->dev, "Get Capability cap=%d capval=0x%x\n", + capid, *cap); + return I40E_DCBNL_STATUS_SUCCESS; +} + +/** + * i40e_dcbnl_getnumtcs - Get max number of traffic classes supported + * @netdev: the corresponding netdev + * @tcid: the TC id + * @num: total number of TCs supported by the device + * + * Return the total number of TCs supported by the adapter + **/ +static int i40e_dcbnl_getnumtcs(struct net_device *netdev, int tcid, u8 *num) +{ + struct i40e_pf *pf = i40e_netdev_to_pf(netdev); + + if (!(pf->flags & I40E_FLAG_DCB_CAPABLE)) + return -EINVAL; + + *num = I40E_MAX_TRAFFIC_CLASS; + return 0; +} + +/** + * i40e_dcbnl_setnumtcs - Set CEE number of traffic classes + * @netdev: the corresponding netdev + * @tcid: the TC id + * @num: total number of TCs + * + * Set the total number of TCs (Unsupported) + **/ +static int i40e_dcbnl_setnumtcs(struct net_device *netdev, int tcid, u8 num) +{ + return -EINVAL; +} + +/** + * i40e_dcbnl_getpfcstate - Get CEE PFC mode + * @netdev: the corresponding netdev + * + * Get the current PFC enabled state + **/ +static u8 i40e_dcbnl_getpfcstate(struct net_device *netdev) +{ + struct i40e_pf *pf = i40e_netdev_to_pf(netdev); + + /* Return enabled if any PFC enabled UP */ + if (pf->hw.local_dcbx_config.pfc.pfcenable) + return 1; + else + return 0; +} + +/** + * i40e_dcbnl_setpfcstate - Set CEE PFC mode + * @netdev: the corresponding netdev + * @state: required state + * + * The PFC state to be set; this is enabled/disabled based on the PFC + * priority settings and not via this call for i40e driver + **/ +static void i40e_dcbnl_setpfcstate(struct net_device *netdev, u8 state) +{ + struct i40e_pf *pf = i40e_netdev_to_pf(netdev); + + dev_dbg(&pf->pdev->dev, "PFC State is modified via PFC config.\n"); +} + +/** + * i40e_dcbnl_getapp - Get CEE APP + * @netdev: the corresponding netdev + * @idtype: the App selector + * @id: the App ethtype or port number + * + * Return the CEE mode app for the given idtype and id + **/ +static int i40e_dcbnl_getapp(struct net_device *netdev, u8 idtype, u16 id) +{ + struct i40e_pf *pf = i40e_netdev_to_pf(netdev); + struct dcb_app app = { + .selector = idtype, + .protocol = id, + }; + + if (!(pf->dcbx_cap & DCB_CAP_DCBX_VER_CEE) || + (pf->dcbx_cap & DCB_CAP_DCBX_LLD_MANAGED)) + return -EINVAL; + + return dcb_getapp(netdev, &app); +} + +/** + * i40e_dcbnl_setdcbx - set required DCBx capability + * @netdev: the corresponding netdev + * @mode: new DCB mode managed or CEE+IEEE + * + * Set DCBx capability features + **/ +static u8 i40e_dcbnl_setdcbx(struct net_device *netdev, u8 mode) +{ + struct i40e_pf *pf = i40e_netdev_to_pf(netdev); + + /* Do not allow to set mode if managed 
by Firmware */ + if (pf->dcbx_cap & DCB_CAP_DCBX_LLD_MANAGED) + return I40E_DCBNL_STATUS_ERROR; + + /* No support for LLD_MANAGED modes or CEE+IEEE */ + if ((mode & DCB_CAP_DCBX_LLD_MANAGED) || + ((mode & DCB_CAP_DCBX_VER_IEEE) && (mode & DCB_CAP_DCBX_VER_CEE)) || + !(mode & DCB_CAP_DCBX_HOST)) + return I40E_DCBNL_STATUS_ERROR; + + /* Already set to the given mode no change */ + if (mode == pf->dcbx_cap) + return I40E_DCBNL_STATUS_SUCCESS; + + pf->dcbx_cap = mode; + if (mode & DCB_CAP_DCBX_VER_CEE) + pf->hw.local_dcbx_config.dcbx_mode = I40E_DCBX_MODE_CEE; + else + pf->hw.local_dcbx_config.dcbx_mode = I40E_DCBX_MODE_IEEE; + + dev_dbg(&pf->pdev->dev, "mode=%d\n", mode); + return I40E_DCBNL_STATUS_SUCCESS; +} + /** * i40e_dcbnl_getdcbx - retrieve current DCBx capability * @dev: the corresponding netdev @@ -132,7 +842,31 @@ static const struct dcbnl_rtnl_ops dcbnl_ops = { .ieee_getets = i40e_dcbnl_ieee_getets, .ieee_getpfc = i40e_dcbnl_ieee_getpfc, .getdcbx = i40e_dcbnl_getdcbx, - .getpermhwaddr = i40e_dcbnl_get_perm_hw_addr, + .getpermhwaddr = i40e_dcbnl_get_perm_hw_addr, + .ieee_setets = i40e_dcbnl_ieee_setets, + .ieee_setpfc = i40e_dcbnl_ieee_setpfc, + .ieee_setapp = i40e_dcbnl_ieee_setapp, + .ieee_delapp = i40e_dcbnl_ieee_delapp, + .getstate = i40e_dcbnl_getstate, + .setstate = i40e_dcbnl_setstate, + .setpgtccfgtx = i40e_dcbnl_set_pg_tc_cfg_tx, + .setpgbwgcfgtx = i40e_dcbnl_set_pg_bwg_cfg_tx, + .setpgtccfgrx = i40e_dcbnl_set_pg_tc_cfg_rx, + .setpgbwgcfgrx = i40e_dcbnl_set_pg_bwg_cfg_rx, + .getpgtccfgtx = i40e_dcbnl_get_pg_tc_cfg_tx, + .getpgbwgcfgtx = i40e_dcbnl_get_pg_bwg_cfg_tx, + .getpgtccfgrx = i40e_dcbnl_get_pg_tc_cfg_rx, + .getpgbwgcfgrx = i40e_dcbnl_get_pg_bwg_cfg_rx, + .setpfccfg = i40e_dcbnl_set_pfc_cfg, + .getpfccfg = i40e_dcbnl_get_pfc_cfg, + .setall = i40e_dcbnl_cee_set_all, + .getcap = i40e_dcbnl_get_cap, + .getnumtcs = i40e_dcbnl_getnumtcs, + .setnumtcs = i40e_dcbnl_setnumtcs, + .getpfcstate = i40e_dcbnl_getpfcstate, + .setpfcstate = i40e_dcbnl_setpfcstate, + .getapp = i40e_dcbnl_getapp, + .setdcbx = i40e_dcbnl_setdcbx, }; /** @@ -152,12 +886,16 @@ void i40e_dcbnl_set_all(struct i40e_vsi *vsi) u8 prio, tc_map; int i; + /* SW DCB taken care by DCBNL set calls */ + if (pf->dcbx_cap & DCB_CAP_DCBX_HOST) + return; + /* DCB not enabled */ if (!(pf->flags & I40E_FLAG_DCB_ENABLED)) return; /* MFP mode but not an iSCSI PF so return */ - if ((pf->flags & I40E_FLAG_MFP_ENABLED) && !(pf->hw.func_caps.iscsi)) + if ((pf->flags & I40E_FLAG_MFP_ENABLED) && !(hw->func_caps.iscsi)) return; dcbxcfg = &hw->local_dcbx_config;