From patchwork Thu Jun 6 11:55:15 2019
X-Patchwork-Submitter: "Jubran, Samih"
X-Patchwork-Id: 1111078
X-Patchwork-Delegate: davem@davemloft.net
Subject: [PATCH V1 net-next 1/6] net: ena: add MAX_QUEUES_EXT get feature admin command
Date: Thu, 6 Jun 2019 14:55:15 +0300
Message-ID: <20190606115520.20394-2-sameehj@amazon.com>
In-Reply-To: <20190606115520.20394-1-sameehj@amazon.com>
References: <20190606115520.20394-1-sameehj@amazon.com>
X-Mailing-List: netdev@vger.kernel.org

From: Arthur Kiyanovski

Add a new admin command to support different queue sizes for Tx/Rx
queues (the change also supports different SQ/CQ sizes).

Signed-off-by: Arthur Kiyanovski
Signed-off-by: Sameeh
Jubran --- .../net/ethernet/amazon/ena/ena_admin_defs.h | 56 +++++++++++++- drivers/net/ethernet/amazon/ena/ena_com.c | 76 ++++++++++++------- drivers/net/ethernet/amazon/ena/ena_com.h | 3 + 3 files changed, 105 insertions(+), 30 deletions(-) diff --git a/drivers/net/ethernet/amazon/ena/ena_admin_defs.h b/drivers/net/ethernet/amazon/ena/ena_admin_defs.h index 414bae989..c8638f7b5 100644 --- a/drivers/net/ethernet/amazon/ena/ena_admin_defs.h +++ b/drivers/net/ethernet/amazon/ena/ena_admin_defs.h @@ -64,6 +64,7 @@ enum ena_admin_aq_feature_id { ENA_ADMIN_LLQ = 4, ENA_ADMIN_EXTRA_PROPERTIES_STRINGS = 5, ENA_ADMIN_EXTRA_PROPERTIES_FLAGS = 6, + ENA_ADMIN_MAX_QUEUES_EXT = 7, ENA_ADMIN_RSS_HASH_FUNCTION = 10, ENA_ADMIN_STATELESS_OFFLOAD_CONFIG = 11, ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG = 12, @@ -425,7 +426,13 @@ struct ena_admin_get_set_feature_common_desc { /* as appears in ena_admin_aq_feature_id */ u8 feature_id; - u16 reserved16; + /* The driver specifies the max feature version it supports and the + * device responds with the currently supported feature version. The + * field is zero based + */ + u8 feature_version; + + u8 reserved8; }; struct ena_admin_device_attr_feature_desc { @@ -535,6 +542,34 @@ struct ena_admin_feature_llq_desc { u32 max_tx_burst_size; }; +struct ena_admin_queue_ext_feature_fields { + u32 max_tx_sq_num; + + u32 max_tx_cq_num; + + u32 max_rx_sq_num; + + u32 max_rx_cq_num; + + u32 max_tx_sq_depth; + + u32 max_tx_cq_depth; + + u32 max_rx_sq_depth; + + u32 max_rx_cq_depth; + + u32 max_tx_header_size; + + /* Maximum Descriptors number, including meta descriptor, allowed for + * a single Tx packet + */ + u16 max_per_packet_tx_descs; + + /* Maximum Descriptors number allowed for a single Rx packet */ + u16 max_per_packet_rx_descs; +}; + struct ena_admin_queue_feature_desc { u32 max_sq_num; @@ -849,6 +884,19 @@ struct ena_admin_get_feat_cmd { u32 raw[11]; }; +struct ena_admin_queue_ext_feature_desc { + /* version */ + u8 version; + + u8 reserved1[3]; + + union { + struct ena_admin_queue_ext_feature_fields max_queue_ext; + + u32 raw[10]; + }; +}; + struct ena_admin_get_feat_resp { struct ena_admin_acq_common_desc acq_common_desc; @@ -861,6 +909,8 @@ struct ena_admin_get_feat_resp { struct ena_admin_queue_feature_desc max_queue; + struct ena_admin_queue_ext_feature_desc max_queue_ext; + struct ena_admin_feature_aenq_desc aenq; struct ena_admin_get_feature_link_desc link; @@ -929,7 +979,9 @@ struct ena_admin_aenq_common_desc { u16 syndrom; - /* 0 : phase */ + /* 0 : phase + * 7:1 : reserved - MBZ + */ u8 flags; u8 reserved1[3]; diff --git a/drivers/net/ethernet/amazon/ena/ena_com.c b/drivers/net/ethernet/amazon/ena/ena_com.c index dbc12e383..9efb4be97 100644 --- a/drivers/net/ethernet/amazon/ena/ena_com.c +++ b/drivers/net/ethernet/amazon/ena/ena_com.c @@ -978,7 +978,8 @@ static int ena_com_get_feature_ex(struct ena_com_dev *ena_dev, struct ena_admin_get_feat_resp *get_resp, enum ena_admin_aq_feature_id feature_id, dma_addr_t control_buf_dma_addr, - u32 control_buff_size) + u32 control_buff_size, + u8 feature_ver) { struct ena_com_admin_queue *admin_queue; struct ena_admin_get_feat_cmd get_cmd; @@ -1009,7 +1010,7 @@ static int ena_com_get_feature_ex(struct ena_com_dev *ena_dev, } get_cmd.control_buffer.length = control_buff_size; - + get_cmd.feat_common.feature_version = feature_ver; get_cmd.feat_common.feature_id = feature_id; ret = ena_com_execute_admin_command(admin_queue, @@ -1029,13 +1030,15 @@ static int ena_com_get_feature_ex(struct ena_com_dev *ena_dev, static int 
ena_com_get_feature(struct ena_com_dev *ena_dev, struct ena_admin_get_feat_resp *get_resp, - enum ena_admin_aq_feature_id feature_id) + enum ena_admin_aq_feature_id feature_id, + u8 feature_ver) { return ena_com_get_feature_ex(ena_dev, get_resp, feature_id, 0, - 0); + 0, + feature_ver); } static int ena_com_hash_key_allocate(struct ena_com_dev *ena_dev) @@ -1095,7 +1098,7 @@ static int ena_com_indirect_table_allocate(struct ena_com_dev *ena_dev, int ret; ret = ena_com_get_feature(ena_dev, &get_resp, - ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG); + ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG, 0); if (unlikely(ret)) return ret; @@ -1515,7 +1518,7 @@ int ena_com_set_aenq_config(struct ena_com_dev *ena_dev, u32 groups_flag) struct ena_admin_get_feat_resp get_resp; int ret; - ret = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_AENQ_CONFIG); + ret = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_AENQ_CONFIG, 0); if (ret) { pr_info("Can't get aenq configuration\n"); return ret; @@ -1890,7 +1893,7 @@ void ena_com_destroy_io_queue(struct ena_com_dev *ena_dev, u16 qid) int ena_com_get_link_params(struct ena_com_dev *ena_dev, struct ena_admin_get_feat_resp *resp) { - return ena_com_get_feature(ena_dev, resp, ENA_ADMIN_LINK_CONFIG); + return ena_com_get_feature(ena_dev, resp, ENA_ADMIN_LINK_CONFIG, 0); } int ena_com_extra_properties_strings_init(struct ena_com_dev *ena_dev) @@ -1916,7 +1919,7 @@ int ena_com_extra_properties_strings_init(struct ena_com_dev *ena_dev) rc = ena_com_get_feature_ex(ena_dev, &resp, ENA_ADMIN_EXTRA_PROPERTIES_STRINGS, extra_properties_strings->dma_addr, - extra_properties_strings->size); + extra_properties_strings->size, 0); if (rc) { pr_debug("Failed to get extra properties strings\n"); goto err; @@ -1946,7 +1949,7 @@ int ena_com_get_extra_properties_flags(struct ena_com_dev *ena_dev, struct ena_admin_get_feat_resp *resp) { return ena_com_get_feature(ena_dev, resp, - ENA_ADMIN_EXTRA_PROPERTIES_FLAGS); + ENA_ADMIN_EXTRA_PROPERTIES_FLAGS, 0); } int ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev, @@ -1956,7 +1959,7 @@ int ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev, int rc; rc = ena_com_get_feature(ena_dev, &get_resp, - ENA_ADMIN_DEVICE_ATTRIBUTES); + ENA_ADMIN_DEVICE_ATTRIBUTES, 0); if (rc) return rc; @@ -1964,17 +1967,34 @@ int ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev, sizeof(get_resp.u.dev_attr)); ena_dev->supported_features = get_resp.u.dev_attr.supported_features; - rc = ena_com_get_feature(ena_dev, &get_resp, - ENA_ADMIN_MAX_QUEUES_NUM); - if (rc) - return rc; + if (ena_dev->supported_features & BIT(ENA_ADMIN_MAX_QUEUES_EXT)) { + rc = ena_com_get_feature(ena_dev, &get_resp, + ENA_ADMIN_MAX_QUEUES_EXT, + ENA_FEATURE_MAX_QUEUE_EXT_VER); + if (rc) + return rc; - memcpy(&get_feat_ctx->max_queues, &get_resp.u.max_queue, - sizeof(get_resp.u.max_queue)); - ena_dev->tx_max_header_size = get_resp.u.max_queue.max_header_size; + if (get_resp.u.max_queue_ext.version != ENA_FEATURE_MAX_QUEUE_EXT_VER) + return -EINVAL; + + memcpy(&get_feat_ctx->max_queue_ext, &get_resp.u.max_queue_ext, + sizeof(get_resp.u.max_queue_ext)); + ena_dev->tx_max_header_size = + get_resp.u.max_queue_ext.max_queue_ext.max_tx_header_size; + } else { + rc = ena_com_get_feature(ena_dev, &get_resp, + ENA_ADMIN_MAX_QUEUES_NUM, 0); + memcpy(&get_feat_ctx->max_queues, &get_resp.u.max_queue, + sizeof(get_resp.u.max_queue)); + ena_dev->tx_max_header_size = + get_resp.u.max_queue.max_header_size; + + if (rc) + return rc; + } rc = ena_com_get_feature(ena_dev, &get_resp, - 
ENA_ADMIN_AENQ_CONFIG); + ENA_ADMIN_AENQ_CONFIG, 0); if (rc) return rc; @@ -1982,7 +2002,7 @@ int ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev, sizeof(get_resp.u.aenq)); rc = ena_com_get_feature(ena_dev, &get_resp, - ENA_ADMIN_STATELESS_OFFLOAD_CONFIG); + ENA_ADMIN_STATELESS_OFFLOAD_CONFIG, 0); if (rc) return rc; @@ -1992,7 +2012,7 @@ int ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev, /* Driver hints isn't mandatory admin command. So in case the * command isn't supported set driver hints to 0 */ - rc = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_HW_HINTS); + rc = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_HW_HINTS, 0); if (!rc) memcpy(&get_feat_ctx->hw_hints, &get_resp.u.hw_hints, @@ -2003,7 +2023,7 @@ int ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev, else return rc; - rc = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_LLQ); + rc = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_LLQ, 0); if (!rc) memcpy(&get_feat_ctx->llq, &get_resp.u.llq, sizeof(get_resp.u.llq)); @@ -2240,7 +2260,7 @@ int ena_com_get_offload_settings(struct ena_com_dev *ena_dev, struct ena_admin_get_feat_resp resp; ret = ena_com_get_feature(ena_dev, &resp, - ENA_ADMIN_STATELESS_OFFLOAD_CONFIG); + ENA_ADMIN_STATELESS_OFFLOAD_CONFIG, 0); if (unlikely(ret)) { pr_err("Failed to get offload capabilities %d\n", ret); return ret; @@ -2269,7 +2289,7 @@ int ena_com_set_hash_function(struct ena_com_dev *ena_dev) /* Validate hash function is supported */ ret = ena_com_get_feature(ena_dev, &get_resp, - ENA_ADMIN_RSS_HASH_FUNCTION); + ENA_ADMIN_RSS_HASH_FUNCTION, 0); if (unlikely(ret)) return ret; @@ -2329,7 +2349,7 @@ int ena_com_fill_hash_function(struct ena_com_dev *ena_dev, rc = ena_com_get_feature_ex(ena_dev, &get_resp, ENA_ADMIN_RSS_HASH_FUNCTION, rss->hash_key_dma_addr, - sizeof(*rss->hash_key)); + sizeof(*rss->hash_key), 0); if (unlikely(rc)) return rc; @@ -2381,7 +2401,7 @@ int ena_com_get_hash_function(struct ena_com_dev *ena_dev, rc = ena_com_get_feature_ex(ena_dev, &get_resp, ENA_ADMIN_RSS_HASH_FUNCTION, rss->hash_key_dma_addr, - sizeof(*rss->hash_key)); + sizeof(*rss->hash_key), 0); if (unlikely(rc)) return rc; @@ -2406,7 +2426,7 @@ int ena_com_get_hash_ctrl(struct ena_com_dev *ena_dev, rc = ena_com_get_feature_ex(ena_dev, &get_resp, ENA_ADMIN_RSS_HASH_INPUT, rss->hash_ctrl_dma_addr, - sizeof(*rss->hash_ctrl)); + sizeof(*rss->hash_ctrl), 0); if (unlikely(rc)) return rc; @@ -2642,7 +2662,7 @@ int ena_com_indirect_table_get(struct ena_com_dev *ena_dev, u32 *ind_tbl) rc = ena_com_get_feature_ex(ena_dev, &get_resp, ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG, rss->rss_ind_tbl_dma_addr, - tbl_size); + tbl_size, 0); if (unlikely(rc)) return rc; @@ -2857,7 +2877,7 @@ int ena_com_init_interrupt_moderation(struct ena_com_dev *ena_dev) int rc; rc = ena_com_get_feature(ena_dev, &get_resp, - ENA_ADMIN_INTERRUPT_MODERATION); + ENA_ADMIN_INTERRUPT_MODERATION, 0); if (rc) { if (rc == -EOPNOTSUPP) { diff --git a/drivers/net/ethernet/amazon/ena/ena_com.h b/drivers/net/ethernet/amazon/ena/ena_com.h index 6d356cb05..4700d92a3 100644 --- a/drivers/net/ethernet/amazon/ena/ena_com.h +++ b/drivers/net/ethernet/amazon/ena/ena_com.h @@ -101,6 +101,8 @@ #define ENA_HW_HINTS_NO_TIMEOUT 0xFFFF +#define ENA_FEATURE_MAX_QUEUE_EXT_VER 1 + enum ena_intr_moder_level { ENA_INTR_MODER_LOWEST = 0, ENA_INTR_MODER_LOW, @@ -389,6 +391,7 @@ struct ena_com_dev { struct ena_com_dev_get_features_ctx { struct ena_admin_queue_feature_desc max_queues; + struct ena_admin_queue_ext_feature_desc max_queue_ext; struct 
ena_admin_device_attr_feature_desc dev_attr;
 	struct ena_admin_feature_aenq_desc aenq;
 	struct ena_admin_feature_offload_desc offload;

From patchwork Thu Jun 6 11:55:16 2019
X-Patchwork-Submitter: "Jubran, Samih"
X-Patchwork-Id: 1111079
X-Patchwork-Delegate: davem@davemloft.net
Subject: [PATCH V1 net-next 2/6] net: ena: enable negotiating larger Rx ring size
Date: Thu, 6 Jun 2019 14:55:16 +0300
Message-ID: <20190606115520.20394-3-sameehj@amazon.com>
In-Reply-To: <20190606115520.20394-1-sameehj@amazon.com>
References: <20190606115520.20394-1-sameehj@amazon.com>
X-Mailing-List: netdev@vger.kernel.org

From: Sameeh Jubran

Use the MAX_QUEUES_EXT get-feature capability to query the device.
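To make the negotiation concrete, here is a small standalone sketch
(userspace C, not the driver itself) of how the extended Rx depth limits
are combined into a usable ring size, following the min-then-round-down
logic in the diff below. The struct is a trimmed stand-in for
ena_admin_queue_ext_feature_fields, rounddown_pow2() is a local
re-implementation of the kernel helper, and all values are invented.

```c
#include <stdint.h>
#include <stdio.h>

/* trimmed stand-in for ena_admin_queue_ext_feature_fields */
struct queue_ext_limits {
	uint32_t max_rx_sq_depth;
	uint32_t max_rx_cq_depth;
};

/* local stand-in for the kernel's rounddown_pow_of_two() */
static uint32_t rounddown_pow2(uint32_t n)
{
	uint32_t p = 1;

	while (p <= n / 2)
		p <<= 1;
	return n ? p : 0;
}

static uint32_t max_rx_ring_size(const struct queue_ext_limits *q)
{
	/* an Rx ring is bounded by both its SQ and its CQ depth limit */
	uint32_t depth = q->max_rx_sq_depth < q->max_rx_cq_depth ?
			 q->max_rx_sq_depth : q->max_rx_cq_depth;

	/* ring sizes are kept as powers of two */
	return rounddown_pow2(depth);
}

int main(void)
{
	struct queue_ext_limits q = { .max_rx_sq_depth = 8192,
				      .max_rx_cq_depth = 3000 };

	printf("max Rx ring size: %u\n", max_rx_ring_size(&q)); /* 2048 */
	return 0;
}
```

In the diff below the driver starts from ENA_DEFAULT_RING_SIZE and clamps
it by these maximums, and for Tx it also caps the maximum by the LLQ depth
when LLQ placement is used.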
Signed-off-by: Netanel Belgazal Signed-off-by: Sameeh Jubran --- drivers/net/ethernet/amazon/ena/ena_netdev.c | 144 ++++++++++++------- drivers/net/ethernet/amazon/ena/ena_netdev.h | 15 ++ 2 files changed, 110 insertions(+), 49 deletions(-) diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c index 68bed2417..ba5d580e5 100644 --- a/drivers/net/ethernet/amazon/ena/ena_netdev.c +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c @@ -2465,13 +2465,6 @@ static int ena_device_validate_params(struct ena_adapter *adapter, return -EINVAL; } - if ((get_feat_ctx->max_queues.max_cq_num < adapter->num_queues) || - (get_feat_ctx->max_queues.max_sq_num < adapter->num_queues)) { - netif_err(adapter, drv, netdev, - "Error, device doesn't support enough queues\n"); - return -EINVAL; - } - if (get_feat_ctx->dev_attr.max_mtu < netdev->mtu) { netif_err(adapter, drv, netdev, "Error, device max mtu is smaller than netdev MTU\n"); @@ -3045,18 +3038,32 @@ static int ena_calc_io_queue_num(struct pci_dev *pdev, struct ena_com_dev *ena_dev, struct ena_com_dev_get_features_ctx *get_feat_ctx) { - int io_sq_num, io_queue_num; + int io_tx_sq_num, io_tx_cq_num, io_rx_num, io_queue_num; + + if (ena_dev->supported_features & BIT(ENA_ADMIN_MAX_QUEUES_EXT)) { + struct ena_admin_queue_ext_feature_fields *max_queue_ext = + &get_feat_ctx->max_queue_ext.max_queue_ext; + io_rx_num = min_t(int, max_queue_ext->max_rx_sq_num, + max_queue_ext->max_rx_cq_num); - /* In case of LLQ use the llq number in the get feature cmd */ + io_tx_sq_num = max_queue_ext->max_tx_sq_num; + io_tx_cq_num = max_queue_ext->max_tx_cq_num; + } else { + struct ena_admin_queue_feature_desc *max_queues = + &get_feat_ctx->max_queues; + io_tx_sq_num = max_queues->max_sq_num; + io_tx_cq_num = max_queues->max_cq_num; + io_rx_num = min_t(int, io_tx_sq_num, io_tx_cq_num); + } + + /* In case of LLQ use the llq fields for the tx SQ/CQ */ if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) - io_sq_num = get_feat_ctx->llq.max_llq_num; - else - io_sq_num = get_feat_ctx->max_queues.max_sq_num; + io_tx_sq_num = get_feat_ctx->llq.max_llq_num; io_queue_num = min_t(int, num_online_cpus(), ENA_MAX_NUM_IO_QUEUES); - io_queue_num = min_t(int, io_queue_num, io_sq_num); - io_queue_num = min_t(int, io_queue_num, - get_feat_ctx->max_queues.max_cq_num); + io_queue_num = min_t(int, io_queue_num, io_rx_num); + io_queue_num = min_t(int, io_queue_num, io_tx_sq_num); + io_queue_num = min_t(int, io_queue_num, io_tx_cq_num); /* 1 IRQ for for mgmnt and 1 IRQs for each IO direction */ io_queue_num = min_t(int, io_queue_num, pci_msix_vec_count(pdev) - 1); if (unlikely(!io_queue_num)) { @@ -3239,36 +3246,73 @@ static inline void set_default_llq_configurations(struct ena_llq_configurations llq_config->llq_ring_entry_size_value = 128; } -static int ena_calc_queue_size(struct pci_dev *pdev, - struct ena_com_dev *ena_dev, - u16 *max_tx_sgl_size, - u16 *max_rx_sgl_size, - struct ena_com_dev_get_features_ctx *get_feat_ctx) +static int ena_calc_queue_size(struct ena_calc_queue_size_ctx *ctx) { - u32 queue_size = ENA_DEFAULT_RING_SIZE; + struct ena_admin_feature_llq_desc *llq = &ctx->get_feat_ctx->llq; + struct ena_com_dev *ena_dev = ctx->ena_dev; + u32 tx_queue_size = ENA_DEFAULT_RING_SIZE; + u32 rx_queue_size = ENA_DEFAULT_RING_SIZE; + u32 max_tx_queue_size; + u32 max_rx_queue_size; - queue_size = min_t(u32, queue_size, - get_feat_ctx->max_queues.max_cq_depth); - queue_size = min_t(u32, queue_size, - 
get_feat_ctx->max_queues.max_sq_depth); + if (ctx->ena_dev->supported_features & BIT(ENA_ADMIN_MAX_QUEUES_EXT)) { + struct ena_admin_queue_ext_feature_fields *max_queue_ext = + &ctx->get_feat_ctx->max_queue_ext.max_queue_ext; + max_rx_queue_size = min_t(u32, max_queue_ext->max_rx_cq_depth, + max_queue_ext->max_rx_sq_depth); + max_tx_queue_size = max_queue_ext->max_tx_cq_depth; - if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) - queue_size = min_t(u32, queue_size, - get_feat_ctx->llq.max_llq_depth); + if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) + max_tx_queue_size = min_t(u32, max_tx_queue_size, + llq->max_llq_depth); + else + max_tx_queue_size = min_t(u32, max_tx_queue_size, + max_queue_ext->max_tx_sq_depth); - queue_size = rounddown_pow_of_two(queue_size); + ctx->max_tx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS, + max_queue_ext->max_per_packet_tx_descs); + ctx->max_rx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS, + max_queue_ext->max_per_packet_rx_descs); + } else { + struct ena_admin_queue_feature_desc *max_queues = + &ctx->get_feat_ctx->max_queues; + max_rx_queue_size = min_t(u32, max_queues->max_cq_depth, + max_queues->max_sq_depth); + max_tx_queue_size = max_queues->max_cq_depth; + + if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) + max_tx_queue_size = min_t(u32, max_tx_queue_size, + llq->max_llq_depth); + else + max_tx_queue_size = min_t(u32, max_tx_queue_size, + max_queues->max_sq_depth); + + ctx->max_tx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS, + max_queues->max_packet_tx_descs); + ctx->max_rx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS, + max_queues->max_packet_rx_descs); + } + + max_tx_queue_size = rounddown_pow_of_two(max_tx_queue_size); + max_rx_queue_size = rounddown_pow_of_two(max_rx_queue_size); - if (unlikely(!queue_size)) { - dev_err(&pdev->dev, "Invalid queue size\n"); + tx_queue_size = min_t(u32, tx_queue_size, max_tx_queue_size); + rx_queue_size = min_t(u32, rx_queue_size, max_rx_queue_size); + + tx_queue_size = rounddown_pow_of_two(tx_queue_size); + rx_queue_size = rounddown_pow_of_two(rx_queue_size); + + if (unlikely(!rx_queue_size || !tx_queue_size)) { + dev_err(&ctx->pdev->dev, "Invalid queue size\n"); return -EFAULT; } - *max_tx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS, - get_feat_ctx->max_queues.max_packet_tx_descs); - *max_rx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS, - get_feat_ctx->max_queues.max_packet_rx_descs); + ctx->max_tx_queue_size = max_tx_queue_size; + ctx->max_rx_queue_size = max_rx_queue_size; + ctx->tx_queue_size = tx_queue_size; + ctx->rx_queue_size = rx_queue_size; - return queue_size; + return 0; } /* ena_probe - Device Initialization Routine @@ -3284,6 +3328,7 @@ static int ena_calc_queue_size(struct pci_dev *pdev, static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent) { struct ena_com_dev_get_features_ctx get_feat_ctx; + struct ena_calc_queue_size_ctx calc_queue_ctx = { 0 }; struct ena_llq_configurations llq_config; struct ena_com_dev *ena_dev = NULL; struct ena_adapter *adapter; @@ -3291,9 +3336,6 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent) struct net_device *netdev; static int adapters_found; char *queue_type_str; - u16 tx_sgl_size = 0; - u16 rx_sgl_size = 0; - int queue_size; bool wd_state; dev_dbg(&pdev->dev, "%s\n", __func__); @@ -3350,20 +3392,25 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent) goto err_device_destroy; } + calc_queue_ctx.ena_dev = ena_dev; + calc_queue_ctx.get_feat_ctx = &get_feat_ctx; + 
calc_queue_ctx.pdev = pdev; + /* initial Tx interrupt delay, Assumes 1 usec granularity. * Updated during device initialization with the real granularity */ ena_dev->intr_moder_tx_interval = ENA_INTR_INITIAL_TX_INTERVAL_USECS; io_queue_num = ena_calc_io_queue_num(pdev, ena_dev, &get_feat_ctx); - queue_size = ena_calc_queue_size(pdev, ena_dev, &tx_sgl_size, - &rx_sgl_size, &get_feat_ctx); - if ((queue_size <= 0) || (io_queue_num <= 0)) { + rc = ena_calc_queue_size(&calc_queue_ctx); + if (rc || io_queue_num <= 0) { rc = -EFAULT; goto err_device_destroy; } - dev_info(&pdev->dev, "creating %d io queues. queue size: %d. LLQ is %s\n", - io_queue_num, queue_size, + dev_info(&pdev->dev, "creating %d io queues. rx queue size: %d tx queue size. %d LLQ is %s\n", + io_queue_num, + calc_queue_ctx.rx_queue_size, + calc_queue_ctx.tx_queue_size, (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) ? "ENABLED" : "DISABLED"); @@ -3389,11 +3436,10 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent) adapter->msg_enable = netif_msg_init(debug, DEFAULT_MSG_ENABLE); adapter->reset_reason = ENA_REGS_RESET_NORMAL; - adapter->tx_ring_size = queue_size; - adapter->rx_ring_size = queue_size; - - adapter->max_tx_sgl_size = tx_sgl_size; - adapter->max_rx_sgl_size = rx_sgl_size; + adapter->tx_ring_size = calc_queue_ctx.tx_queue_size; + adapter->rx_ring_size = calc_queue_ctx.rx_queue_size; + adapter->max_tx_sgl_size = calc_queue_ctx.max_tx_sgl_size; + adapter->max_rx_sgl_size = calc_queue_ctx.max_rx_sgl_size; adapter->num_queues = io_queue_num; adapter->last_monitored_tx_qid = 0; diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h index ec111cfc5..afd2769f1 100644 --- a/drivers/net/ethernet/amazon/ena/ena_netdev.h +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h @@ -154,6 +154,18 @@ struct ena_napi { u32 qid; }; +struct ena_calc_queue_size_ctx { + struct ena_com_dev_get_features_ctx *get_feat_ctx; + struct ena_com_dev *ena_dev; + struct pci_dev *pdev; + u16 tx_queue_size; + u16 rx_queue_size; + u16 max_tx_queue_size; + u16 max_rx_queue_size; + u16 max_tx_sgl_size; + u16 max_rx_sgl_size; +}; + struct ena_tx_buffer { struct sk_buff *skb; /* num of ena desc for this specific skb @@ -322,6 +334,9 @@ struct ena_adapter { u32 tx_ring_size; u32 rx_ring_size; + u32 max_tx_ring_size; + u32 max_rx_ring_size; + u32 msg_enable; u16 max_tx_sgl_size; From patchwork Thu Jun 6 11:55:17 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Jubran, Samih" X-Patchwork-Id: 1111080 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=quarantine dis=none) header.from=amazon.com Authentication-Results: ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=amazon.com header.i=@amazon.com header.b="ntM/F9Wu"; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 45KPHl3w4Tz9s4V for ; Thu, 6 Jun 2019 21:55:47 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727779AbfFFLzq (ORCPT ); Thu, 6 Jun 2019 07:55:46 -0400 Received: from 
smtp-fw-9101.amazon.com
Subject: [PATCH V1 net-next 3/6] net: ena: make ethtool show correct current and max queue sizes
Date: Thu, 6 Jun 2019 14:55:17 +0300
Message-ID: <20190606115520.20394-4-sameehj@amazon.com>
In-Reply-To: <20190606115520.20394-1-sameehj@amazon.com>
References: <20190606115520.20394-1-sameehj@amazon.com>
X-Mailing-List: netdev@vger.kernel.org

From: Sameeh Jubran

Currently ethtool -g shows the same size for current and max queue sizes.
Report the device's maximum ring sizes as the max values and the rings'
configured sizes as the current values.
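For clarity, a minimal standalone sketch (userspace C, not driver code) of
the before/after reporting: before this patch both the max and current
values were read from the same ring size, so they were always equal; after
it, the max values come from the device limits saved at probe time. The
structs below only loosely mirror ethtool_ringparam and ena_adapter, and
the numbers are invented.

```c
#include <stdint.h>
#include <stdio.h>

struct ringparam {
	uint32_t rx_max_pending, tx_max_pending; /* pre-set maximums */
	uint32_t rx_pending, tx_pending;         /* current settings */
};

struct adapter {
	uint32_t max_tx_ring_size, max_rx_ring_size; /* device limits */
	uint32_t tx_ring_size, rx_ring_size;         /* configured sizes */
};

/* old behaviour: max and current both read the configured ring size */
static void get_ringparam_old(const struct adapter *a, struct ringparam *r)
{
	r->tx_max_pending = r->tx_pending = a->tx_ring_size;
	r->rx_max_pending = r->rx_pending = a->rx_ring_size;
}

/* new behaviour: max comes from the device limits saved at probe time */
static void get_ringparam_new(const struct adapter *a, struct ringparam *r)
{
	r->tx_max_pending = a->max_tx_ring_size;
	r->rx_max_pending = a->max_rx_ring_size;
	r->tx_pending = a->tx_ring_size;
	r->rx_pending = a->rx_ring_size;
}

int main(void)
{
	struct adapter a = { 8192, 8192, 1024, 1024 };
	struct ringparam before, after;

	get_ringparam_old(&a, &before);
	get_ringparam_new(&a, &after);
	printf("old: RX %u/%u  TX %u/%u (current/max)\n",
	       before.rx_pending, before.rx_max_pending,
	       before.tx_pending, before.tx_max_pending);
	printf("new: RX %u/%u  TX %u/%u (current/max)\n",
	       after.rx_pending, after.rx_max_pending,
	       after.tx_pending, after.tx_max_pending);
	return 0;
}
```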
Signed-off-by: Arthur Kiyanovski
Signed-off-by: Sameeh Jubran
---
 drivers/net/ethernet/amazon/ena/ena_ethtool.c | 10 ++++------
 drivers/net/ethernet/amazon/ena/ena_netdev.c  |  2 ++
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_ethtool.c b/drivers/net/ethernet/amazon/ena/ena_ethtool.c
index 878546a48..62915ad82 100644
--- a/drivers/net/ethernet/amazon/ena/ena_ethtool.c
+++ b/drivers/net/ethernet/amazon/ena/ena_ethtool.c
@@ -488,13 +488,11 @@ static void ena_get_ringparam(struct net_device *netdev,
 			      struct ethtool_ringparam *ring)
 {
 	struct ena_adapter *adapter = netdev_priv(netdev);
-	struct ena_ring *tx_ring = &adapter->tx_ring[0];
-	struct ena_ring *rx_ring = &adapter->rx_ring[0];
 
-	ring->rx_max_pending = rx_ring->ring_size;
-	ring->tx_max_pending = tx_ring->ring_size;
-	ring->rx_pending = rx_ring->ring_size;
-	ring->tx_pending = tx_ring->ring_size;
+	ring->tx_max_pending = adapter->max_tx_ring_size;
+	ring->rx_max_pending = adapter->max_rx_ring_size;
+	ring->tx_pending = adapter->tx_ring_size;
+	ring->rx_pending = adapter->rx_ring_size;
 }
 
 static u32 ena_flow_hash_to_flow_type(u16 hash_fields)
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index ba5d580e5..3fe4a9217 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -3438,6 +3438,8 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 
 	adapter->tx_ring_size = calc_queue_ctx.tx_queue_size;
 	adapter->rx_ring_size = calc_queue_ctx.rx_queue_size;
+	adapter->max_tx_ring_size = calc_queue_ctx.max_tx_queue_size;
+	adapter->max_rx_ring_size = calc_queue_ctx.max_rx_queue_size;
 	adapter->max_tx_sgl_size = calc_queue_ctx.max_tx_sgl_size;
 	adapter->max_rx_sgl_size = calc_queue_ctx.max_rx_sgl_size;
 

From patchwork Thu Jun 6 11:55:18 2019
X-Patchwork-Submitter: "Jubran, Samih"
X-Patchwork-Id: 1111081
X-Patchwork-Delegate: davem@davemloft.net
Subject: [PATCH V1 net-next 4/6] net: ena: allow queue allocation backoff when low on memory
Date: Thu, 6 Jun 2019 14:55:18 +0300
Message-ID: <20190606115520.20394-5-sameehj@amazon.com>
In-Reply-To: <20190606115520.20394-1-sameehj@amazon.com>
References: <20190606115520.20394-1-sameehj@amazon.com>
X-Mailing-List: netdev@vger.kernel.org

From: Sameeh Jubran

If there is not enough memory to allocate io queues, the driver will try
to allocate smaller queues.

The backoff algorithm is as follows:

1. Try to allocate TX and RX; if successful:
   1.1. return success
2. Divide by 2 the size of the larger of the RX and TX queues (or both,
   if their size is the same).
3. If TX or RX is smaller than 256:
   3.1. return failure.
4. else:
   4.1. go back to 1.

Also rename the tx_ring_size and rx_ring_size fields in struct
ena_adapter to requested_tx_ring_size and requested_rx_ring_size, and use
RX and TX queue 0 for the actual queue sizes.

Explanation:
The original fields were only used to assign values once to each of the
queues in the adapter in ena_probe(), so they could simply have been
deleted. Now that we have a backoff feature, however, they are useful:
with backoff there can be a difference between the requested queue sizes
and the actual sizes, so the requested sizes must be saved for future
retries of queue allocation (for example, if allocation failed and then
ifdown + ifup was called, we want to start the allocation from the
originally requested queue sizes).
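As a companion to the description above, here is a standalone sketch
(plain C, not the driver code) of the halving backoff. try_alloc() is an
invented stand-in for the real TX/RX queue allocation, its "memory
budget" is arbitrary, and ENA_MIN_RING_SIZE mirrors the 256-entry lower
bound from the patch.

```c
#include <stdbool.h>
#include <stdio.h>

#define ENA_MIN_RING_SIZE 256

/* pretend allocation: succeeds only once the rings fit the budget */
static bool try_alloc(int tx_size, int rx_size)
{
	return tx_size + rx_size <= 1536;
}

static int create_queues_with_backoff(int requested_tx, int requested_rx)
{
	int cur_tx = requested_tx, cur_rx = requested_rx;
	int new_tx, new_rx;

	while (1) {
		if (try_alloc(cur_tx, cur_rx)) {
			printf("created queues TX=%d RX=%d\n", cur_tx, cur_rx);
			return 0;
		}

		/* give up once either ring is already below the minimum */
		if (cur_tx < ENA_MIN_RING_SIZE || cur_rx < ENA_MIN_RING_SIZE) {
			printf("failed even with the smallest queues\n");
			return -1;
		}

		new_tx = cur_tx;
		new_rx = cur_rx;

		/* halve the larger ring, or both when they are equal */
		if (cur_rx <= cur_tx)
			new_tx = cur_tx / 2;
		if (cur_rx >= cur_tx)
			new_rx = cur_rx / 2;

		printf("retrying with TX=%d RX=%d\n", new_tx, new_rx);
		cur_tx = new_tx;
		cur_rx = new_rx;
	}
}

int main(void)
{
	/* 1024/1024 does not fit the 1536 budget, 512/512 does */
	return create_queues_with_backoff(1024, 1024) ? 1 : 0;
}
```

In the actual patch the same loop also tears down whatever was partially
created (TX/RX resources and queues) before each retry.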
Signed-off-by: Arthur Kiyanovski Signed-off-by: Sameeh Jubran --- drivers/net/ethernet/amazon/ena/ena_ethtool.c | 4 +- drivers/net/ethernet/amazon/ena/ena_netdev.c | 159 +++++++++++++----- drivers/net/ethernet/amazon/ena/ena_netdev.h | 6 +- 3 files changed, 127 insertions(+), 42 deletions(-) diff --git a/drivers/net/ethernet/amazon/ena/ena_ethtool.c b/drivers/net/ethernet/amazon/ena/ena_ethtool.c index 62915ad82..101d93f16 100644 --- a/drivers/net/ethernet/amazon/ena/ena_ethtool.c +++ b/drivers/net/ethernet/amazon/ena/ena_ethtool.c @@ -491,8 +491,8 @@ static void ena_get_ringparam(struct net_device *netdev, ring->tx_max_pending = adapter->max_tx_ring_size; ring->rx_max_pending = adapter->max_rx_ring_size; - ring->tx_pending = adapter->tx_ring_size; - ring->rx_pending = adapter->rx_ring_size; + ring->tx_pending = adapter->tx_ring[0].ring_size; + ring->rx_pending = adapter->rx_ring[0].ring_size; } static u32 ena_flow_hash_to_flow_type(u16 hash_fields) diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c index 3fe4a9217..938aca254 100644 --- a/drivers/net/ethernet/amazon/ena/ena_netdev.c +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c @@ -182,7 +182,7 @@ static void ena_init_io_rings(struct ena_adapter *adapter) ena_init_io_rings_common(adapter, rxr, i); /* TX specific ring state */ - txr->ring_size = adapter->tx_ring_size; + txr->ring_size = adapter->requested_tx_ring_size; txr->tx_max_header_size = ena_dev->tx_max_header_size; txr->tx_mem_queue_type = ena_dev->tx_mem_queue_type; txr->sgl_size = adapter->max_tx_sgl_size; @@ -190,7 +190,7 @@ static void ena_init_io_rings(struct ena_adapter *adapter) ena_com_get_nonadaptive_moderation_interval_tx(ena_dev); /* RX specific ring state */ - rxr->ring_size = adapter->rx_ring_size; + rxr->ring_size = adapter->requested_rx_ring_size; rxr->rx_copybreak = adapter->rx_copybreak; rxr->sgl_size = adapter->max_rx_sgl_size; rxr->smoothed_interval = @@ -594,7 +594,6 @@ static void ena_free_rx_bufs(struct ena_adapter *adapter, /* ena_refill_all_rx_bufs - allocate all queues Rx buffers * @adapter: board private structure - * */ static void ena_refill_all_rx_bufs(struct ena_adapter *adapter) { @@ -1638,7 +1637,7 @@ static int ena_create_io_tx_queue(struct ena_adapter *adapter, int qid) ctx.qid = ena_qid; ctx.mem_queue_type = ena_dev->tx_mem_queue_type; ctx.msix_vector = msix_vector; - ctx.queue_size = adapter->tx_ring_size; + ctx.queue_size = tx_ring->ring_size; ctx.numa_node = cpu_to_node(tx_ring->cpu); rc = ena_com_create_io_queue(ena_dev, &ctx); @@ -1705,7 +1704,7 @@ static int ena_create_io_rx_queue(struct ena_adapter *adapter, int qid) ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_RX; ctx.mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST; ctx.msix_vector = msix_vector; - ctx.queue_size = adapter->rx_ring_size; + ctx.queue_size = rx_ring->ring_size; ctx.numa_node = cpu_to_node(rx_ring->cpu); rc = ena_com_create_io_queue(ena_dev, &ctx); @@ -1752,6 +1751,112 @@ create_err: return rc; } +static inline void set_io_rings_size(struct ena_adapter *adapter, + int new_tx_size, int new_rx_size) +{ + int i; + + for (i = 0; i < adapter->num_queues; i++) { + adapter->tx_ring[i].ring_size = new_tx_size; + adapter->rx_ring[i].ring_size = new_rx_size; + } +} + +/* This function allows queue allocation to backoff when the system is + * low on memory. If there is not enough memory to allocate io queues + * the driver will try to allocate smaller queues. + * + * The backoff algorithm is as follows: + * 1. 
Try to allocate TX and RX and if successful. + * 1.1. return success + * + * 2. Divide by 2 the size of the larger of RX and TX queues (or both if their size is the same). + * + * 3. If TX or RX is smaller than 256 + * 3.1. return failure. + * 4. else + * 4.1. go back to 1. + */ +static int create_queues_with_size_backoff(struct ena_adapter *adapter) +{ + int rc, cur_rx_ring_size, cur_tx_ring_size; + int new_rx_ring_size, new_tx_ring_size; + + /* current queue sizes might be set to smaller than the requested + * ones due to past queue allocation failures. + */ + set_io_rings_size(adapter, adapter->requested_tx_ring_size, + adapter->requested_rx_ring_size); + + while (1) { + rc = ena_setup_all_tx_resources(adapter); + if (rc) + goto err_setup_tx; + + rc = ena_create_all_io_tx_queues(adapter); + if (rc) + goto err_create_tx_queues; + + rc = ena_setup_all_rx_resources(adapter); + if (rc) + goto err_setup_rx; + + rc = ena_create_all_io_rx_queues(adapter); + if (rc) + goto err_create_rx_queues; + + return 0; + +err_create_rx_queues: + ena_free_all_io_rx_resources(adapter); +err_setup_rx: + ena_destroy_all_tx_queues(adapter); +err_create_tx_queues: + ena_free_all_io_tx_resources(adapter); +err_setup_tx: + if (rc != -ENOMEM) { + netif_err(adapter, ifup, adapter->netdev, + "Queue creation failed with error code %d\n", + rc); + return rc; + } + + cur_tx_ring_size = adapter->tx_ring[0].ring_size; + cur_rx_ring_size = adapter->rx_ring[0].ring_size; + + netif_err(adapter, ifup, adapter->netdev, + "Not enough memory to create queues with sizes TX=%d, RX=%d\n", + cur_tx_ring_size, cur_rx_ring_size); + + new_tx_ring_size = cur_tx_ring_size; + new_rx_ring_size = cur_rx_ring_size; + + /* Decrease the size of the larger queue, or + * decrease both if they are the same size. + */ + if (cur_rx_ring_size <= cur_tx_ring_size) + new_tx_ring_size = cur_tx_ring_size / 2; + if (cur_rx_ring_size >= cur_tx_ring_size) + new_rx_ring_size = cur_rx_ring_size / 2; + + if (cur_tx_ring_size < ENA_MIN_RING_SIZE || + cur_rx_ring_size < ENA_MIN_RING_SIZE) { + netif_err(adapter, ifup, adapter->netdev, + "Queue creation failed with the smallest possible queue size of %d for both queues. 
Not retrying with smaller queues\n", + ENA_MIN_RING_SIZE); + return rc; + } + + netif_err(adapter, ifup, adapter->netdev, + "Retrying queue creation with sizes TX=%d, RX=%d\n", + new_tx_ring_size, + new_rx_ring_size); + + set_io_rings_size(adapter, new_tx_ring_size, + new_rx_ring_size); + } +} + static int ena_up(struct ena_adapter *adapter) { int rc, i; @@ -1771,25 +1876,9 @@ static int ena_up(struct ena_adapter *adapter) if (rc) goto err_req_irq; - /* allocate transmit descriptors */ - rc = ena_setup_all_tx_resources(adapter); + rc = create_queues_with_size_backoff(adapter); if (rc) - goto err_setup_tx; - - /* allocate receive descriptors */ - rc = ena_setup_all_rx_resources(adapter); - if (rc) - goto err_setup_rx; - - /* Create TX queues */ - rc = ena_create_all_io_tx_queues(adapter); - if (rc) - goto err_create_tx_queues; - - /* Create RX queues */ - rc = ena_create_all_io_rx_queues(adapter); - if (rc) - goto err_create_rx_queues; + goto err_create_queues_with_backoff; rc = ena_up_complete(adapter); if (rc) @@ -1818,14 +1907,11 @@ static int ena_up(struct ena_adapter *adapter) return rc; err_up: - ena_destroy_all_rx_queues(adapter); -err_create_rx_queues: ena_destroy_all_tx_queues(adapter); -err_create_tx_queues: - ena_free_all_io_rx_resources(adapter); -err_setup_rx: ena_free_all_io_tx_resources(adapter); -err_setup_tx: + ena_destroy_all_rx_queues(adapter); + ena_free_all_io_rx_resources(adapter); +err_create_queues_with_backoff: ena_free_io_irq(adapter); err_req_irq: ena_del_napi(adapter); @@ -3296,17 +3382,14 @@ static int ena_calc_queue_size(struct ena_calc_queue_size_ctx *ctx) max_tx_queue_size = rounddown_pow_of_two(max_tx_queue_size); max_rx_queue_size = rounddown_pow_of_two(max_rx_queue_size); - tx_queue_size = min_t(u32, tx_queue_size, max_tx_queue_size); - rx_queue_size = min_t(u32, rx_queue_size, max_rx_queue_size); + tx_queue_size = clamp_val(tx_queue_size, ENA_MIN_RING_SIZE, + max_tx_queue_size); + rx_queue_size = clamp_val(rx_queue_size, ENA_MIN_RING_SIZE, + max_rx_queue_size); tx_queue_size = rounddown_pow_of_two(tx_queue_size); rx_queue_size = rounddown_pow_of_two(rx_queue_size); - if (unlikely(!rx_queue_size || !tx_queue_size)) { - dev_err(&ctx->pdev->dev, "Invalid queue size\n"); - return -EFAULT; - } - ctx->max_tx_queue_size = max_tx_queue_size; ctx->max_rx_queue_size = max_rx_queue_size; ctx->tx_queue_size = tx_queue_size; @@ -3436,8 +3519,8 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent) adapter->msg_enable = netif_msg_init(debug, DEFAULT_MSG_ENABLE); adapter->reset_reason = ENA_REGS_RESET_NORMAL; - adapter->tx_ring_size = calc_queue_ctx.tx_queue_size; - adapter->rx_ring_size = calc_queue_ctx.rx_queue_size; + adapter->requested_tx_ring_size = calc_queue_ctx.tx_queue_size; + adapter->requested_rx_ring_size = calc_queue_ctx.rx_queue_size; adapter->max_tx_ring_size = calc_queue_ctx.max_tx_queue_size; adapter->max_rx_ring_size = calc_queue_ctx.max_rx_queue_size; adapter->max_tx_sgl_size = calc_queue_ctx.max_tx_sgl_size; diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h index afd2769f1..e8fe08cb7 100644 --- a/drivers/net/ethernet/amazon/ena/ena_netdev.h +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h @@ -79,6 +79,8 @@ #define ENA_BAR_MASK (BIT(ENA_REG_BAR) | BIT(ENA_MEM_BAR)) #define ENA_DEFAULT_RING_SIZE (1024) +#define ENA_MIN_RING_SIZE (256) + #define ENA_TX_WAKEUP_THRESH (MAX_SKB_FRAGS + 2) #define ENA_DEFAULT_RX_COPYBREAK (256 - NET_IP_ALIGN) @@ -331,8 +333,8 @@ struct 
ena_adapter {
 	u32 tx_usecs, rx_usecs; /* interrupt moderation */
 	u32 tx_frames, rx_frames; /* interrupt moderation */
 
-	u32 tx_ring_size;
-	u32 rx_ring_size;
+	u32 requested_tx_ring_size;
+	u32 requested_rx_ring_size;
 
 	u32 max_tx_ring_size;
 	u32 max_rx_ring_size;

From patchwork Thu Jun 6 11:55:19 2019
X-Patchwork-Submitter: "Jubran, Samih"
X-Patchwork-Id: 1111082
X-Patchwork-Delegate: davem@davemloft.net
Subject: [PATCH V1 net-next 5/6] net: ena: add ethtool function for changing io queue sizes
Date: Thu, 6 Jun 2019 14:55:19 +0300
Message-ID: <20190606115520.20394-6-sameehj@amazon.com>
In-Reply-To: <20190606115520.20394-1-sameehj@amazon.com>
References: <20190606115520.20394-1-sameehj@amazon.com>
X-Mailing-List: netdev@vger.kernel.org From: Sameeh Jubran Implement the set_ringparam() function of the ethtool interface to enable the changing of io queue sizes. Signed-off-by: Arthur Kiyanovski Signed-off-by: Sameeh Jubran --- drivers/net/ethernet/amazon/ena/ena_ethtool.c | 25 +++++++++++++++++++ drivers/net/ethernet/amazon/ena/ena_netdev.c | 14 +++++++++++ drivers/net/ethernet/amazon/ena/ena_netdev.h | 5 +++- 3 files changed, 43 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/amazon/ena/ena_ethtool.c b/drivers/net/ethernet/amazon/ena/ena_ethtool.c index 101d93f16..33e28ad71 100644 --- a/drivers/net/ethernet/amazon/ena/ena_ethtool.c +++ b/drivers/net/ethernet/amazon/ena/ena_ethtool.c @@ -495,6 +495,30 @@ static void ena_get_ringparam(struct net_device *netdev, ring->rx_pending = adapter->rx_ring[0].ring_size; } +static int ena_set_ringparam(struct net_device *netdev, + struct ethtool_ringparam *ring) +{ + struct ena_adapter *adapter = netdev_priv(netdev); + u32 new_tx_size, new_rx_size; + + if (ring->rx_mini_pending || ring->rx_jumbo_pending) + return -EINVAL; + + new_tx_size = clamp_val(ring->tx_pending, ENA_MIN_RING_SIZE, + adapter->max_tx_ring_size); + new_tx_size = rounddown_pow_of_two(new_tx_size); + + new_rx_size = clamp_val(ring->rx_pending, ENA_MIN_RING_SIZE, + adapter->max_rx_ring_size); + new_rx_size = rounddown_pow_of_two(new_rx_size); + + if (new_tx_size == adapter->requested_tx_ring_size && + new_rx_size == adapter->requested_rx_ring_size) + return 0; + + return ena_update_queue_sizes(adapter, new_tx_size, new_rx_size); +} + static u32 ena_flow_hash_to_flow_type(u16 hash_fields) { u32 data = 0; @@ -860,6 +884,7 @@ static const struct ethtool_ops ena_ethtool_ops = { .get_coalesce = ena_get_coalesce, .set_coalesce = ena_set_coalesce, .get_ringparam = ena_get_ringparam, + .set_ringparam = ena_set_ringparam, .get_sset_count = ena_get_sset_count, .get_strings = ena_get_strings, .get_ethtool_stats = ena_get_ethtool_stats, diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c index 938aca254..7d3837c13 100644 --- a/drivers/net/ethernet/amazon/ena/ena_netdev.c +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c @@ -2031,6 +2031,20 @@ static int ena_close(struct net_device *netdev) return 0; } +int ena_update_queue_sizes(struct ena_adapter *adapter, + int new_tx_size, + int new_rx_size) +{ + bool dev_up; + + dev_up = test_bit(ENA_FLAG_DEV_UP, &adapter->flags); + ena_close(adapter->netdev); + adapter->requested_tx_ring_size = new_tx_size; + adapter->requested_rx_ring_size = new_rx_size; + ena_init_io_rings(adapter); + return dev_up ? 
ena_up(adapter) : 0; +} + static void ena_tx_csum(struct ena_com_tx_ctx *ena_tx_ctx, struct sk_buff *skb) { u32 mss = skb_shinfo(skb)->gso_size; diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h index e8fe08cb7..2da20ad5a 100644 --- a/drivers/net/ethernet/amazon/ena/ena_netdev.h +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h @@ -81,7 +81,6 @@ #define ENA_DEFAULT_RING_SIZE (1024) #define ENA_MIN_RING_SIZE (256) - #define ENA_TX_WAKEUP_THRESH (MAX_SKB_FRAGS + 2) #define ENA_DEFAULT_RX_COPYBREAK (256 - NET_IP_ALIGN) @@ -389,6 +388,10 @@ void ena_dump_stats_to_dmesg(struct ena_adapter *adapter); void ena_dump_stats_to_buf(struct ena_adapter *adapter, u8 *buf); +int ena_update_queue_sizes(struct ena_adapter *adapter, + int new_tx_size, + int new_rx_size); + int ena_get_sset_count(struct net_device *netdev, int sset); #endif /* !(ENA_H) */ From patchwork Thu Jun 6 11:55:20 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Jubran, Samih" X-Patchwork-Id: 1111083 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=quarantine dis=none) header.from=amazon.com Authentication-Results: ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=amazon.com header.i=@amazon.com header.b="nlxobOCW"; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 45KPJ41JPWz9s4V for ; Thu, 6 Jun 2019 21:56:04 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728659AbfFFL4D (ORCPT ); Thu, 6 Jun 2019 07:56:03 -0400 Received: from smtp-fw-33001.amazon.com ([207.171.190.10]:56420 "EHLO smtp-fw-33001.amazon.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727783AbfFFL4B (ORCPT ); Thu, 6 Jun 2019 07:56:01 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209; t=1559822161; x=1591358161; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=3WqT5C86Rb+8AGSkMvWBSRND5JwsBf24M0cQ0ilrRfc=; b=nlxobOCWINDjP7h67HwYn8BgLxJnhwx/cgmgCbl11Veh28sxyiz/VfZg 3UTKCIbluQUnFmDWUyw7v4mPObA4fir87L10K6HQZ45RhSrSaSAqOmwRj 6L+4PdV3UUrajtp8FHt8EjGMJuMdpBZ/7zd812v2j7cuRPG3M/oYSZHCg M=; X-IronPort-AV: E=Sophos;i="5.60,559,1549929600"; d="scan'208";a="803947851" Received: from sea3-co-svc-lb6-vlan2.sea.amazon.com (HELO email-inbound-relay-2a-22cc717f.us-west-2.amazon.com) ([10.47.22.34]) by smtp-border-fw-out-33001.sea14.amazon.com with ESMTP; 06 Jun 2019 11:56:01 +0000 Received: from EX13MTAUWC001.ant.amazon.com (pdx1-ws-svc-p6-lb9-vlan2.pdx.amazon.com [10.236.137.194]) by email-inbound-relay-2a-22cc717f.us-west-2.amazon.com (Postfix) with ESMTPS id 07792A1AC2; Thu, 6 Jun 2019 11:56:00 +0000 (UTC) Received: from EX13d09UWC002.ant.amazon.com (10.43.162.102) by EX13MTAUWC001.ant.amazon.com (10.43.162.135) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Thu, 6 Jun 2019 11:55:51 +0000 Received: from EX13MTAUWC001.ant.amazon.com (10.43.162.135) by EX13d09UWC002.ant.amazon.com (10.43.162.102) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Thu, 6 Jun 2019 11:55:51 +0000 Received: 
from HFA16-8226Y22.hfa16.amazon.com
Subject: [PATCH V1 net-next 6/6] net: ena: update driver version from 2.0.3 to 2.1.0
Date: Thu, 6 Jun 2019 14:55:20 +0300
Message-ID: <20190606115520.20394-7-sameehj@amazon.com>
In-Reply-To: <20190606115520.20394-1-sameehj@amazon.com>
References: <20190606115520.20394-1-sameehj@amazon.com>
X-Mailing-List: netdev@vger.kernel.org

From: Sameeh Jubran

Update driver version to match device specification.

Signed-off-by: Sameeh Jubran
---
 drivers/net/ethernet/amazon/ena/ena_netdev.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h
index 2da20ad5a..e83a5e041 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.h
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h
@@ -44,8 +44,8 @@
 #include "ena_eth_com.h"
 
 #define DRV_MODULE_VER_MAJOR	2
-#define DRV_MODULE_VER_MINOR	0
-#define DRV_MODULE_VER_SUBMINOR	3
+#define DRV_MODULE_VER_MINOR	1
+#define DRV_MODULE_VER_SUBMINOR	0
 
 #define DRV_MODULE_NAME	"ena"
 #ifndef DRV_MODULE_VERSION