From patchwork Thu May 8 21:50:05 2025
X-Patchwork-Submitter: Pavan Kumar Linga
X-Patchwork-Id: 2083148
X-Patchwork-Delegate: anthony.l.nguyen@intel.com
From: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, milena.olech@intel.com, anton.nadezhdin@intel.com, Pavan Kumar Linga <pavan.kumar.linga@intel.com>
Date: Thu, 8 May 2025 14:50:05 -0700
Message-ID: <20250508215013.32668-2-pavan.kumar.linga@intel.com>
In-Reply-To: <20250508215013.32668-1-pavan.kumar.linga@intel.com>
References: <20250508215013.32668-1-pavan.kumar.linga@intel.com>
Subject: [Intel-wired-lan] [PATCH iwl-next v4 1/9] idpf: introduce local idpf structure to store virtchnl queue chunks

Queue ID and register info received from the device Control Plane is stored locally in the same little-endian format. As the queue chunks are retrieved in three functions, lexx_to_cpu conversions are done each time. Instead, introduce a new idpf structure that stores the received queue info in CPU-endian format; this also avoids the conditional check when retrieving the queue chunks. With this change, there is no need to store the queue chunks in the 'req_qs_chunks' field, so remove that field.
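For illustration, a minimal sketch of the pattern this patch adopts (hypothetical names, assuming the usual kernel byte-order helpers; the actual conversion is idpf_vport_init_queue_reg_chunks() in the diff below): the little-endian wire chunks are converted once at init time, so every later consumer reads plain CPU-endian fields instead of calling lexx_to_cpu at each access.

	#include <linux/types.h>
	#include <asm/byteorder.h>

	/* wire format as received from the Control Plane (little endian) */
	struct wire_chunk {
		__le32 type;
		__le32 start_queue_id;
		__le32 num_queues;
	};

	/* local CPU-endian copy, converted once at init time */
	struct cpu_chunk {
		u32 type;
		u32 start_queue_id;
		u32 num_queues;
	};

	/* illustrative stand-in for the one-time conversion helper */
	static void chunk_to_cpu(struct cpu_chunk *dst, const struct wire_chunk *src)
	{
		dst->type = le32_to_cpu(src->type);
		dst->start_queue_id = le32_to_cpu(src->start_queue_id);
		dst->num_queues = le32_to_cpu(src->num_queues);
	}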
Suggested-by: Milena Olech Reviewed-by: Anton Nadezhdin Signed-off-by: Pavan Kumar Linga Tested-by: Samuel Salin --- drivers/net/ethernet/intel/idpf/idpf.h | 31 +++- drivers/net/ethernet/intel/idpf/idpf_lib.c | 35 ++-- .../net/ethernet/intel/idpf/idpf_virtchnl.c | 155 +++++++++--------- .../net/ethernet/intel/idpf/idpf_virtchnl.h | 11 +- 4 files changed, 140 insertions(+), 92 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h index 44a6c23cd560..1f7f56a4773a 100644 --- a/drivers/net/ethernet/intel/idpf/idpf.h +++ b/drivers/net/ethernet/intel/idpf/idpf.h @@ -497,18 +497,45 @@ struct idpf_vector_lifo { u16 *vec_idx; }; +/** + * idpf_queue_id_reg_chunk - individual queue ID and register chunk + * @qtail_reg_start: queue tail register offset + * @qtail_reg_spacing: queue tail register spacing + * @type: queue type of the queues in the chunk + * @start_queue_id: starting queue ID in the chunk + * @num_queues: number of queues in the chunk + */ +struct idpf_queue_id_reg_chunk { + u64 qtail_reg_start; + u32 qtail_reg_spacing; + u32 type; + u32 start_queue_id; + u32 num_queues; +}; + +/** + * idpf_queue_id_reg_info - struct to store the queue ID and register chunk + * info received over the mailbox + * @num_chunks: number of chunks + * @queue_chunks: array of chunks + */ +struct idpf_queue_id_reg_info { + u16 num_chunks; + struct idpf_queue_id_reg_chunk *queue_chunks; +}; + /** * struct idpf_vport_config - Vport configuration data * @user_config: see struct idpf_vport_user_config_data * @max_q: Maximum possible queues - * @req_qs_chunks: Queue chunk data for requested queues + * @qid_reg_info: Struct to store the queue ID and register info * @mac_filter_list_lock: Lock to protect mac filters * @flags: See enum idpf_vport_config_flags */ struct idpf_vport_config { struct idpf_vport_user_config_data user_config; struct idpf_vport_max_q max_q; - struct virtchnl2_add_queues *req_qs_chunks; + struct idpf_queue_id_reg_info qid_reg_info; spinlock_t mac_filter_list_lock; DECLARE_BITMAP(flags, IDPF_VPORT_CONFIG_FLAGS_NBITS); }; diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c index 7d42f21c86b6..a11097e98517 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_lib.c +++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c @@ -838,6 +838,7 @@ static void idpf_remove_features(struct idpf_vport *vport) static void idpf_vport_stop(struct idpf_vport *vport) { struct idpf_netdev_priv *np = netdev_priv(vport->netdev); + struct idpf_queue_id_reg_info *chunks; if (np->state <= __IDPF_VPORT_DOWN) return; @@ -845,6 +846,8 @@ static void idpf_vport_stop(struct idpf_vport *vport) netif_carrier_off(vport->netdev); netif_tx_disable(vport->netdev); + chunks = &vport->adapter->vport_config[vport->idx]->qid_reg_info; + idpf_send_disable_vport_msg(vport); idpf_send_disable_queues_msg(vport); idpf_send_map_unmap_queue_vector_msg(vport, false); @@ -854,7 +857,7 @@ static void idpf_vport_stop(struct idpf_vport *vport) * instead of deleting and reallocating the vport. 
*/ if (test_and_clear_bit(IDPF_VPORT_DEL_QUEUES, vport->flags)) - idpf_send_delete_queues_msg(vport); + idpf_send_delete_queues_msg(vport, chunks); idpf_remove_features(vport); @@ -952,15 +955,14 @@ static void idpf_vport_rel(struct idpf_vport *vport) kfree(vport->q_vector_idxs); vport->q_vector_idxs = NULL; + kfree(vport_config->qid_reg_info.queue_chunks); + vport_config->qid_reg_info.queue_chunks = NULL; kfree(adapter->vport_params_recvd[idx]); adapter->vport_params_recvd[idx] = NULL; kfree(adapter->vport_params_reqd[idx]); adapter->vport_params_reqd[idx] = NULL; - if (adapter->vport_config[idx]) { - kfree(adapter->vport_config[idx]->req_qs_chunks); - adapter->vport_config[idx]->req_qs_chunks = NULL; - } + kfree(vport); adapter->num_alloc_vports--; } @@ -1075,6 +1077,7 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter, u16 idx = adapter->next_vport; struct idpf_vport *vport; u16 num_max_q; + int err; if (idx == IDPF_NO_FREE_SLOT) return NULL; @@ -1107,7 +1110,9 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter, if (!vport->q_vector_idxs) goto free_vport; - idpf_vport_init(vport, max_q); + err = idpf_vport_init(vport, max_q); + if (err) + goto free_vector_idxs; /* This alloc is done separate from the LUT because it's not strictly * dependent on how many queues we have. If we change number of queues @@ -1117,7 +1122,7 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter, rss_data = &adapter->vport_config[idx]->user_config.rss_data; rss_data->rss_key = kzalloc(rss_data->rss_key_size, GFP_KERNEL); if (!rss_data->rss_key) - goto free_vector_idxs; + goto free_qreg_chunks; /* Initialize default rss key */ netdev_rss_key_fill((void *)rss_data->rss_key, rss_data->rss_key_size); @@ -1132,6 +1137,8 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter, return vport; +free_qreg_chunks: + kfree(adapter->vport_config[idx]->qid_reg_info.queue_chunks); free_vector_idxs: kfree(vport->q_vector_idxs); free_vport: @@ -1308,6 +1315,7 @@ static int idpf_vport_open(struct idpf_vport *vport) struct idpf_netdev_priv *np = netdev_priv(vport->netdev); struct idpf_adapter *adapter = vport->adapter; struct idpf_vport_config *vport_config; + struct idpf_queue_id_reg_info *chunks; int err; if (np->state != __IDPF_VPORT_DOWN) @@ -1327,7 +1335,10 @@ static int idpf_vport_open(struct idpf_vport *vport) if (err) goto intr_rel; - err = idpf_vport_queue_ids_init(vport); + vport_config = adapter->vport_config[vport->idx]; + chunks = &vport_config->qid_reg_info; + + err = idpf_vport_queue_ids_init(vport, chunks); if (err) { dev_err(&adapter->pdev->dev, "Failed to initialize queue ids for vport %u: %d\n", vport->vport_id, err); @@ -1348,7 +1359,7 @@ static int idpf_vport_open(struct idpf_vport *vport) goto queues_rel; } - err = idpf_queue_reg_init(vport); + err = idpf_queue_reg_init(vport, chunks); if (err) { dev_err(&adapter->pdev->dev, "Failed to initialize queue registers for vport %u: %d\n", vport->vport_id, err); @@ -1389,7 +1400,6 @@ static int idpf_vport_open(struct idpf_vport *vport) idpf_restore_features(vport); - vport_config = adapter->vport_config[vport->idx]; if (vport_config->user_config.rss_data.rss_lut) err = idpf_config_rss(vport); else @@ -1827,6 +1837,7 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport, struct idpf_netdev_priv *np = netdev_priv(vport->netdev); enum idpf_vport_state current_state = np->state; struct idpf_adapter *adapter = vport->adapter; + struct idpf_vport_config *vport_config; struct 
idpf_vport *new_vport; int err; @@ -1873,8 +1884,10 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport, goto free_vport; } + vport_config = adapter->vport_config[vport->idx]; + if (current_state <= __IDPF_VPORT_DOWN) { - idpf_send_delete_queues_msg(vport); + idpf_send_delete_queues_msg(vport, &vport_config->qid_reg_info); } else { set_bit(IDPF_VPORT_DEL_QUEUES, vport->flags); idpf_vport_stop(vport); diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c index 8b8c5415e418..91fc908e5e20 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c @@ -1007,6 +1007,42 @@ static void idpf_init_avail_queues(struct idpf_adapter *adapter) avail_queues->avail_complq = le16_to_cpu(caps->max_tx_complq); } +/** + * idpf_vport_init_queue_reg_chunks - initialize queue register chunks + * @vport_config: persistent vport structure to store the queue register info + * @schunks: source chunks to copy data from + * + * Return: %0 on success, -%errno on failure. + */ +static int +idpf_vport_init_queue_reg_chunks(struct idpf_vport_config *vport_config, + struct virtchnl2_queue_reg_chunks *schunks) +{ + struct idpf_queue_id_reg_info *q_info = &vport_config->qid_reg_info; + u16 num_chunks = le16_to_cpu(schunks->num_chunks); + + kfree(q_info->queue_chunks); + + q_info->num_chunks = num_chunks; + q_info->queue_chunks = kcalloc(num_chunks, sizeof(*q_info->queue_chunks), + GFP_KERNEL); + if (!q_info->queue_chunks) + return -ENOMEM; + + for (u16 i = 0; i < num_chunks; i++) { + struct idpf_queue_id_reg_chunk *dchunk = &q_info->queue_chunks[i]; + struct virtchnl2_queue_reg_chunk *schunk = &schunks->chunks[i]; + + dchunk->qtail_reg_start = le64_to_cpu(schunk->qtail_reg_start); + dchunk->qtail_reg_spacing = le32_to_cpu(schunk->qtail_reg_spacing); + dchunk->type = le32_to_cpu(schunk->type); + dchunk->start_queue_id = le32_to_cpu(schunk->start_queue_id); + dchunk->num_queues = le32_to_cpu(schunk->num_queues); + } + + return 0; +} + /** * idpf_get_reg_intr_vecs - Get vector queue register offset * @vport: virtual port structure @@ -1067,25 +1103,25 @@ int idpf_get_reg_intr_vecs(struct idpf_vport *vport, * are filled. 
*/ static int idpf_vport_get_q_reg(u32 *reg_vals, int num_regs, u32 q_type, - struct virtchnl2_queue_reg_chunks *chunks) + struct idpf_queue_id_reg_info *chunks) { - u16 num_chunks = le16_to_cpu(chunks->num_chunks); + u16 num_chunks = chunks->num_chunks; int reg_filled = 0, i; u32 reg_val; while (num_chunks--) { - struct virtchnl2_queue_reg_chunk *chunk; + struct idpf_queue_id_reg_chunk *chunk; u16 num_q; - chunk = &chunks->chunks[num_chunks]; - if (le32_to_cpu(chunk->type) != q_type) + chunk = &chunks->queue_chunks[num_chunks]; + if (chunk->type != q_type) continue; - num_q = le32_to_cpu(chunk->num_queues); - reg_val = le64_to_cpu(chunk->qtail_reg_start); + num_q = chunk->num_queues; + reg_val = chunk->qtail_reg_start; for (i = 0; i < num_q && reg_filled < num_regs ; i++) { reg_vals[reg_filled++] = reg_val; - reg_val += le32_to_cpu(chunk->qtail_reg_spacing); + reg_val += chunk->qtail_reg_spacing; } } @@ -1155,15 +1191,13 @@ static int __idpf_queue_reg_init(struct idpf_vport *vport, u32 *reg_vals, /** * idpf_queue_reg_init - initialize queue registers * @vport: virtual port structure + * @chunks: queue registers received over mailbox * * Return 0 on success, negative on failure */ -int idpf_queue_reg_init(struct idpf_vport *vport) +int idpf_queue_reg_init(struct idpf_vport *vport, + struct idpf_queue_id_reg_info *chunks) { - struct virtchnl2_create_vport *vport_params; - struct virtchnl2_queue_reg_chunks *chunks; - struct idpf_vport_config *vport_config; - u16 vport_idx = vport->idx; int num_regs, ret = 0; u32 *reg_vals; @@ -1172,16 +1206,6 @@ int idpf_queue_reg_init(struct idpf_vport *vport) if (!reg_vals) return -ENOMEM; - vport_config = vport->adapter->vport_config[vport_idx]; - if (vport_config->req_qs_chunks) { - struct virtchnl2_add_queues *vc_aq = - (struct virtchnl2_add_queues *)vport_config->req_qs_chunks; - chunks = &vc_aq->chunks; - } else { - vport_params = vport->adapter->vport_params_recvd[vport_idx]; - chunks = &vport_params->chunks; - } - /* Initialize Tx queue tail register address */ num_regs = idpf_vport_get_q_reg(reg_vals, IDPF_LARGE_MAX_Q, VIRTCHNL2_QUEUE_TYPE_TX, @@ -2029,46 +2053,36 @@ int idpf_send_disable_queues_msg(struct idpf_vport *vport) * @num_chunks: number of chunks to copy */ static void idpf_convert_reg_to_queue_chunks(struct virtchnl2_queue_chunk *dchunks, - struct virtchnl2_queue_reg_chunk *schunks, + struct idpf_queue_id_reg_chunk *schunks, u16 num_chunks) { u16 i; for (i = 0; i < num_chunks; i++) { - dchunks[i].type = schunks[i].type; - dchunks[i].start_queue_id = schunks[i].start_queue_id; - dchunks[i].num_queues = schunks[i].num_queues; + dchunks[i].type = cpu_to_le32(schunks[i].type); + dchunks[i].start_queue_id = cpu_to_le32(schunks[i].start_queue_id); + dchunks[i].num_queues = cpu_to_le32(schunks[i].num_queues); } } /** * idpf_send_delete_queues_msg - send delete queues virtchnl message - * @vport: Virtual port private data structure + * @vport: virtual port private data structure + * @chunks: queue ids received over mailbox * * Will send delete queues virtchnl message. Return 0 on success, negative on * failure. 
*/ -int idpf_send_delete_queues_msg(struct idpf_vport *vport) +int idpf_send_delete_queues_msg(struct idpf_vport *vport, + struct idpf_queue_id_reg_info *chunks) { struct virtchnl2_del_ena_dis_queues *eq __free(kfree) = NULL; - struct virtchnl2_create_vport *vport_params; - struct virtchnl2_queue_reg_chunks *chunks; struct idpf_vc_xn_params xn_params = {}; - struct idpf_vport_config *vport_config; - u16 vport_idx = vport->idx; ssize_t reply_sz; u16 num_chunks; int buf_size; - vport_config = vport->adapter->vport_config[vport_idx]; - if (vport_config->req_qs_chunks) { - chunks = &vport_config->req_qs_chunks->chunks; - } else { - vport_params = vport->adapter->vport_params_recvd[vport_idx]; - chunks = &vport_params->chunks; - } - - num_chunks = le16_to_cpu(chunks->num_chunks); + num_chunks = chunks->num_chunks; buf_size = struct_size(eq, chunks.chunks, num_chunks); eq = kzalloc(buf_size, GFP_KERNEL); @@ -2078,7 +2092,7 @@ int idpf_send_delete_queues_msg(struct idpf_vport *vport) eq->vport_id = cpu_to_le32(vport->vport_id); eq->chunks.num_chunks = cpu_to_le16(num_chunks); - idpf_convert_reg_to_queue_chunks(eq->chunks.chunks, chunks->chunks, + idpf_convert_reg_to_queue_chunks(eq->chunks.chunks, chunks->queue_chunks, num_chunks); xn_params.vc_op = VIRTCHNL2_OP_DEL_QUEUES; @@ -2135,8 +2149,6 @@ int idpf_send_add_queues_msg(const struct idpf_vport *vport, u16 num_tx_q, return -ENOMEM; vport_config = vport->adapter->vport_config[vport_idx]; - kfree(vport_config->req_qs_chunks); - vport_config->req_qs_chunks = NULL; aq.vport_id = cpu_to_le32(vport->vport_id); aq.num_tx_q = cpu_to_le16(num_tx_q); @@ -2166,11 +2178,7 @@ int idpf_send_add_queues_msg(const struct idpf_vport *vport, u16 num_tx_q, if (reply_sz < size) return -EIO; - vport_config->req_qs_chunks = kmemdup(vc_msg, size, GFP_KERNEL); - if (!vport_config->req_qs_chunks) - return -ENOMEM; - - return 0; + return idpf_vport_init_queue_reg_chunks(vport_config, &vc_msg->chunks); } /** @@ -3160,8 +3168,10 @@ int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport) * @max_q: vport max queue info * * Will initialize vport with the info received through MB earlier + * + * Return: %0 on success, -%errno on failure. */ -void idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q) +int idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q) { struct idpf_adapter *adapter = vport->adapter; struct virtchnl2_create_vport *vport_msg; @@ -3176,6 +3186,11 @@ void idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q) rss_data = &vport_config->user_config.rss_data; vport_msg = adapter->vport_params_recvd[idx]; + err = idpf_vport_init_queue_reg_chunks(vport_config, + &vport_msg->chunks); + if (err) + return err; + vport_config->max_q.max_txq = max_q->max_txq; vport_config->max_q.max_rxq = max_q->max_rxq; vport_config->max_q.max_complq = max_q->max_complq; @@ -3208,15 +3223,17 @@ void idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q) if (!(vport_msg->vport_flags & cpu_to_le16(VIRTCHNL2_VPORT_UPLINK_PORT))) - return; + return 0; err = idpf_ptp_get_vport_tstamps_caps(vport); if (err) { pci_dbg(vport->adapter->pdev, "Tx timestamping not supported\n"); - return; + return err == -EOPNOTSUPP ? 
0 : err; } INIT_WORK(&vport->tstamp_task, idpf_tstamp_task); + + return 0; } /** @@ -3275,21 +3292,21 @@ int idpf_get_vec_ids(struct idpf_adapter *adapter, * Returns number of ids filled */ static int idpf_vport_get_queue_ids(u32 *qids, int num_qids, u16 q_type, - struct virtchnl2_queue_reg_chunks *chunks) + struct idpf_queue_id_reg_info *chunks) { - u16 num_chunks = le16_to_cpu(chunks->num_chunks); + u16 num_chunks = chunks->num_chunks; u32 num_q_id_filled = 0, i; u32 start_q_id, num_q; while (num_chunks--) { - struct virtchnl2_queue_reg_chunk *chunk; + struct idpf_queue_id_reg_chunk *chunk; - chunk = &chunks->chunks[num_chunks]; - if (le32_to_cpu(chunk->type) != q_type) + chunk = &chunks->queue_chunks[num_chunks]; + if (chunk->type != q_type) continue; - num_q = le32_to_cpu(chunk->num_queues); - start_q_id = le32_to_cpu(chunk->start_queue_id); + num_q = chunk->num_queues; + start_q_id = chunk->start_queue_id; for (i = 0; i < num_q; i++) { if ((num_q_id_filled + i) < num_qids) { @@ -3382,30 +3399,18 @@ static int __idpf_vport_queue_ids_init(struct idpf_vport *vport, /** * idpf_vport_queue_ids_init - Initialize queue ids from Mailbox parameters * @vport: virtual port for which the queues ids are initialized + * @chunks: queue ids received over mailbox * * Will initialize all queue ids with ids received as mailbox parameters. * Returns 0 on success, negative if all the queues are not initialized. */ -int idpf_vport_queue_ids_init(struct idpf_vport *vport) +int idpf_vport_queue_ids_init(struct idpf_vport *vport, + struct idpf_queue_id_reg_info *chunks) { - struct virtchnl2_create_vport *vport_params; - struct virtchnl2_queue_reg_chunks *chunks; - struct idpf_vport_config *vport_config; - u16 vport_idx = vport->idx; int num_ids, err = 0; u16 q_type; u32 *qids; - vport_config = vport->adapter->vport_config[vport_idx]; - if (vport_config->req_qs_chunks) { - struct virtchnl2_add_queues *vc_aq = - (struct virtchnl2_add_queues *)vport_config->req_qs_chunks; - chunks = &vc_aq->chunks; - } else { - vport_params = vport->adapter->vport_params_recvd[vport_idx]; - chunks = &vport_params->chunks; - } - qids = kcalloc(IDPF_MAX_QIDS, sizeof(u32), GFP_KERNEL); if (!qids) return -ENOMEM; diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h index 165767705469..6823a3814d2b 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h @@ -102,8 +102,10 @@ void idpf_vc_core_deinit(struct idpf_adapter *adapter); int idpf_get_reg_intr_vecs(struct idpf_vport *vport, struct idpf_vec_regs *reg_vals); -int idpf_queue_reg_init(struct idpf_vport *vport); -int idpf_vport_queue_ids_init(struct idpf_vport *vport); +int idpf_queue_reg_init(struct idpf_vport *vport, + struct idpf_queue_id_reg_info *chunks); +int idpf_vport_queue_ids_init(struct idpf_vport *vport, + struct idpf_queue_id_reg_info *chunks); bool idpf_vport_is_cap_ena(struct idpf_vport *vport, u16 flag); bool idpf_sideband_flow_type_ena(struct idpf_vport *vport, u32 flow_type); @@ -115,7 +117,7 @@ int idpf_recv_mb_msg(struct idpf_adapter *adapter); int idpf_send_mb_msg(struct idpf_adapter *adapter, u32 op, u16 msg_size, u8 *msg, u16 cookie); -void idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q); +int idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q); u32 idpf_get_vport_id(struct idpf_vport *vport); int idpf_send_create_vport_msg(struct idpf_adapter *adapter, struct idpf_vport_max_q *max_q); @@ -130,7 
+132,8 @@ void idpf_vport_dealloc_max_qs(struct idpf_adapter *adapter, struct idpf_vport_max_q *max_q); int idpf_send_add_queues_msg(const struct idpf_vport *vport, u16 num_tx_q, u16 num_complq, u16 num_rx_q, u16 num_rx_bufq); -int idpf_send_delete_queues_msg(struct idpf_vport *vport); +int idpf_send_delete_queues_msg(struct idpf_vport *vport, + struct idpf_queue_id_reg_info *chunks); int idpf_send_enable_queues_msg(struct idpf_vport *vport); int idpf_send_disable_queues_msg(struct idpf_vport *vport); int idpf_send_config_queues_msg(struct idpf_vport *vport);

From patchwork Thu May 8 21:50:06 2025
X-Patchwork-Submitter: Pavan Kumar Linga
X-Patchwork-Id: 2083146
X-Patchwork-Delegate: anthony.l.nguyen@intel.com
From: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, milena.olech@intel.com, anton.nadezhdin@intel.com, Pavan Kumar Linga <pavan.kumar.linga@intel.com>
Date: Thu, 8 May 2025 14:50:06 -0700
Message-ID: <20250508215013.32668-3-pavan.kumar.linga@intel.com>
In-Reply-To: <20250508215013.32668-1-pavan.kumar.linga@intel.com>
References: <20250508215013.32668-1-pavan.kumar.linga@intel.com>
Subject: [Intel-wired-lan] [PATCH iwl-next v4 2/9] idpf: use existing queue chunk info instead of preparing it

Queue chunk info received from the device control plane is stored in the persistent data section. Necessary info from these chunks is parsed and stored in the queue structure. While sending the enable/disable queues virtchnl message, the queue chunk info is prepared from that stored queue info. Instead, use the stored queue chunks directly, which already describe all the queues that need to be enabled or disabled.
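As a rough sketch of the resulting flow (simplified; the real helper, idpf_convert_reg_to_queue_chunks(), is moved earlier in the file by the diff below), the enable/disable message is now filled straight from the stored CPU-endian chunk list rather than rebuilt from per-queue-group state:

	/* illustrative stand-in for idpf_convert_reg_to_queue_chunks():
	 * fill the virtchnl message chunks from the stored chunk list
	 */
	static void fill_msg_chunks(struct virtchnl2_queue_chunk *dst,
				    const struct idpf_queue_id_reg_chunk *src,
				    u16 num_chunks)
	{
		for (u16 i = 0; i < num_chunks; i++) {
			dst[i].type = cpu_to_le32(src[i].type);
			dst[i].start_queue_id = cpu_to_le32(src[i].start_queue_id);
			dst[i].num_queues = cpu_to_le32(src[i].num_queues);
		}
	}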
Reviewed-by: Anton Nadezhdin Signed-off-by: Pavan Kumar Linga Tested-by: Samuel Salin --- drivers/net/ethernet/intel/idpf/idpf_lib.c | 6 +- .../net/ethernet/intel/idpf/idpf_virtchnl.c | 188 +++++------------- .../net/ethernet/intel/idpf/idpf_virtchnl.h | 6 +- 3 files changed, 52 insertions(+), 148 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c index a11097e98517..bc342e79addd 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_lib.c +++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c @@ -849,7 +849,7 @@ static void idpf_vport_stop(struct idpf_vport *vport) chunks = &vport->adapter->vport_config[vport->idx]->qid_reg_info; idpf_send_disable_vport_msg(vport); - idpf_send_disable_queues_msg(vport); + idpf_send_disable_queues_msg(vport, chunks); idpf_send_map_unmap_queue_vector_msg(vport, false); /* Normally we ask for queues in create_vport, but if the number of * initially requested queues have changed, for example via ethtool @@ -1383,7 +1383,7 @@ static int idpf_vport_open(struct idpf_vport *vport) goto intr_deinit; } - err = idpf_send_enable_queues_msg(vport); + err = idpf_send_enable_queues_msg(vport, chunks); if (err) { dev_err(&adapter->pdev->dev, "Failed to enable queues for vport %u: %d\n", vport->vport_id, err); @@ -1424,7 +1424,7 @@ static int idpf_vport_open(struct idpf_vport *vport) disable_vport: idpf_send_disable_vport_msg(vport); disable_queues: - idpf_send_disable_queues_msg(vport); + idpf_send_disable_queues_msg(vport, chunks); unmap_queue_vectors: idpf_send_map_unmap_queue_vector_msg(vport, false); intr_deinit: diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c index 91fc908e5e20..732d99f66842 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c @@ -1007,6 +1007,24 @@ static void idpf_init_avail_queues(struct idpf_adapter *adapter) avail_queues->avail_complq = le16_to_cpu(caps->max_tx_complq); } +/** + * idpf_convert_reg_to_queue_chunks - copy queue chunk information to the right + * structure + * @dchunks: destination chunks to store data to + * @schunks: source chunks to copy data from + * @num_chunks: number of chunks to copy + */ +static void idpf_convert_reg_to_queue_chunks(struct virtchnl2_queue_chunk *dchunks, + struct idpf_queue_id_reg_chunk *schunks, + u16 num_chunks) +{ + for (u16 i = 0; i < num_chunks; i++) { + dchunks[i].type = cpu_to_le32(schunks[i].type); + dchunks[i].start_queue_id = cpu_to_le32(schunks[i].start_queue_id); + dchunks[i].num_queues = cpu_to_le32(schunks[i].num_queues); + } +} + /** * idpf_vport_init_queue_reg_chunks - initialize queue register chunks * @vport_config: persistent vport structure to store the queue register info @@ -1734,116 +1752,20 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport) * idpf_send_ena_dis_queues_msg - Send virtchnl enable or disable * queues message * @vport: virtual port data structure + * @chunks: queue register info * @ena: if true enable, false disable * * Send enable or disable queues virtchnl message. Returns 0 on success, * negative on failure. 
*/ -static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, bool ena) +static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, + struct idpf_queue_id_reg_info *chunks, + bool ena) { struct virtchnl2_del_ena_dis_queues *eq __free(kfree) = NULL; - struct virtchnl2_queue_chunk *qc __free(kfree) = NULL; - u32 num_msgs, num_chunks, num_txq, num_rxq, num_q; struct idpf_vc_xn_params xn_params = {}; - struct virtchnl2_queue_chunks *qcs; - u32 config_sz, chunk_sz, buf_sz; + u32 num_chunks, buf_sz; ssize_t reply_sz; - int i, j, k = 0; - - num_txq = vport->num_txq + vport->num_complq; - num_rxq = vport->num_rxq + vport->num_bufq; - num_q = num_txq + num_rxq; - buf_sz = sizeof(struct virtchnl2_queue_chunk) * num_q; - qc = kzalloc(buf_sz, GFP_KERNEL); - if (!qc) - return -ENOMEM; - - for (i = 0; i < vport->num_txq_grp; i++) { - struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i]; - - for (j = 0; j < tx_qgrp->num_txq; j++, k++) { - qc[k].type = cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_TX); - qc[k].start_queue_id = cpu_to_le32(tx_qgrp->txqs[j]->q_id); - qc[k].num_queues = cpu_to_le32(IDPF_NUMQ_PER_CHUNK); - } - } - if (vport->num_txq != k) - return -EINVAL; - - if (!idpf_is_queue_model_split(vport->txq_model)) - goto setup_rx; - - for (i = 0; i < vport->num_txq_grp; i++, k++) { - struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i]; - - qc[k].type = cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION); - qc[k].start_queue_id = cpu_to_le32(tx_qgrp->complq->q_id); - qc[k].num_queues = cpu_to_le32(IDPF_NUMQ_PER_CHUNK); - } - if (vport->num_complq != (k - vport->num_txq)) - return -EINVAL; - -setup_rx: - for (i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i]; - - if (idpf_is_queue_model_split(vport->rxq_model)) - num_rxq = rx_qgrp->splitq.num_rxq_sets; - else - num_rxq = rx_qgrp->singleq.num_rxq; - - for (j = 0; j < num_rxq; j++, k++) { - if (idpf_is_queue_model_split(vport->rxq_model)) { - qc[k].start_queue_id = - cpu_to_le32(rx_qgrp->splitq.rxq_sets[j]->rxq.q_id); - qc[k].type = - cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_RX); - } else { - qc[k].start_queue_id = - cpu_to_le32(rx_qgrp->singleq.rxqs[j]->q_id); - qc[k].type = - cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_RX); - } - qc[k].num_queues = cpu_to_le32(IDPF_NUMQ_PER_CHUNK); - } - } - if (vport->num_rxq != k - (vport->num_txq + vport->num_complq)) - return -EINVAL; - - if (!idpf_is_queue_model_split(vport->rxq_model)) - goto send_msg; - - for (i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i]; - - for (j = 0; j < vport->num_bufqs_per_qgrp; j++, k++) { - const struct idpf_buf_queue *q; - - q = &rx_qgrp->splitq.bufq_sets[j].bufq; - qc[k].type = - cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_RX_BUFFER); - qc[k].start_queue_id = cpu_to_le32(q->q_id); - qc[k].num_queues = cpu_to_le32(IDPF_NUMQ_PER_CHUNK); - } - } - if (vport->num_bufq != k - (vport->num_txq + - vport->num_complq + - vport->num_rxq)) - return -EINVAL; - -send_msg: - /* Chunk up the queue info into multiple messages */ - config_sz = sizeof(struct virtchnl2_del_ena_dis_queues); - chunk_sz = sizeof(struct virtchnl2_queue_chunk); - - num_chunks = min_t(u32, IDPF_NUM_CHUNKS_PER_MSG(config_sz, chunk_sz), - num_q); - num_msgs = DIV_ROUND_UP(num_q, num_chunks); - - buf_sz = struct_size(eq, chunks.chunks, num_chunks); - eq = kzalloc(buf_sz, GFP_KERNEL); - if (!eq) - return -ENOMEM; if (ena) { xn_params.vc_op = VIRTCHNL2_OP_ENABLE_QUEUES; @@ -1853,27 +1775,23 @@ static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, bool ena) 
xn_params.timeout_ms = IDPF_VC_XN_MIN_TIMEOUT_MSEC; } - for (i = 0, k = 0; i < num_msgs; i++) { - memset(eq, 0, buf_sz); - eq->vport_id = cpu_to_le32(vport->vport_id); - eq->chunks.num_chunks = cpu_to_le16(num_chunks); - qcs = &eq->chunks; - memcpy(qcs->chunks, &qc[k], chunk_sz * num_chunks); + num_chunks = chunks->num_chunks; + buf_sz = struct_size(eq, chunks.chunks, num_chunks); + eq = kzalloc(buf_sz, GFP_KERNEL); + if (!eq) + return -ENOMEM; - xn_params.send_buf.iov_base = eq; - xn_params.send_buf.iov_len = buf_sz; - reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params); - if (reply_sz < 0) - return reply_sz; + eq->vport_id = cpu_to_le32(vport->vport_id); + eq->chunks.num_chunks = cpu_to_le16(num_chunks); - k += num_chunks; - num_q -= num_chunks; - num_chunks = min(num_chunks, num_q); - /* Recalculate buffer size */ - buf_sz = struct_size(eq, chunks.chunks, num_chunks); - } + idpf_convert_reg_to_queue_chunks(eq->chunks.chunks, chunks->queue_chunks, + num_chunks); - return 0; + xn_params.send_buf.iov_base = eq; + xn_params.send_buf.iov_len = buf_sz; + reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params); + + return reply_sz < 0 ? reply_sz : 0; } /** @@ -2006,27 +1924,31 @@ int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map) /** * idpf_send_enable_queues_msg - send enable queues virtchnl message * @vport: Virtual port private data structure + * @chunks: queue ids received over mailbox * * Will send enable queues virtchnl message. Returns 0 on success, negative on * failure. */ -int idpf_send_enable_queues_msg(struct idpf_vport *vport) +int idpf_send_enable_queues_msg(struct idpf_vport *vport, + struct idpf_queue_id_reg_info *chunks) { - return idpf_send_ena_dis_queues_msg(vport, true); + return idpf_send_ena_dis_queues_msg(vport, chunks, true); } /** * idpf_send_disable_queues_msg - send disable queues virtchnl message * @vport: Virtual port private data structure + * @chunks: queue ids received over mailbox * * Will send disable queues virtchnl message. Returns 0 on success, negative * on failure. 
*/ -int idpf_send_disable_queues_msg(struct idpf_vport *vport) +int idpf_send_disable_queues_msg(struct idpf_vport *vport, + struct idpf_queue_id_reg_info *chunks) { int err, i; - err = idpf_send_ena_dis_queues_msg(vport, false); + err = idpf_send_ena_dis_queues_msg(vport, chunks, false); if (err) return err; @@ -2045,26 +1967,6 @@ int idpf_send_disable_queues_msg(struct idpf_vport *vport) return idpf_wait_for_marker_event(vport); } -/** - * idpf_convert_reg_to_queue_chunks - Copy queue chunk information to the right - * structure - * @dchunks: Destination chunks to store data to - * @schunks: Source chunks to copy data from - * @num_chunks: number of chunks to copy - */ -static void idpf_convert_reg_to_queue_chunks(struct virtchnl2_queue_chunk *dchunks, - struct idpf_queue_id_reg_chunk *schunks, - u16 num_chunks) -{ - u16 i; - - for (i = 0; i < num_chunks; i++) { - dchunks[i].type = cpu_to_le32(schunks[i].type); - dchunks[i].start_queue_id = cpu_to_le32(schunks[i].start_queue_id); - dchunks[i].num_queues = cpu_to_le32(schunks[i].num_queues); - } -} - /** * idpf_send_delete_queues_msg - send delete queues virtchnl message * @vport: virtual port private data structure diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h index 6823a3814d2b..2251560426df 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h @@ -134,8 +134,10 @@ int idpf_send_add_queues_msg(const struct idpf_vport *vport, u16 num_tx_q, u16 num_complq, u16 num_rx_q, u16 num_rx_bufq); int idpf_send_delete_queues_msg(struct idpf_vport *vport, struct idpf_queue_id_reg_info *chunks); -int idpf_send_enable_queues_msg(struct idpf_vport *vport); -int idpf_send_disable_queues_msg(struct idpf_vport *vport); +int idpf_send_enable_queues_msg(struct idpf_vport *vport, + struct idpf_queue_id_reg_info *chunks); +int idpf_send_disable_queues_msg(struct idpf_vport *vport, + struct idpf_queue_id_reg_info *chunks); int idpf_send_config_queues_msg(struct idpf_vport *vport); int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport);

From patchwork Thu May 8 21:50:07 2025
X-Patchwork-Submitter: Pavan Kumar Linga
X-Patchwork-Id: 2083151
X-Patchwork-Delegate: anthony.l.nguyen@intel.com
From: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, milena.olech@intel.com, anton.nadezhdin@intel.com, Pavan Kumar Linga <pavan.kumar.linga@intel.com>
Date: Thu, 8 May 2025 14:50:07 -0700
Message-ID: <20250508215013.32668-4-pavan.kumar.linga@intel.com>
In-Reply-To: <20250508215013.32668-1-pavan.kumar.linga@intel.com>
References: <20250508215013.32668-1-pavan.kumar.linga@intel.com>
Subject: [Intel-wired-lan] [PATCH iwl-next v4 3/9] idpf: introduce idpf_q_vec_rsrc struct and move vector resources to it

To group all the vector and queue resources, introduce the idpf_q_vec_rsrc structure. This lets other features reuse the same config path functions; for example, the PTP implementation can use the existing config infrastructure to configure a secondary mailbox by passing its queue and vector info. It also avoids code duplication. Existing queue and vector resources are grouped as the default resources. This patch moves the vector info to the newly introduced structure; the following patch moves the queue resources. While at it, declare the iterators of the 'num_q_vectors' loops inside the loop statements and use the correct type.

Reviewed-by: Anton Nadezhdin Signed-off-by: Pavan Kumar Linga Tested-by: Samuel Salin --- drivers/net/ethernet/intel/idpf/idpf.h | 24 ++- drivers/net/ethernet/intel/idpf/idpf_dev.c | 10 +- drivers/net/ethernet/intel/idpf/idpf_lib.c | 40 ++-- drivers/net/ethernet/intel/idpf/idpf_txrx.c | 187 +++++++++--------- drivers/net/ethernet/intel/idpf/idpf_txrx.h | 14 +- drivers/net/ethernet/intel/idpf/idpf_vf_dev.c | 10 +- .../net/ethernet/intel/idpf/idpf_virtchnl.c | 27 +-- .../net/ethernet/intel/idpf/idpf_virtchnl.h | 4 +- 8 files changed, 177 insertions(+), 139 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h index 1f7f56a4773a..1ff0ef78a07f 100644 --- a/drivers/net/ethernet/intel/idpf/idpf.h +++ b/drivers/net/ethernet/intel/idpf/idpf.h @@ -8,6 +8,7 @@ struct idpf_adapter; struct idpf_vport; struct idpf_vport_max_q; +struct idpf_q_vec_rsrc; #include #include @@ -195,7 +196,8 @@ struct idpf_vport_max_q { */ struct idpf_reg_ops { void (*ctlq_reg_init)(struct idpf_ctlq_create_info *cq); - int (*intr_reg_init)(struct idpf_vport *vport); + int (*intr_reg_init)(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc); void (*mb_intr_reg_init)(struct idpf_adapter *adapter); void (*reset_reg_init)(struct idpf_adapter *adapter); void (*trigger_reset)(struct idpf_adapter *adapter, @@ -271,8 +273,21 @@ struct idpf_tx_tstamp_stats { u32 tx_hwtstamp_flushed; }; +/** + * struct idpf_q_vec_rsrc - handle for queue and vector resources + * @q_vectors: array of queue vectors + * @q_vector_idxs: starting index of queue vectors + * @num_q_vectors: number of IRQ vectors allocated + */ +struct idpf_q_vec_rsrc { + struct idpf_q_vector *q_vectors; + u16 *q_vector_idxs; + u16 num_q_vectors; +}; + /** * struct idpf_vport - 
Handle for netdevices and queue resources + * @dflt_qv_rsrc: contains default queue and vector resources * @num_txq: Number of allocated TX queues * @num_complq: Number of allocated completion queues * @txq_desc_count: TX queue descriptor count @@ -304,9 +319,6 @@ struct idpf_tx_tstamp_stats { * @idx: Software index in adapter vports struct * @default_vport: Use this vport if one isn't specified * @base_rxd: True if the driver should use base descriptors instead of flex - * @num_q_vectors: Number of IRQ vectors allocated - * @q_vectors: Array of queue vectors - * @q_vector_idxs: Starting index of queue vectors * @max_mtu: device given max possible MTU * @default_mac_addr: device will give a default MAC to use * @rx_itr_profile: RX profiles for Dynamic Interrupt Moderation @@ -320,6 +332,7 @@ struct idpf_tx_tstamp_stats { * @tstamp_stats: Tx timestamping statistics */ struct idpf_vport { + struct idpf_q_vec_rsrc dflt_qv_rsrc; u16 num_txq; u16 num_complq; u32 txq_desc_count; @@ -350,9 +363,6 @@ struct idpf_vport { bool default_vport; bool base_rxd; - u16 num_q_vectors; - struct idpf_q_vector *q_vectors; - u16 *q_vector_idxs; u16 max_mtu; u8 default_mac_addr[ETH_ALEN]; u16 rx_itr_profile[IDPF_DIM_PROFILE_SLOTS]; diff --git a/drivers/net/ethernet/intel/idpf/idpf_dev.c b/drivers/net/ethernet/intel/idpf/idpf_dev.c index 3fae81f1f988..3d358030b809 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_dev.c +++ b/drivers/net/ethernet/intel/idpf/idpf_dev.c @@ -67,11 +67,13 @@ static void idpf_mb_intr_reg_init(struct idpf_adapter *adapter) /** * idpf_intr_reg_init - Initialize interrupt registers * @vport: virtual port structure + * @rsrc: pointer to queue and vector resources */ -static int idpf_intr_reg_init(struct idpf_vport *vport) +static int idpf_intr_reg_init(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { struct idpf_adapter *adapter = vport->adapter; - int num_vecs = vport->num_q_vectors; + u16 num_vecs = rsrc->num_q_vectors; struct idpf_vec_regs *reg_vals; int num_regs, i, err = 0; u32 rx_itr, tx_itr; @@ -90,8 +92,8 @@ static int idpf_intr_reg_init(struct idpf_vport *vport) } for (i = 0; i < num_vecs; i++) { - struct idpf_q_vector *q_vector = &vport->q_vectors[i]; - u16 vec_id = vport->q_vector_idxs[i] - IDPF_MBX_Q_VEC; + struct idpf_q_vector *q_vector = &rsrc->q_vectors[i]; + u16 vec_id = rsrc->q_vector_idxs[i] - IDPF_MBX_Q_VEC; struct idpf_intr_reg *intr = &q_vector->intr_reg; u32 spacing; diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c index bc342e79addd..c79a5fbe7138 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_lib.c +++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c @@ -838,6 +838,7 @@ static void idpf_remove_features(struct idpf_vport *vport) static void idpf_vport_stop(struct idpf_vport *vport) { struct idpf_netdev_priv *np = netdev_priv(vport->netdev); + struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc; struct idpf_queue_id_reg_info *chunks; if (np->state <= __IDPF_VPORT_DOWN) @@ -849,7 +850,7 @@ static void idpf_vport_stop(struct idpf_vport *vport) chunks = &vport->adapter->vport_config[vport->idx]->qid_reg_info; idpf_send_disable_vport_msg(vport); - idpf_send_disable_queues_msg(vport, chunks); + idpf_send_disable_queues_msg(vport, rsrc, chunks); idpf_send_map_unmap_queue_vector_msg(vport, false); /* Normally we ask for queues in create_vport, but if the number of * initially requested queues have changed, for example via ethtool @@ -862,9 +863,9 @@ static void idpf_vport_stop(struct idpf_vport *vport) 
idpf_remove_features(vport); vport->link_up = false; - idpf_vport_intr_deinit(vport); + idpf_vport_intr_deinit(vport, rsrc); idpf_vport_queues_rel(vport); - idpf_vport_intr_rel(vport); + idpf_vport_intr_rel(rsrc); np->state = __IDPF_VPORT_DOWN; } @@ -924,6 +925,7 @@ static void idpf_decfg_netdev(struct idpf_vport *vport) */ static void idpf_vport_rel(struct idpf_vport *vport) { + struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc; struct idpf_adapter *adapter = vport->adapter; struct idpf_vport_config *vport_config; struct idpf_vector_info vec_info; @@ -948,13 +950,13 @@ static void idpf_vport_rel(struct idpf_vport *vport) /* Release all the allocated vectors on the stack */ vec_info.num_req_vecs = 0; - vec_info.num_curr_vecs = vport->num_q_vectors; + vec_info.num_curr_vecs = rsrc->num_q_vectors; vec_info.default_vport = vport->default_vport; - idpf_req_rel_vector_indexes(adapter, vport->q_vector_idxs, &vec_info); + idpf_req_rel_vector_indexes(adapter, rsrc->q_vector_idxs, &vec_info); - kfree(vport->q_vector_idxs); - vport->q_vector_idxs = NULL; + kfree(rsrc->q_vector_idxs); + rsrc->q_vector_idxs = NULL; kfree(vport_config->qid_reg_info.queue_chunks); vport_config->qid_reg_info.queue_chunks = NULL; @@ -1075,6 +1077,7 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter, { struct idpf_rss_data *rss_data; u16 idx = adapter->next_vport; + struct idpf_q_vec_rsrc *rsrc; struct idpf_vport *vport; u16 num_max_q; int err; @@ -1106,8 +1109,10 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter, idpf_get_default_vports(adapter); num_max_q = max(max_q->max_txq, max_q->max_rxq); - vport->q_vector_idxs = kcalloc(num_max_q, sizeof(u16), GFP_KERNEL); - if (!vport->q_vector_idxs) + + rsrc = &vport->dflt_qv_rsrc; + rsrc->q_vector_idxs = kcalloc(num_max_q, sizeof(u16), GFP_KERNEL); + if (!rsrc->q_vector_idxs) goto free_vport; err = idpf_vport_init(vport, max_q); @@ -1140,7 +1145,7 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter, free_qreg_chunks: kfree(adapter->vport_config[idx]->qid_reg_info.queue_chunks); free_vector_idxs: - kfree(vport->q_vector_idxs); + kfree(rsrc->q_vector_idxs); free_vport: kfree(vport); @@ -1313,6 +1318,7 @@ static void idpf_rx_init_buf_tail(struct idpf_vport *vport) static int idpf_vport_open(struct idpf_vport *vport) { struct idpf_netdev_priv *np = netdev_priv(vport->netdev); + struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc; struct idpf_adapter *adapter = vport->adapter; struct idpf_vport_config *vport_config; struct idpf_queue_id_reg_info *chunks; @@ -1324,7 +1330,7 @@ static int idpf_vport_open(struct idpf_vport *vport) /* we do not allow interface up just yet */ netif_carrier_off(vport->netdev); - err = idpf_vport_intr_alloc(vport); + err = idpf_vport_intr_alloc(vport, rsrc); if (err) { dev_err(&adapter->pdev->dev, "Failed to allocate interrupts for vport %u: %d\n", vport->vport_id, err); @@ -1345,7 +1351,7 @@ static int idpf_vport_open(struct idpf_vport *vport) goto queues_rel; } - err = idpf_vport_intr_init(vport); + err = idpf_vport_intr_init(vport, rsrc); if (err) { dev_err(&adapter->pdev->dev, "Failed to initialize interrupts for vport %u: %d\n", vport->vport_id, err); @@ -1367,7 +1373,7 @@ static int idpf_vport_open(struct idpf_vport *vport) } idpf_rx_init_buf_tail(vport); - idpf_vport_intr_ena(vport); + idpf_vport_intr_ena(vport, rsrc); err = idpf_send_config_queues_msg(vport); if (err) { @@ -1424,15 +1430,15 @@ static int idpf_vport_open(struct idpf_vport *vport) disable_vport: 
idpf_send_disable_vport_msg(vport); disable_queues: - idpf_send_disable_queues_msg(vport, chunks); + idpf_send_disable_queues_msg(vport, rsrc, chunks); unmap_queue_vectors: idpf_send_map_unmap_queue_vector_msg(vport, false); intr_deinit: - idpf_vport_intr_deinit(vport); + idpf_vport_intr_deinit(vport, rsrc); queues_rel: idpf_vport_queues_rel(vport); intr_rel: - idpf_vport_intr_rel(vport); + idpf_vport_intr_rel(rsrc); return err; } @@ -1913,7 +1919,7 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport, memcpy(vport, new_vport, offsetof(struct idpf_vport, link_up)); if (reset_cause == IDPF_SR_Q_CHANGE) - idpf_vport_alloc_vec_indexes(vport); + idpf_vport_alloc_vec_indexes(vport, &vport->dflt_qv_rsrc); err = idpf_set_real_num_queues(vport); if (err) diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c index 7cd7bb0ff365..7e40dd39f81c 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -3659,39 +3659,34 @@ static irqreturn_t idpf_vport_intr_clean_queues(int __always_unused irq, /** * idpf_vport_intr_napi_del_all - Unregister napi for all q_vectors in vport - * @vport: virtual port structure - * + * @rsrc: pointer to queue and vector resources */ -static void idpf_vport_intr_napi_del_all(struct idpf_vport *vport) +static void idpf_vport_intr_napi_del_all(struct idpf_q_vec_rsrc *rsrc) { - u16 v_idx; - - for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) - netif_napi_del(&vport->q_vectors[v_idx].napi); + for (u16 v_idx = 0; v_idx < rsrc->num_q_vectors; v_idx++) + netif_napi_del(&rsrc->q_vectors[v_idx].napi); } /** * idpf_vport_intr_napi_dis_all - Disable NAPI for all q_vectors in the vport - * @vport: main vport structure + * @rsrc: pointer to queue and vector resources */ -static void idpf_vport_intr_napi_dis_all(struct idpf_vport *vport) +static void idpf_vport_intr_napi_dis_all(struct idpf_q_vec_rsrc *rsrc) { - int v_idx; - - for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) - napi_disable(&vport->q_vectors[v_idx].napi); + for (u16 v_idx = 0; v_idx < rsrc->num_q_vectors; v_idx++) + napi_disable(&rsrc->q_vectors[v_idx].napi); } /** * idpf_vport_intr_rel - Free memory allocated for interrupt vectors - * @vport: virtual port + * @rsrc: pointer to queue and vector resources * * Free the memory allocated for interrupt vectors associated to a vport */ -void idpf_vport_intr_rel(struct idpf_vport *vport) +void idpf_vport_intr_rel(struct idpf_q_vec_rsrc *rsrc) { - for (u32 v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) { - struct idpf_q_vector *q_vector = &vport->q_vectors[v_idx]; + for (u16 v_idx = 0; v_idx < rsrc->num_q_vectors; v_idx++) { + struct idpf_q_vector *q_vector = &rsrc->q_vectors[v_idx]; kfree(q_vector->complq); q_vector->complq = NULL; @@ -3703,28 +3698,29 @@ void idpf_vport_intr_rel(struct idpf_vport *vport) q_vector->rx = NULL; } - kfree(vport->q_vectors); - vport->q_vectors = NULL; + kfree(rsrc->q_vectors); + rsrc->q_vectors = NULL; } /** * idpf_vport_intr_rel_irq - Free the IRQ association with the OS * @vport: main vport structure + * @rsrc: pointer to queue and vector resources */ -static void idpf_vport_intr_rel_irq(struct idpf_vport *vport) +static void idpf_vport_intr_rel_irq(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { struct idpf_adapter *adapter = vport->adapter; - int vector; - for (vector = 0; vector < vport->num_q_vectors; vector++) { - struct idpf_q_vector *q_vector = &vport->q_vectors[vector]; + for (u16 vector = 0; vector < 
rsrc->num_q_vectors; vector++) { + struct idpf_q_vector *q_vector = &rsrc->q_vectors[vector]; int irq_num, vidx; /* free only the irqs that were actually requested */ if (!q_vector) continue; - vidx = vport->q_vector_idxs[vector]; + vidx = rsrc->q_vector_idxs[vector]; irq_num = adapter->msix_entries[vidx].vector; kfree(free_irq(irq_num, q_vector)); @@ -3733,14 +3729,13 @@ static void idpf_vport_intr_rel_irq(struct idpf_vport *vport) /** * idpf_vport_intr_dis_irq_all - Disable all interrupt - * @vport: main vport structure + * @rsrc: pointer to queue and vector resources */ -static void idpf_vport_intr_dis_irq_all(struct idpf_vport *vport) +static void idpf_vport_intr_dis_irq_all(struct idpf_q_vec_rsrc *rsrc) { - struct idpf_q_vector *q_vector = vport->q_vectors; - int q_idx; + struct idpf_q_vector *q_vector = rsrc->q_vectors; - for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++) + for (u16 q_idx = 0; q_idx < rsrc->num_q_vectors; q_idx++) writel(0, q_vector[q_idx].intr_reg.dyn_ctl); } @@ -3878,8 +3873,10 @@ void idpf_vport_intr_update_itr_ena_irq(struct idpf_q_vector *q_vector) /** * idpf_vport_intr_req_irq - get MSI-X vectors from the OS for the vport * @vport: main vport structure + * @rsrc: pointer to queue and vector resources */ -static int idpf_vport_intr_req_irq(struct idpf_vport *vport) +static int idpf_vport_intr_req_irq(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { struct idpf_adapter *adapter = vport->adapter; const char *drv_name, *if_name, *vec_name; @@ -3888,11 +3885,11 @@ static int idpf_vport_intr_req_irq(struct idpf_vport *vport) drv_name = dev_driver_string(&adapter->pdev->dev); if_name = netdev_name(vport->netdev); - for (vector = 0; vector < vport->num_q_vectors; vector++) { - struct idpf_q_vector *q_vector = &vport->q_vectors[vector]; + for (vector = 0; vector < rsrc->num_q_vectors; vector++) { + struct idpf_q_vector *q_vector = &rsrc->q_vectors[vector]; char *name; - vidx = vport->q_vector_idxs[vector]; + vidx = rsrc->q_vector_idxs[vector]; irq_num = adapter->msix_entries[vidx].vector; if (q_vector->num_rxq && q_vector->num_txq) @@ -3920,9 +3917,9 @@ static int idpf_vport_intr_req_irq(struct idpf_vport *vport) free_q_irqs: while (--vector >= 0) { - vidx = vport->q_vector_idxs[vector]; + vidx = rsrc->q_vector_idxs[vector]; irq_num = adapter->msix_entries[vidx].vector; - kfree(free_irq(irq_num, &vport->q_vectors[vector])); + kfree(free_irq(irq_num, &rsrc->q_vectors[vector])); } return err; @@ -3951,15 +3948,16 @@ void idpf_vport_intr_write_itr(struct idpf_q_vector *q_vector, u16 itr, bool tx) /** * idpf_vport_intr_ena_irq_all - Enable IRQ for the given vport * @vport: main vport structure + * @rsrc: pointer to queue and vector resources */ -static void idpf_vport_intr_ena_irq_all(struct idpf_vport *vport) +static void idpf_vport_intr_ena_irq_all(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { bool dynamic; - int q_idx; u16 itr; - for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++) { - struct idpf_q_vector *qv = &vport->q_vectors[q_idx]; + for (u16 q_idx = 0; q_idx < rsrc->num_q_vectors; q_idx++) { + struct idpf_q_vector *qv = &rsrc->q_vectors[q_idx]; /* Set the initial ITR values */ if (qv->num_txq) { @@ -3986,13 +3984,15 @@ static void idpf_vport_intr_ena_irq_all(struct idpf_vport *vport) /** * idpf_vport_intr_deinit - Release all vector associations for the vport * @vport: main vport structure + * @rsrc: pointer to queue and vector resources */ -void idpf_vport_intr_deinit(struct idpf_vport *vport) +void idpf_vport_intr_deinit(struct 
idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { - idpf_vport_intr_dis_irq_all(vport); - idpf_vport_intr_napi_dis_all(vport); - idpf_vport_intr_napi_del_all(vport); - idpf_vport_intr_rel_irq(vport); + idpf_vport_intr_dis_irq_all(rsrc); + idpf_vport_intr_napi_dis_all(rsrc); + idpf_vport_intr_napi_del_all(rsrc); + idpf_vport_intr_rel_irq(vport, rsrc); } /** @@ -4064,14 +4064,12 @@ static void idpf_init_dim(struct idpf_q_vector *qv) /** * idpf_vport_intr_napi_ena_all - Enable NAPI for all q_vectors in the vport - * @vport: main vport structure + * @rsrc: pointer to queue and vector resources */ -static void idpf_vport_intr_napi_ena_all(struct idpf_vport *vport) +static void idpf_vport_intr_napi_ena_all(struct idpf_q_vec_rsrc *rsrc) { - int q_idx; - - for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++) { - struct idpf_q_vector *q_vector = &vport->q_vectors[q_idx]; + for (u16 q_idx = 0; q_idx < rsrc->num_q_vectors; q_idx++) { + struct idpf_q_vector *q_vector = &rsrc->q_vectors[q_idx]; idpf_init_dim(q_vector); napi_enable(&q_vector->napi); @@ -4198,10 +4196,12 @@ static int idpf_vport_splitq_napi_poll(struct napi_struct *napi, int budget) /** * idpf_vport_intr_map_vector_to_qs - Map vectors to queues * @vport: virtual port + * @rsrc: pointer to queue and vector resources * * Mapping for vectors to queues */ -static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport) +static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { bool split = idpf_is_queue_model_split(vport->rxq_model); u16 num_txq_grp = vport->num_txq_grp; @@ -4212,7 +4212,7 @@ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport) for (i = 0, qv_idx = 0; i < vport->num_rxq_grp; i++) { u16 num_rxq; - if (qv_idx >= vport->num_q_vectors) + if (qv_idx >= rsrc->num_q_vectors) qv_idx = 0; rx_qgrp = &vport->rxq_grps[i]; @@ -4228,7 +4228,7 @@ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport) q = &rx_qgrp->splitq.rxq_sets[j]->rxq; else q = rx_qgrp->singleq.rxqs[j]; - q->q_vector = &vport->q_vectors[qv_idx]; + q->q_vector = &rsrc->q_vectors[qv_idx]; q_index = q->q_vector->num_rxq; q->q_vector->rx[q_index] = q; q->q_vector->num_rxq++; @@ -4242,7 +4242,7 @@ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport) struct idpf_buf_queue *bufq; bufq = &rx_qgrp->splitq.bufq_sets[j].bufq; - bufq->q_vector = &vport->q_vectors[qv_idx]; + bufq->q_vector = &rsrc->q_vectors[qv_idx]; q_index = bufq->q_vector->num_bufq; bufq->q_vector->bufq[q_index] = bufq; bufq->q_vector->num_bufq++; @@ -4257,7 +4257,7 @@ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport) for (i = 0, qv_idx = 0; i < num_txq_grp; i++) { u16 num_txq; - if (qv_idx >= vport->num_q_vectors) + if (qv_idx >= rsrc->num_q_vectors) qv_idx = 0; tx_qgrp = &vport->txq_grps[i]; @@ -4267,14 +4267,14 @@ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport) struct idpf_tx_queue *q; q = tx_qgrp->txqs[j]; - q->q_vector = &vport->q_vectors[qv_idx]; + q->q_vector = &rsrc->q_vectors[qv_idx]; q->q_vector->tx[q->q_vector->num_txq++] = q; } if (split) { struct idpf_compl_queue *q = tx_qgrp->complq; - q->q_vector = &vport->q_vectors[qv_idx]; + q->q_vector = &rsrc->q_vectors[qv_idx]; q->q_vector->complq[q->q_vector->num_complq++] = q; } @@ -4285,20 +4285,21 @@ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport) /** * idpf_vport_intr_init_vec_idx - Initialize the vector indexes * @vport: virtual port + * @rsrc: pointer to queue and vector 
resources * * Initialize vector indexes with values returned over mailbox */ -static int idpf_vport_intr_init_vec_idx(struct idpf_vport *vport) +static int idpf_vport_intr_init_vec_idx(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { struct idpf_adapter *adapter = vport->adapter; struct virtchnl2_alloc_vectors *ac; u16 *vecids, total_vecs; - int i; ac = adapter->req_vec_chunks; if (!ac) { - for (i = 0; i < vport->num_q_vectors; i++) - vport->q_vectors[i].v_idx = vport->q_vector_idxs[i]; + for (u16 i = 0; i < rsrc->num_q_vectors; i++) + rsrc->q_vectors[i].v_idx = rsrc->q_vector_idxs[i]; return 0; } @@ -4310,8 +4311,8 @@ static int idpf_vport_intr_init_vec_idx(struct idpf_vport *vport) idpf_get_vec_ids(adapter, vecids, total_vecs, &ac->vchunks); - for (i = 0; i < vport->num_q_vectors; i++) - vport->q_vectors[i].v_idx = vecids[vport->q_vector_idxs[i]]; + for (u16 i = 0; i < rsrc->num_q_vectors; i++) + rsrc->q_vectors[i].v_idx = vecids[rsrc->q_vector_idxs[i]]; kfree(vecids); @@ -4321,21 +4322,24 @@ static int idpf_vport_intr_init_vec_idx(struct idpf_vport *vport) /** * idpf_vport_intr_napi_add_all- Register napi handler for all qvectors * @vport: virtual port structure + * @rsrc: pointer to queue and vector resources */ -static void idpf_vport_intr_napi_add_all(struct idpf_vport *vport) +static void idpf_vport_intr_napi_add_all(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { int (*napi_poll)(struct napi_struct *napi, int budget); - u16 v_idx, qv_idx; int irq_num; + u16 qv_idx; if (idpf_is_queue_model_split(vport->txq_model)) napi_poll = idpf_vport_splitq_napi_poll; else napi_poll = idpf_vport_singleq_napi_poll; - for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) { - struct idpf_q_vector *q_vector = &vport->q_vectors[v_idx]; - qv_idx = vport->q_vector_idxs[v_idx]; + for (u16 v_idx = 0; v_idx < rsrc->num_q_vectors; v_idx++) { + struct idpf_q_vector *q_vector = &rsrc->q_vectors[v_idx]; + + qv_idx = rsrc->q_vector_idxs[v_idx]; irq_num = vport->adapter->msix_entries[qv_idx].vector; netif_napi_add_config(vport->netdev, &q_vector->napi, @@ -4347,33 +4351,35 @@ /** * idpf_vport_intr_alloc - Allocate memory for interrupt vectors * @vport: virtual port + * @rsrc: pointer to queue and vector resources * * We allocate one q_vector per queue interrupt. If allocation fails we * return -ENOMEM. 
*/ -int idpf_vport_intr_alloc(struct idpf_vport *vport) +int idpf_vport_intr_alloc(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { u16 txqs_per_vector, rxqs_per_vector, bufqs_per_vector; struct idpf_q_vector *q_vector; - u32 complqs_per_vector, v_idx; + u32 complqs_per_vector; - vport->q_vectors = kcalloc(vport->num_q_vectors, - sizeof(struct idpf_q_vector), GFP_KERNEL); - if (!vport->q_vectors) + rsrc->q_vectors = kcalloc(rsrc->num_q_vectors, + sizeof(struct idpf_q_vector), GFP_KERNEL); + if (!rsrc->q_vectors) return -ENOMEM; txqs_per_vector = DIV_ROUND_UP(vport->num_txq_grp, - vport->num_q_vectors); + rsrc->num_q_vectors); rxqs_per_vector = DIV_ROUND_UP(vport->num_rxq_grp, - vport->num_q_vectors); + rsrc->num_q_vectors); bufqs_per_vector = vport->num_bufqs_per_qgrp * DIV_ROUND_UP(vport->num_rxq_grp, - vport->num_q_vectors); + rsrc->num_q_vectors); complqs_per_vector = DIV_ROUND_UP(vport->num_txq_grp, - vport->num_q_vectors); + rsrc->num_q_vectors); - for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) { - q_vector = &vport->q_vectors[v_idx]; + for (u16 v_idx = 0; v_idx < rsrc->num_q_vectors; v_idx++) { + q_vector = &rsrc->q_vectors[v_idx]; q_vector->vport = vport; q_vector->tx_itr_value = IDPF_ITR_TX_DEF; @@ -4413,7 +4419,7 @@ int idpf_vport_intr_alloc(struct idpf_vport *vport) return 0; error: - idpf_vport_intr_rel(vport); + idpf_vport_intr_rel(rsrc); return -ENOMEM; } @@ -4421,40 +4427,41 @@ int idpf_vport_intr_alloc(struct idpf_vport *vport) /** * idpf_vport_intr_init - Setup all vectors for the given vport * @vport: virtual port + * @rsrc: pointer to queue and vector resources * * Returns 0 on success or negative on failure */ -int idpf_vport_intr_init(struct idpf_vport *vport) +int idpf_vport_intr_init(struct idpf_vport *vport, struct idpf_q_vec_rsrc *rsrc) { int err; - err = idpf_vport_intr_init_vec_idx(vport); + err = idpf_vport_intr_init_vec_idx(vport, rsrc); if (err) return err; - idpf_vport_intr_map_vector_to_qs(vport); - idpf_vport_intr_napi_add_all(vport); + idpf_vport_intr_map_vector_to_qs(vport, rsrc); + idpf_vport_intr_napi_add_all(vport, rsrc); - err = vport->adapter->dev_ops.reg_ops.intr_reg_init(vport); + err = vport->adapter->dev_ops.reg_ops.intr_reg_init(vport, rsrc); if (err) goto unroll_vectors_alloc; - err = idpf_vport_intr_req_irq(vport); + err = idpf_vport_intr_req_irq(vport, rsrc); if (err) goto unroll_vectors_alloc; return 0; unroll_vectors_alloc: - idpf_vport_intr_napi_del_all(vport); + idpf_vport_intr_napi_del_all(rsrc); return err; } -void idpf_vport_intr_ena(struct idpf_vport *vport) +void idpf_vport_intr_ena(struct idpf_vport *vport, struct idpf_q_vec_rsrc *rsrc) { - idpf_vport_intr_napi_ena_all(vport); - idpf_vport_intr_ena_irq_all(vport); + idpf_vport_intr_napi_ena_all(rsrc); + idpf_vport_intr_ena_irq_all(vport, rsrc); } /** diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h index 36a0f828a6f8..1d67ec1f1b3f 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -1020,12 +1020,16 @@ int idpf_vport_calc_total_qs(struct idpf_adapter *adapter, u16 vport_index, void idpf_vport_calc_num_q_groups(struct idpf_vport *vport); int idpf_vport_queues_alloc(struct idpf_vport *vport); void idpf_vport_queues_rel(struct idpf_vport *vport); -void idpf_vport_intr_rel(struct idpf_vport *vport); -int idpf_vport_intr_alloc(struct idpf_vport *vport); +void idpf_vport_intr_rel(struct idpf_q_vec_rsrc *rsrc); +int idpf_vport_intr_alloc(struct idpf_vport 
*vport, + struct idpf_q_vec_rsrc *rsrc); void idpf_vport_intr_update_itr_ena_irq(struct idpf_q_vector *q_vector); -void idpf_vport_intr_deinit(struct idpf_vport *vport); -int idpf_vport_intr_init(struct idpf_vport *vport); -void idpf_vport_intr_ena(struct idpf_vport *vport); +void idpf_vport_intr_deinit(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc); +int idpf_vport_intr_init(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc); +void idpf_vport_intr_ena(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc); int idpf_config_rss(struct idpf_vport *vport); int idpf_init_rss(struct idpf_vport *vport); void idpf_deinit_rss(struct idpf_vport *vport); diff --git a/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c b/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c index aba828abcb17..61d6f774e2f6 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c +++ b/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c @@ -66,11 +66,13 @@ static void idpf_vf_mb_intr_reg_init(struct idpf_adapter *adapter) /** * idpf_vf_intr_reg_init - Initialize interrupt registers * @vport: virtual port structure + * @rsrc: pointer to queue and vector resources */ -static int idpf_vf_intr_reg_init(struct idpf_vport *vport) +static int idpf_vf_intr_reg_init(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { struct idpf_adapter *adapter = vport->adapter; - int num_vecs = vport->num_q_vectors; + u16 num_vecs = rsrc->num_q_vectors; struct idpf_vec_regs *reg_vals; int num_regs, i, err = 0; u32 rx_itr, tx_itr; @@ -89,8 +91,8 @@ static int idpf_vf_intr_reg_init(struct idpf_vport *vport) } for (i = 0; i < num_vecs; i++) { - struct idpf_q_vector *q_vector = &vport->q_vectors[i]; - u16 vec_id = vport->q_vector_idxs[i] - IDPF_MBX_Q_VEC; + struct idpf_q_vector *q_vector = &rsrc->q_vectors[i]; + u16 vec_id = rsrc->q_vector_idxs[i] - IDPF_MBX_Q_VEC; struct idpf_intr_reg *intr = &q_vector->intr_reg; u32 spacing; diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c index 732d99f66842..8544c2963763 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c @@ -1923,7 +1923,7 @@ int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map) /** * idpf_send_enable_queues_msg - send enable queues virtchnl message - * @vport: Virtual port private data structure + * @vport: virtual port private data structure * @chunks: queue ids received over mailbox * * Will send enable queues virtchnl message. Returns 0 on success, negative on @@ -1937,16 +1937,18 @@ int idpf_send_enable_queues_msg(struct idpf_vport *vport, /** * idpf_send_disable_queues_msg - send disable queues virtchnl message - * @vport: Virtual port private data structure + * @vport: virtual port private data structure + * @rsrc: pointer to queue and vector resources * @chunks: queue ids received over mailbox * * Will send disable queues virtchnl message. Returns 0 on success, negative * on failure. 
*/ int idpf_send_disable_queues_msg(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc, struct idpf_queue_id_reg_info *chunks) { - int err, i; + int err; err = idpf_send_ena_dis_queues_msg(vport, chunks, false); if (err) @@ -1955,13 +1957,13 @@ int idpf_send_disable_queues_msg(struct idpf_vport *vport, /* switch to poll mode as interrupts will be disabled after disable * queues virtchnl message is sent */ - for (i = 0; i < vport->num_txq; i++) + for (u16 i = 0; i < vport->num_txq; i++) idpf_queue_set(POLL_MODE, vport->txqs[i]); /* schedule the napi to receive all the marker packets */ local_bh_disable(); - for (i = 0; i < vport->num_q_vectors; i++) - napi_schedule(&vport->q_vectors[i].napi); + for (u16 i = 0; i < rsrc->num_q_vectors; i++) + napi_schedule(&rsrc->q_vectors[i].napi); local_bh_enable(); return idpf_wait_for_marker_event(vport); @@ -3033,6 +3035,7 @@ void idpf_vc_core_deinit(struct idpf_adapter *adapter) /** * idpf_vport_alloc_vec_indexes - Get relative vector indexes * @vport: virtual port data struct + * @rsrc: pointer to queue and vector resources * * This function requests the vector information required for the vport and * stores the vector indexes received from the 'global vector distribution' @@ -3040,18 +3043,19 @@ void idpf_vc_core_deinit(struct idpf_adapter *adapter) * * Return 0 on success, error on failure */ -int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport) +int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { struct idpf_vector_info vec_info; int num_alloc_vecs; - vec_info.num_curr_vecs = vport->num_q_vectors; + vec_info.num_curr_vecs = rsrc->num_q_vectors; vec_info.num_req_vecs = max(vport->num_txq, vport->num_rxq); vec_info.default_vport = vport->default_vport; vec_info.index = vport->idx; num_alloc_vecs = idpf_req_rel_vector_indexes(vport->adapter, - vport->q_vector_idxs, + rsrc->q_vector_idxs, &vec_info); if (num_alloc_vecs <= 0) { dev_err(&vport->adapter->pdev->dev, "Vector distribution failed: %d\n", @@ -3059,7 +3063,7 @@ int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport) return -EINVAL; } - vport->num_q_vectors = num_alloc_vecs; + rsrc->num_q_vectors = num_alloc_vecs; return 0; } @@ -3075,6 +3079,7 @@ int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport) */ int idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q) { + struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc; struct idpf_adapter *adapter = vport->adapter; struct virtchnl2_create_vport *vport_msg; struct idpf_vport_config *vport_config; @@ -3119,7 +3124,7 @@ int idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q) idpf_vport_init_num_qs(vport, vport_msg); idpf_vport_calc_num_q_desc(vport); idpf_vport_calc_num_q_groups(vport); - idpf_vport_alloc_vec_indexes(vport); + idpf_vport_alloc_vec_indexes(vport, rsrc); vport->crc_enable = adapter->crc_enable; diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h index 2251560426df..048b1653dfcd 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h @@ -137,10 +137,12 @@ int idpf_send_delete_queues_msg(struct idpf_vport *vport, int idpf_send_enable_queues_msg(struct idpf_vport *vport, struct idpf_queue_id_reg_info *chunks); int idpf_send_disable_queues_msg(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc, struct idpf_queue_id_reg_info *chunks); int idpf_send_config_queues_msg(struct idpf_vport *vport); -int 
idpf_vport_alloc_vec_indexes(struct idpf_vport *vport); +int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc); int idpf_get_vec_ids(struct idpf_adapter *adapter, u16 *vecids, int num_vecids, struct virtchnl2_vector_chunks *chunks); From patchwork Thu May 8 21:50:08 2025 X-Patchwork-Submitter: Pavan Kumar Linga X-Patchwork-Id: 2083154 X-Patchwork-Delegate: anthony.l.nguyen@intel.com 
From: Pavan Kumar Linga To: intel-wired-lan@lists.osuosl.org Cc: netdev@vger.kernel.org, milena.olech@intel.com, anton.nadezhdin@intel.com, Pavan Kumar Linga Date: Thu, 8 May 2025 14:50:08 -0700 Message-ID: <20250508215013.32668-5-pavan.kumar.linga@intel.com> In-Reply-To: <20250508215013.32668-1-pavan.kumar.linga@intel.com> References: <20250508215013.32668-1-pavan.kumar.linga@intel.com> Subject: [Intel-wired-lan] [PATCH iwl-next v4 4/9] idpf: move queue resources to idpf_q_vec_rsrc structure

Move both TX and RX queue resources to the newly introduced idpf_q_vec_rsrc structure. While at it, declare the loop iterators inside the loops and use the correct types. 
Reviewed-by: Anton Nadezhdin Signed-off-by: Pavan Kumar Linga Tested-by: Samuel Salin --- drivers/net/ethernet/intel/idpf/idpf.h | 70 +-- .../net/ethernet/intel/idpf/idpf_ethtool.c | 85 ++-- drivers/net/ethernet/intel/idpf/idpf_lib.c | 66 +-- drivers/net/ethernet/intel/idpf/idpf_ptp.c | 17 +- drivers/net/ethernet/intel/idpf/idpf_txrx.c | 410 +++++++++--------- drivers/net/ethernet/intel/idpf/idpf_txrx.h | 16 +- .../net/ethernet/intel/idpf/idpf_virtchnl.c | 234 +++++----- .../net/ethernet/intel/idpf/idpf_virtchnl.h | 12 +- 8 files changed, 486 insertions(+), 424 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h index 1ff0ef78a07f..8e13cf29dec7 100644 --- a/drivers/net/ethernet/intel/idpf/idpf.h +++ b/drivers/net/ethernet/intel/idpf/idpf.h @@ -278,37 +278,57 @@ struct idpf_tx_tstamp_stats { * @q_vectors: array of queue vectors * @q_vector_idxs: starting index of queue vectors * @num_q_vectors: number of IRQ vectors allocated + * @num_txq: number of allocated TX queues + * @num_complq: number of allocated completion queues + * @num_txq_grp: number of TX queue groups + * @txq_grps: array of TX queue groups + * @txq_desc_count: TX queue descriptor count + * @complq_desc_count: completion queue descriptor count + * @txq_model: split queue or single queue queuing model + * @num_rxq: number of allocated RX queues + * @num_rxq_grp: number of RX queue groups + * @rxq_grps: array of RX queue groups. Number of groups * number of RX queues + * per group yields the total number of RX queues. + * @rxq_model: split queue or single queue queuing model + * @rxq_desc_count: RX queue descriptor count. *MUST* have enough descriptors + * to complete all buffer descriptors for all buffer queues in + * the worst case. 
- * @num_bufqs_per_qgrp: Buffer queues per RX queue in a given grouping - * @bufq_desc_count: Buffer queue descriptor count - * @num_rxq_grp: Number of RX queues in a group - * @rxq_grps: Total number of RX groups. Number of groups * number of RX per - * group will yield total number of RX queues. - * @rxq_model: Splitq queue or single queue queuing model * @rx_ptype_lkup: Lookup table for ptypes on RX * @adapter: back pointer to associated adapter * @netdev: Associated net_device. Each vport should have one and only one @@ -318,7 +338,6 @@ struct idpf_q_vec_rsrc { * @vport_id: Device given vport identifier * @idx: Software index in adapter vports struct * @default_vport: Use this vport if one isn't specified - * @base_rxd: True if the driver should use base descriptors instead of flex * @max_mtu: device given max possible MTU * @default_mac_addr: device will give a default MAC to use * @rx_itr_profile: RX profiles for Dynamic Interrupt Moderation @@ -334,24 +353,10 @@ struct idpf_q_vec_rsrc { struct idpf_vport { struct idpf_q_vec_rsrc dflt_qv_rsrc; u16 num_txq; - u16 num_complq; - u32 txq_desc_count; - u32 complq_desc_count; u32 compln_clean_budget; - u16 num_txq_grp; - struct idpf_txq_group *txq_grps; - u32 txq_model; struct idpf_tx_queue **txqs; bool crc_enable; - u16 num_rxq; - u16 num_bufq; - u32 rxq_desc_count; - u8 num_bufqs_per_qgrp; - u32 bufq_desc_count[IDPF_MAX_BUFQS_PER_RXQ_GRP]; - u16 num_rxq_grp; - struct idpf_rxq_group *rxq_grps; - u32 rxq_model; struct libeth_rx_pt *rx_ptype_lkup; struct idpf_adapter *adapter; @@ -361,7 +366,6 @@ struct idpf_vport { u32 vport_id; u16 idx; bool default_vport; - bool base_rxd; u16 max_mtu; u8 default_mac_addr[ETH_ALEN]; diff --git a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c index 8a7145dd912b..607ec4462031 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c +++ b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c @@ -29,7 +29,7 @@ static int idpf_get_rxnfc(struct net_device *netdev, struct ethtool_rxnfc *cmd, switch (cmd->cmd) { case ETHTOOL_GRXRINGS: - cmd->data = vport->num_rxq; + cmd->data = vport->dflt_qv_rsrc.num_rxq; break; case ETHTOOL_GRXCLSRLCNT: cmd->rule_cnt = user_config->num_fsteer_fltrs; @@ -594,8 +594,8 @@ static void idpf_get_ringparam(struct net_device *netdev, ring->rx_max_pending = IDPF_MAX_RXQ_DESC; ring->tx_max_pending = IDPF_MAX_TXQ_DESC; - ring->rx_pending = vport->rxq_desc_count; - ring->tx_pending = vport->txq_desc_count; + ring->rx_pending = vport->dflt_qv_rsrc.rxq_desc_count; + ring->tx_pending = vport->dflt_qv_rsrc.txq_desc_count; kring->tcp_data_split = idpf_vport_get_hsplit(vport); @@ -619,8 +619,9 @@ static int idpf_set_ringparam(struct net_device *netdev, { struct idpf_vport_user_config_data *config_data; u32 new_rx_count, new_tx_count; + struct idpf_q_vec_rsrc *rsrc; struct idpf_vport *vport; - int i, err = 0; + int err = 0; u16 idx; idpf_vport_ctrl_lock(netdev); @@ -654,8 +655,9 @@ static int idpf_set_ringparam(struct net_device *netdev, netdev_info(netdev, "Requested Tx descriptor count rounded up to %u\n", new_tx_count); - if (new_tx_count == vport->txq_desc_count && - new_rx_count == vport->rxq_desc_count && + rsrc = &vport->dflt_qv_rsrc; + if (new_tx_count == rsrc->txq_desc_count && + new_rx_count == rsrc->rxq_desc_count && kring->tcp_data_split == idpf_vport_get_hsplit(vport)) goto unlock_mutex; @@ -674,10 +676,10 @@ static int idpf_set_ringparam(struct net_device *netdev, /* Since we adjusted the RX completion queue count, the RX buffer queue * 
descriptor count needs to be adjusted as well */ - for (i = 0; i < vport->num_bufqs_per_qgrp; i++) - vport->bufq_desc_count[i] = + for (u8 i = 0; i < rsrc->num_bufqs_per_qgrp; i++) + rsrc->bufq_desc_count[i] = IDPF_RX_BUFQ_DESC_COUNT(new_rx_count, - vport->num_bufqs_per_qgrp); + rsrc->num_bufqs_per_qgrp); err = idpf_initiate_soft_reset(vport, IDPF_SR_Q_DESC_CHANGE); @@ -1057,7 +1059,7 @@ static void idpf_add_port_stats(struct idpf_vport *vport, u64 **data) static void idpf_collect_queue_stats(struct idpf_vport *vport) { struct idpf_port_stats *pstats = &vport->port_stats; - int i, j; + struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc; /* zero out port stats since they're actually tracked in per * queue stats; this is only for reporting @@ -1074,22 +1076,22 @@ static void idpf_collect_queue_stats(struct idpf_vport *vport) u64_stats_set(&pstats->tx_hwtstamp_skipped, 0); u64_stats_update_end(&pstats->stats_sync); - for (i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *rxq_grp = &vport->rxq_grps[i]; + for (u16 i = 0; i < rsrc->num_rxq_grp; i++) { + struct idpf_rxq_group *rxq_grp = &rsrc->rxq_grps[i]; u16 num_rxq; - if (idpf_is_queue_model_split(vport->rxq_model)) + if (idpf_is_queue_model_split(rsrc->rxq_model)) num_rxq = rxq_grp->splitq.num_rxq_sets; else num_rxq = rxq_grp->singleq.num_rxq; - for (j = 0; j < num_rxq; j++) { + for (u16 j = 0; j < num_rxq; j++) { u64 hw_csum_err, hsplit, hsplit_hbo, bad_descs; struct idpf_rx_queue_stats *stats; struct idpf_rx_queue *rxq; unsigned int start; - if (idpf_is_queue_model_split(vport->rxq_model)) + if (idpf_is_queue_model_split(rsrc->rxq_model)) rxq = &rxq_grp->splitq.rxq_sets[j]->rxq; else rxq = rxq_grp->singleq.rxqs[j]; @@ -1116,10 +1118,10 @@ static void idpf_collect_queue_stats(struct idpf_vport *vport) } } - for (i = 0; i < vport->num_txq_grp; i++) { - struct idpf_txq_group *txq_grp = &vport->txq_grps[i]; + for (u16 i = 0; i < rsrc->num_txq_grp; i++) { + struct idpf_txq_group *txq_grp = &rsrc->txq_grps[i]; - for (j = 0; j < txq_grp->num_txq; j++) { + for (u16 j = 0; j < txq_grp->num_txq; j++) { u64 linearize, qbusy, skb_drops, dma_map_errs, tstamp; struct idpf_tx_queue *txq = txq_grp->txqs[j]; struct idpf_tx_queue_stats *stats; @@ -1164,9 +1166,9 @@ static void idpf_get_ethtool_stats(struct net_device *netdev, { struct idpf_netdev_priv *np = netdev_priv(netdev); struct idpf_vport_config *vport_config; + struct idpf_q_vec_rsrc *rsrc; struct idpf_vport *vport; unsigned int total = 0; - unsigned int i, j; bool is_splitq; u16 qtype; @@ -1184,12 +1186,13 @@ static void idpf_get_ethtool_stats(struct net_device *netdev, idpf_collect_queue_stats(vport); idpf_add_port_stats(vport, &data); - for (i = 0; i < vport->num_txq_grp; i++) { - struct idpf_txq_group *txq_grp = &vport->txq_grps[i]; + rsrc = &vport->dflt_qv_rsrc; + for (u16 i = 0; i < rsrc->num_txq_grp; i++) { + struct idpf_txq_group *txq_grp = &rsrc->txq_grps[i]; qtype = VIRTCHNL2_QUEUE_TYPE_TX; - for (j = 0; j < txq_grp->num_txq; j++, total++) { + for (u16 j = 0; j < txq_grp->num_txq; j++, total++) { struct idpf_tx_queue *txq = txq_grp->txqs[j]; if (!txq) @@ -1209,10 +1212,10 @@ static void idpf_get_ethtool_stats(struct net_device *netdev, idpf_add_empty_queue_stats(&data, VIRTCHNL2_QUEUE_TYPE_TX); total = 0; - is_splitq = idpf_is_queue_model_split(vport->rxq_model); + is_splitq = idpf_is_queue_model_split(rsrc->rxq_model); - for (i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *rxq_grp = &vport->rxq_grps[i]; + for (u16 i = 0; i < rsrc->num_rxq_grp; i++) { + struct 
idpf_rxq_group *rxq_grp = &rsrc->rxq_grps[i]; u16 num_rxq; qtype = VIRTCHNL2_QUEUE_TYPE_RX; @@ -1222,7 +1225,7 @@ static void idpf_get_ethtool_stats(struct net_device *netdev, else num_rxq = rxq_grp->singleq.num_rxq; - for (j = 0; j < num_rxq; j++, total++) { + for (u16 j = 0; j < num_rxq; j++, total++) { struct idpf_rx_queue *rxq; if (is_splitq) @@ -1254,15 +1257,16 @@ static void idpf_get_ethtool_stats(struct net_device *netdev, static struct idpf_q_vector *idpf_find_rxq_vec(const struct idpf_vport *vport, int q_num) { + const struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc; int q_grp, q_idx; - if (!idpf_is_queue_model_split(vport->rxq_model)) - return vport->rxq_grps->singleq.rxqs[q_num]->q_vector; + if (!idpf_is_queue_model_split(rsrc->rxq_model)) + return rsrc->rxq_grps->singleq.rxqs[q_num]->q_vector; q_grp = q_num / IDPF_DFLT_SPLITQ_RXQ_PER_GROUP; q_idx = q_num % IDPF_DFLT_SPLITQ_RXQ_PER_GROUP; - return vport->rxq_grps[q_grp].splitq.rxq_sets[q_idx]->rxq.q_vector; + return rsrc->rxq_grps[q_grp].splitq.rxq_sets[q_idx]->rxq.q_vector; } /** @@ -1275,14 +1279,15 @@ static struct idpf_q_vector *idpf_find_rxq_vec(const struct idpf_vport *vport, static struct idpf_q_vector *idpf_find_txq_vec(const struct idpf_vport *vport, int q_num) { + const struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc; int q_grp; - if (!idpf_is_queue_model_split(vport->txq_model)) + if (!idpf_is_queue_model_split(rsrc->txq_model)) return vport->txqs[q_num]->q_vector; q_grp = q_num / IDPF_DFLT_SPLITQ_TXQ_PER_GROUP; - return vport->txq_grps[q_grp].complq->q_vector; + return rsrc->txq_grps[q_grp].complq->q_vector; } /** @@ -1319,7 +1324,8 @@ static int idpf_get_q_coalesce(struct net_device *netdev, u32 q_num) { const struct idpf_netdev_priv *np = netdev_priv(netdev); - const struct idpf_vport *vport; + struct idpf_q_vec_rsrc *rsrc; + struct idpf_vport *vport; int err = 0; idpf_vport_ctrl_lock(netdev); @@ -1328,16 +1334,17 @@ static int idpf_get_q_coalesce(struct net_device *netdev, if (np->state != __IDPF_VPORT_UP) goto unlock_mutex; - if (q_num >= vport->num_rxq && q_num >= vport->num_txq) { + rsrc = &vport->dflt_qv_rsrc; + if (q_num >= rsrc->num_rxq && q_num >= rsrc->num_txq) { err = -EINVAL; goto unlock_mutex; } - if (q_num < vport->num_rxq) + if (q_num < rsrc->num_rxq) __idpf_get_q_coalesce(ec, idpf_find_rxq_vec(vport, q_num), VIRTCHNL2_QUEUE_TYPE_RX); - if (q_num < vport->num_txq) + if (q_num < rsrc->num_txq) __idpf_get_q_coalesce(ec, idpf_find_txq_vec(vport, q_num), VIRTCHNL2_QUEUE_TYPE_TX); @@ -1494,8 +1501,9 @@ static int idpf_set_coalesce(struct net_device *netdev, struct netlink_ext_ack *extack) { struct idpf_netdev_priv *np = netdev_priv(netdev); + struct idpf_q_vec_rsrc *rsrc; struct idpf_vport *vport; - int i, err = 0; + int err = 0; idpf_vport_ctrl_lock(netdev); vport = idpf_netdev_to_vport(netdev); @@ -1503,13 +1511,14 @@ static int idpf_set_coalesce(struct net_device *netdev, if (np->state != __IDPF_VPORT_UP) goto unlock_mutex; - for (i = 0; i < vport->num_txq; i++) { + rsrc = &vport->dflt_qv_rsrc; + for (u16 i = 0; i < rsrc->num_txq; i++) { err = idpf_set_q_coalesce(vport, ec, i, false); if (err) goto unlock_mutex; } - for (i = 0; i < vport->num_rxq; i++) { + for (u16 i = 0; i < rsrc->num_rxq; i++) { err = idpf_set_q_coalesce(vport, ec, i, true); if (err) goto unlock_mutex; diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c index c79a5fbe7138..7d97990fd626 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_lib.c +++ 
b/drivers/net/ethernet/intel/idpf/idpf_lib.c @@ -851,7 +851,7 @@ static void idpf_vport_stop(struct idpf_vport *vport) idpf_send_disable_vport_msg(vport); idpf_send_disable_queues_msg(vport, rsrc, chunks); - idpf_send_map_unmap_queue_vector_msg(vport, false); + idpf_send_map_unmap_queue_vector_msg(vport, rsrc, false); /* Normally we ask for queues in create_vport, but if the number of * initially requested queues have changed, for example via ethtool * set channels, we do delete queues and then add the queues back @@ -864,7 +864,7 @@ static void idpf_vport_stop(struct idpf_vport *vport) vport->link_up = false; idpf_vport_intr_deinit(vport, rsrc); - idpf_vport_queues_rel(vport); + idpf_vport_queues_rel(vport, rsrc); idpf_vport_intr_rel(rsrc); np->state = __IDPF_VPORT_DOWN; } @@ -1008,7 +1008,7 @@ static void idpf_vport_dealloc(struct idpf_vport *vport) */ static bool idpf_is_hsplit_supported(const struct idpf_vport *vport) { - return idpf_is_queue_model_split(vport->rxq_model) && + return idpf_is_queue_model_split(vport->dflt_qv_rsrc.rxq_model) && idpf_is_cap_ena_all(vport->adapter, IDPF_HSPLIT_CAPS, IDPF_CAP_HSPLIT); } @@ -1255,11 +1255,13 @@ static int idpf_set_real_num_queues(struct idpf_vport *vport) { int err; - err = netif_set_real_num_rx_queues(vport->netdev, vport->num_rxq); + err = netif_set_real_num_rx_queues(vport->netdev, + vport->dflt_qv_rsrc.num_rxq); if (err) return err; - return netif_set_real_num_tx_queues(vport->netdev, vport->num_txq); + return netif_set_real_num_tx_queues(vport->netdev, + vport->dflt_qv_rsrc.num_txq); } /** @@ -1284,24 +1286,22 @@ static int idpf_up_complete(struct idpf_vport *vport) /** * idpf_rx_init_buf_tail - Write initial buffer ring tail value - * @vport: virtual port struct + * @rsrc: pointer to queue and vector resources */ -static void idpf_rx_init_buf_tail(struct idpf_vport *vport) +static void idpf_rx_init_buf_tail(struct idpf_q_vec_rsrc *rsrc) { - int i, j; + for (u16 i = 0; i < rsrc->num_rxq_grp; i++) { + struct idpf_rxq_group *grp = &rsrc->rxq_grps[i]; - for (i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *grp = &vport->rxq_grps[i]; - - if (idpf_is_queue_model_split(vport->rxq_model)) { - for (j = 0; j < vport->num_bufqs_per_qgrp; j++) { + if (idpf_is_queue_model_split(rsrc->rxq_model)) { + for (u8 j = 0; j < rsrc->num_bufqs_per_qgrp; j++) { const struct idpf_buf_queue *q = &grp->splitq.bufq_sets[j].bufq; writel(q->next_to_alloc, q->tail); } } else { - for (j = 0; j < grp->singleq.num_rxq; j++) { + for (u16 j = 0; j < grp->singleq.num_rxq; j++) { const struct idpf_rx_queue *q = grp->singleq.rxqs[j]; @@ -1337,14 +1337,14 @@ static int idpf_vport_open(struct idpf_vport *vport) return err; } - err = idpf_vport_queues_alloc(vport); + err = idpf_vport_queues_alloc(vport, rsrc); if (err) goto intr_rel; vport_config = adapter->vport_config[vport->idx]; chunks = &vport_config->qid_reg_info; - err = idpf_vport_queue_ids_init(vport, chunks); + err = idpf_vport_queue_ids_init(vport, rsrc, chunks); if (err) { dev_err(&adapter->pdev->dev, "Failed to initialize queue ids for vport %u: %d\n", vport->vport_id, err); @@ -1358,31 +1358,31 @@ static int idpf_vport_open(struct idpf_vport *vport) goto queues_rel; } - err = idpf_rx_bufs_init_all(vport); + err = idpf_rx_bufs_init_all(rsrc); if (err) { dev_err(&adapter->pdev->dev, "Failed to initialize RX buffers for vport %u: %d\n", vport->vport_id, err); goto queues_rel; } - err = idpf_queue_reg_init(vport, chunks); + err = idpf_queue_reg_init(vport, rsrc, chunks); if (err) { 
dev_err(&adapter->pdev->dev, "Failed to initialize queue registers for vport %u: %d\n", vport->vport_id, err); goto queues_rel; } - idpf_rx_init_buf_tail(vport); + idpf_rx_init_buf_tail(rsrc); idpf_vport_intr_ena(vport, rsrc); - err = idpf_send_config_queues_msg(vport); + err = idpf_send_config_queues_msg(vport, rsrc); if (err) { dev_err(&adapter->pdev->dev, "Failed to configure queues for vport %u, %d\n", vport->vport_id, err); goto intr_deinit; } - err = idpf_send_map_unmap_queue_vector_msg(vport, true); + err = idpf_send_map_unmap_queue_vector_msg(vport, rsrc, true); if (err) { dev_err(&adapter->pdev->dev, "Failed to map queue vectors for vport %u: %d\n", vport->vport_id, err); @@ -1432,11 +1432,11 @@ static int idpf_vport_open(struct idpf_vport *vport) disable_queues: idpf_send_disable_queues_msg(vport, rsrc, chunks); unmap_queue_vectors: - idpf_send_map_unmap_queue_vector_msg(vport, false); + idpf_send_map_unmap_queue_vector_msg(vport, rsrc, false); intr_deinit: idpf_vport_intr_deinit(vport, rsrc); queues_rel: - idpf_vport_queues_rel(vport); + idpf_vport_queues_rel(vport, rsrc); intr_rel: idpf_vport_intr_rel(rsrc); @@ -1841,9 +1841,11 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport, enum idpf_vport_reset_cause reset_cause) { struct idpf_netdev_priv *np = netdev_priv(vport->netdev); + struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc; enum idpf_vport_state current_state = np->state; struct idpf_adapter *adapter = vport->adapter; struct idpf_vport_config *vport_config; + struct idpf_q_vec_rsrc *new_rsrc; struct idpf_vport *new_vport; int err; @@ -1870,16 +1872,18 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport, */ memcpy(new_vport, vport, offsetof(struct idpf_vport, link_up)); + new_rsrc = &new_vport->dflt_qv_rsrc; + /* Adjust resource parameters prior to reallocating resources */ switch (reset_cause) { case IDPF_SR_Q_CHANGE: - err = idpf_vport_adjust_qs(new_vport); + err = idpf_vport_adjust_qs(new_vport, new_rsrc); if (err) goto free_vport; break; case IDPF_SR_Q_DESC_CHANGE: /* Update queue parameters before allocating resources */ - idpf_vport_calc_num_q_desc(new_vport); + idpf_vport_calc_num_q_desc(new_vport, new_rsrc); break; case IDPF_SR_MTU_CHANGE: case IDPF_SR_RSC_CHANGE: @@ -1906,10 +1910,10 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport, * to add code to add_queues to change the vport config within * vport itself as it will be wiped with a memcpy later. 
*/ - err = idpf_send_add_queues_msg(vport, new_vport->num_txq, - new_vport->num_complq, - new_vport->num_rxq, - new_vport->num_bufq); + err = idpf_send_add_queues_msg(vport, new_rsrc->num_txq, + new_rsrc->num_complq, + new_rsrc->num_rxq, + new_rsrc->num_bufq); if (err) goto err_reset; @@ -1933,8 +1937,8 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport, return err; err_reset: - idpf_send_add_queues_msg(vport, vport->num_txq, vport->num_complq, - vport->num_rxq, vport->num_bufq); + idpf_send_add_queues_msg(vport, rsrc->num_txq, rsrc->num_complq, + rsrc->num_rxq, rsrc->num_bufq); err_open: if (current_state == __IDPF_VPORT_UP) diff --git a/drivers/net/ethernet/intel/idpf/idpf_ptp.c b/drivers/net/ethernet/intel/idpf/idpf_ptp.c index ba05709cda24..945ef949aad1 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_ptp.c +++ b/drivers/net/ethernet/intel/idpf/idpf_ptp.c @@ -386,15 +386,17 @@ static int idpf_ptp_update_cached_phctime(struct idpf_adapter *adapter) WRITE_ONCE(adapter->ptp->cached_phc_jiffies, jiffies); idpf_for_each_vport(adapter, vport) { + struct idpf_q_vec_rsrc *rsrc; bool split; - if (!vport || !vport->rxq_grps) + if (!vport || !vport->dflt_qv_rsrc.rxq_grps) continue; - split = idpf_is_queue_model_split(vport->rxq_model); + rsrc = &vport->dflt_qv_rsrc; + split = idpf_is_queue_model_split(rsrc->rxq_model); - for (u16 i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *grp = &vport->rxq_grps[i]; + for (u16 i = 0; i < rsrc->num_rxq_grp; i++) { + struct idpf_rxq_group *grp = &rsrc->rxq_grps[i]; idpf_ptp_update_phctime_rxq_grp(grp, split, systime); } @@ -680,9 +682,10 @@ int idpf_ptp_request_ts(struct idpf_tx_queue *tx_q, struct sk_buff *skb, */ static void idpf_ptp_set_rx_tstamp(struct idpf_vport *vport, int rx_filter) { + struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc; bool enable = true, splitq; - splitq = idpf_is_queue_model_split(vport->rxq_model); + splitq = idpf_is_queue_model_split(rsrc->rxq_model); if (rx_filter == HWTSTAMP_FILTER_NONE) { enable = false; @@ -691,8 +694,8 @@ static void idpf_ptp_set_rx_tstamp(struct idpf_vport *vport, int rx_filter) vport->tstamp_config.rx_filter = HWTSTAMP_FILTER_ALL; } - for (u16 i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *grp = &vport->rxq_grps[i]; + for (u16 i = 0; i < rsrc->num_rxq_grp; i++) { + struct idpf_rxq_group *grp = &rsrc->rxq_grps[i]; struct idpf_rx_queue *rx_queue; u16 j, num_rxq; diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c index 7e40dd39f81c..56793be3953f 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -165,24 +165,22 @@ static void idpf_compl_desc_rel(struct idpf_compl_queue *complq) /** * idpf_tx_desc_rel_all - Free Tx Resources for All Queues - * @vport: virtual port structure + * @rsrc: pointer to queue and vector resources * * Free all transmit software resources */ -static void idpf_tx_desc_rel_all(struct idpf_vport *vport) +static void idpf_tx_desc_rel_all(struct idpf_q_vec_rsrc *rsrc) { - int i, j; - - if (!vport->txq_grps) + if (!rsrc->txq_grps) return; - for (i = 0; i < vport->num_txq_grp; i++) { - struct idpf_txq_group *txq_grp = &vport->txq_grps[i]; + for (u16 i = 0; i < rsrc->num_txq_grp; i++) { + struct idpf_txq_group *txq_grp = &rsrc->txq_grps[i]; - for (j = 0; j < txq_grp->num_txq; j++) + for (u16 j = 0; j < txq_grp->num_txq; j++) idpf_tx_desc_rel(txq_grp->txqs[j]); - if (idpf_is_queue_model_split(vport->txq_model)) + if 
(idpf_is_queue_model_split(rsrc->txq_model)) idpf_compl_desc_rel(txq_grp->complq); } } @@ -235,13 +233,11 @@ static int idpf_tx_buf_alloc_all(struct idpf_tx_queue *tx_q) /** * idpf_tx_desc_alloc - Allocate the Tx descriptors - * @vport: vport to allocate resources for * @tx_q: the tx ring to set up * * Returns 0 on success, negative on failure */ -static int idpf_tx_desc_alloc(const struct idpf_vport *vport, - struct idpf_tx_queue *tx_q) +static int idpf_tx_desc_alloc(struct idpf_tx_queue *tx_q) { struct device *dev = tx_q->dev; int err; @@ -277,13 +273,11 @@ static int idpf_tx_desc_alloc(const struct idpf_vport *vport, /** * idpf_compl_desc_alloc - allocate completion descriptors - * @vport: vport to allocate resources for * @complq: completion queue to set up * * Return: 0 on success, -errno on failure. */ -static int idpf_compl_desc_alloc(const struct idpf_vport *vport, - struct idpf_compl_queue *complq) +static int idpf_compl_desc_alloc(struct idpf_compl_queue *complq) { complq->size = array_size(complq->desc_count, sizeof(*complq->comp)); @@ -303,24 +297,25 @@ static int idpf_compl_desc_alloc(const struct idpf_vport *vport, /** * idpf_tx_desc_alloc_all - allocate all queues Tx resources * @vport: virtual port private structure + * @rsrc: pointer to queue and vector resources * * Returns 0 on success, negative on failure */ -static int idpf_tx_desc_alloc_all(struct idpf_vport *vport) +static int idpf_tx_desc_alloc_all(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { int err = 0; - int i, j; /* Setup buffer queues. In single queue model buffer queues and * completion queues will be same */ - for (i = 0; i < vport->num_txq_grp; i++) { - for (j = 0; j < vport->txq_grps[i].num_txq; j++) { - struct idpf_tx_queue *txq = vport->txq_grps[i].txqs[j]; + for (u16 i = 0; i < rsrc->num_txq_grp; i++) { + for (u16 j = 0; j < rsrc->txq_grps[i].num_txq; j++) { + struct idpf_tx_queue *txq = rsrc->txq_grps[i].txqs[j]; u8 gen_bits = 0; u16 bufidx_mask; - err = idpf_tx_desc_alloc(vport, txq); + err = idpf_tx_desc_alloc(txq); if (err) { pci_err(vport->adapter->pdev, "Allocation for Tx Queue %u failed\n", @@ -328,7 +323,7 @@ static int idpf_tx_desc_alloc_all(struct idpf_vport *vport) goto err_out; } - if (!idpf_is_queue_model_split(vport->txq_model)) + if (!idpf_is_queue_model_split(rsrc->txq_model)) continue; txq->compl_tag_cur_gen = 0; @@ -357,11 +352,11 @@ static int idpf_tx_desc_alloc_all(struct idpf_vport *vport) GETMAXVAL(txq->compl_tag_gen_s); } - if (!idpf_is_queue_model_split(vport->txq_model)) + if (!idpf_is_queue_model_split(rsrc->txq_model)) continue; /* Setup completion queues */ - err = idpf_compl_desc_alloc(vport, vport->txq_grps[i].complq); + err = idpf_compl_desc_alloc(rsrc->txq_grps[i].complq); if (err) { pci_err(vport->adapter->pdev, "Allocation for Tx Completion Queue %u failed\n", @@ -372,7 +367,7 @@ static int idpf_tx_desc_alloc_all(struct idpf_vport *vport) err_out: if (err) - idpf_tx_desc_rel_all(vport); + idpf_tx_desc_rel_all(rsrc); return err; } @@ -519,38 +514,39 @@ static void idpf_rx_desc_rel_bufq(struct idpf_buf_queue *bufq, /** * idpf_rx_desc_rel_all - Free Rx Resources for All Queues * @vport: virtual port structure + * @rsrc: pointer to queue and vector resources * * Free all rx queues resources */ -static void idpf_rx_desc_rel_all(struct idpf_vport *vport) +static void idpf_rx_desc_rel_all(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { struct device *dev = &vport->adapter->pdev->dev; struct idpf_rxq_group *rx_qgrp; u16 num_rxq; - int i, j; - if 
(!vport->rxq_grps) + if (!rsrc->rxq_grps) return; - for (i = 0; i < vport->num_rxq_grp; i++) { - rx_qgrp = &vport->rxq_grps[i]; + for (u16 i = 0; i < rsrc->num_rxq_grp; i++) { + rx_qgrp = &rsrc->rxq_grps[i]; - if (!idpf_is_queue_model_split(vport->rxq_model)) { - for (j = 0; j < rx_qgrp->singleq.num_rxq; j++) + if (!idpf_is_queue_model_split(rsrc->rxq_model)) { + for (u16 j = 0; j < rx_qgrp->singleq.num_rxq; j++) idpf_rx_desc_rel(rx_qgrp->singleq.rxqs[j], dev, VIRTCHNL2_QUEUE_MODEL_SINGLE); continue; } num_rxq = rx_qgrp->splitq.num_rxq_sets; - for (j = 0; j < num_rxq; j++) + for (u16 j = 0; j < num_rxq; j++) idpf_rx_desc_rel(&rx_qgrp->splitq.rxq_sets[j]->rxq, dev, VIRTCHNL2_QUEUE_MODEL_SPLIT); if (!rx_qgrp->splitq.bufq_sets) continue; - for (j = 0; j < vport->num_bufqs_per_qgrp; j++) { + for (u8 j = 0; j < rsrc->num_bufqs_per_qgrp; j++) { struct idpf_bufq_set *bufq_set = &rx_qgrp->splitq.bufq_sets[j]; @@ -803,24 +799,24 @@ static int idpf_rx_bufs_init(struct idpf_buf_queue *bufq, /** * idpf_rx_bufs_init_all - Initialize all RX bufs - * @vport: virtual port struct + * @rsrc: pointer to queue and vector resources * * Returns 0 on success, negative on failure */ -int idpf_rx_bufs_init_all(struct idpf_vport *vport) +int idpf_rx_bufs_init_all(struct idpf_q_vec_rsrc *rsrc) { - bool split = idpf_is_queue_model_split(vport->rxq_model); - int i, j, err; + bool split = idpf_is_queue_model_split(rsrc->rxq_model); + int err; - for (i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + for (u16 i = 0; i < rsrc->num_rxq_grp; i++) { + struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i]; u32 truesize = 0; /* Allocate bufs for the rxq itself in singleq */ if (!split) { int num_rxq = rx_qgrp->singleq.num_rxq; - for (j = 0; j < num_rxq; j++) { + for (u16 j = 0; j < num_rxq; j++) { struct idpf_rx_queue *q; q = rx_qgrp->singleq.rxqs[j]; @@ -833,7 +829,7 @@ int idpf_rx_bufs_init_all(struct idpf_vport *vport) } /* Otherwise, allocate bufs for the buffer queues */ - for (j = 0; j < vport->num_bufqs_per_qgrp; j++) { + for (u8 j = 0; j < rsrc->num_bufqs_per_qgrp; j++) { enum libeth_fqe_type type; struct idpf_buf_queue *q; @@ -916,26 +912,28 @@ static int idpf_bufq_desc_alloc(const struct idpf_vport *vport, /** * idpf_rx_desc_alloc_all - allocate all RX queues resources * @vport: virtual port structure + * @rsrc: pointer to queue and vector resources * * Returns 0 on success, negative on failure */ -static int idpf_rx_desc_alloc_all(struct idpf_vport *vport) +static int idpf_rx_desc_alloc_all(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { struct idpf_rxq_group *rx_qgrp; - int i, j, err; u16 num_rxq; + int err; - for (i = 0; i < vport->num_rxq_grp; i++) { - rx_qgrp = &vport->rxq_grps[i]; - if (idpf_is_queue_model_split(vport->rxq_model)) + for (u16 i = 0; i < rsrc->num_rxq_grp; i++) { + rx_qgrp = &rsrc->rxq_grps[i]; + if (idpf_is_queue_model_split(rsrc->rxq_model)) num_rxq = rx_qgrp->splitq.num_rxq_sets; else num_rxq = rx_qgrp->singleq.num_rxq; - for (j = 0; j < num_rxq; j++) { + for (u16 j = 0; j < num_rxq; j++) { struct idpf_rx_queue *q; - if (idpf_is_queue_model_split(vport->rxq_model)) + if (idpf_is_queue_model_split(rsrc->rxq_model)) q = &rx_qgrp->splitq.rxq_sets[j]->rxq; else q = rx_qgrp->singleq.rxqs[j]; @@ -949,10 +947,10 @@ static int idpf_rx_desc_alloc_all(struct idpf_vport *vport) } } - if (!idpf_is_queue_model_split(vport->rxq_model)) + if (!idpf_is_queue_model_split(rsrc->rxq_model)) continue; - for (j = 0; j < vport->num_bufqs_per_qgrp; j++) { + for 
(u8 j = 0; j < rsrc->num_bufqs_per_qgrp; j++) { struct idpf_buf_queue *q; q = &rx_qgrp->splitq.bufq_sets[j].bufq; @@ -970,7 +968,7 @@ static int idpf_rx_desc_alloc_all(struct idpf_vport *vport) return 0; err_out: - idpf_rx_desc_rel_all(vport); + idpf_rx_desc_rel_all(vport, rsrc); return err; } @@ -978,23 +976,24 @@ static int idpf_rx_desc_alloc_all(struct idpf_vport *vport) /** * idpf_txq_group_rel - Release all resources for txq groups * @vport: vport to release txq groups on + * @rsrc: pointer to queue and vector resources */ -static void idpf_txq_group_rel(struct idpf_vport *vport) +static void idpf_txq_group_rel(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { bool split, flow_sch_en; - int i, j; - if (!vport->txq_grps) + if (!rsrc->txq_grps) return; - split = idpf_is_queue_model_split(vport->txq_model); + split = idpf_is_queue_model_split(rsrc->txq_model); flow_sch_en = !idpf_is_cap_ena(vport->adapter, IDPF_OTHER_CAPS, VIRTCHNL2_CAP_SPLITQ_QSCHED); - for (i = 0; i < vport->num_txq_grp; i++) { - struct idpf_txq_group *txq_grp = &vport->txq_grps[i]; + for (u16 i = 0; i < rsrc->num_txq_grp; i++) { + struct idpf_txq_group *txq_grp = &rsrc->txq_grps[i]; - for (j = 0; j < txq_grp->num_txq; j++) { + for (u16 j = 0; j < txq_grp->num_txq; j++) { kfree(txq_grp->txqs[j]); txq_grp->txqs[j] = NULL; } @@ -1008,8 +1007,8 @@ static void idpf_txq_group_rel(struct idpf_vport *vport) if (flow_sch_en) kfree(txq_grp->stashes); } - kfree(vport->txq_grps); - vport->txq_grps = NULL; + kfree(rsrc->txq_grps); + rsrc->txq_grps = NULL; } /** @@ -1018,12 +1017,10 @@ static void idpf_txq_group_rel(struct idpf_vport *vport) */ static void idpf_rxq_sw_queue_rel(struct idpf_rxq_group *rx_qgrp) { - int i, j; - - for (i = 0; i < rx_qgrp->vport->num_bufqs_per_qgrp; i++) { + for (u8 i = 0; i < rx_qgrp->vport->dflt_qv_rsrc.num_bufqs_per_qgrp; i++) { struct idpf_bufq_set *bufq_set = &rx_qgrp->splitq.bufq_sets[i]; - for (j = 0; j < bufq_set->num_refillqs; j++) { + for (u16 j = 0; j < bufq_set->num_refillqs; j++) { kfree(bufq_set->refillqs[j].ring); bufq_set->refillqs[j].ring = NULL; } @@ -1034,23 +1031,20 @@ static void idpf_rxq_sw_queue_rel(struct idpf_rxq_group *rx_qgrp) /** * idpf_rxq_group_rel - Release all resources for rxq groups - * @vport: vport to release rxq groups on + * @rsrc: pointer to queue and vector resources */ -static void idpf_rxq_group_rel(struct idpf_vport *vport) +static void idpf_rxq_group_rel(struct idpf_q_vec_rsrc *rsrc) { - int i; - - if (!vport->rxq_grps) + if (!rsrc->rxq_grps) return; - for (i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + for (u16 i = 0; i < rsrc->num_rxq_grp; i++) { + struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i]; u16 num_rxq; - int j; - if (idpf_is_queue_model_split(vport->rxq_model)) { + if (idpf_is_queue_model_split(rsrc->rxq_model)) { num_rxq = rx_qgrp->splitq.num_rxq_sets; - for (j = 0; j < num_rxq; j++) { + for (u16 j = 0; j < num_rxq; j++) { kfree(rx_qgrp->splitq.rxq_sets[j]); rx_qgrp->splitq.rxq_sets[j] = NULL; } @@ -1060,37 +1054,41 @@ static void idpf_rxq_group_rel(struct idpf_vport *vport) rx_qgrp->splitq.bufq_sets = NULL; } else { num_rxq = rx_qgrp->singleq.num_rxq; - for (j = 0; j < num_rxq; j++) { + for (u16 j = 0; j < num_rxq; j++) { kfree(rx_qgrp->singleq.rxqs[j]); rx_qgrp->singleq.rxqs[j] = NULL; } } } - kfree(vport->rxq_grps); - vport->rxq_grps = NULL; + kfree(rsrc->rxq_grps); + rsrc->rxq_grps = NULL; } /** * idpf_vport_queue_grp_rel_all - Release all queue groups * @vport: vport to release queue 
groups for + * @rsrc: pointer to queue and vector resources */ -static void idpf_vport_queue_grp_rel_all(struct idpf_vport *vport) +static void idpf_vport_queue_grp_rel_all(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { - idpf_txq_group_rel(vport); - idpf_rxq_group_rel(vport); + idpf_txq_group_rel(vport, rsrc); + idpf_rxq_group_rel(rsrc); } /** * idpf_vport_queues_rel - Free memory for all queues * @vport: virtual port + * @rsrc: pointer to queue and vector resources * * Free the memory allocated for queues associated to a vport */ -void idpf_vport_queues_rel(struct idpf_vport *vport) +void idpf_vport_queues_rel(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { - idpf_tx_desc_rel_all(vport); - idpf_rx_desc_rel_all(vport); - idpf_vport_queue_grp_rel_all(vport); + idpf_tx_desc_rel_all(rsrc); + idpf_rx_desc_rel_all(vport, rsrc); + idpf_vport_queue_grp_rel_all(vport, rsrc); kfree(vport->txqs); vport->txqs = NULL; @@ -1099,6 +1097,7 @@ void idpf_vport_queues_rel(struct idpf_vport *vport) /** * idpf_vport_init_fast_path_txqs - Initialize fast path txq array * @vport: vport to init txqs on + * @rsrc: pointer to queue and vector resources * * We get a queue index from skb->queue_mapping and we need a fast way to * dereference the queue from queue groups. This allows us to quickly pull a @@ -1106,22 +1105,23 @@ void idpf_vport_queues_rel(struct idpf_vport *vport) * * Returns 0 on success, negative on failure */ -static int idpf_vport_init_fast_path_txqs(struct idpf_vport *vport) +static int idpf_vport_init_fast_path_txqs(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { struct idpf_ptp_vport_tx_tstamp_caps *caps = vport->tx_tstamp_caps; struct work_struct *tstamp_task = &vport->tstamp_task; - int i, j, k = 0; + int k = 0; - vport->txqs = kcalloc(vport->num_txq, sizeof(*vport->txqs), + vport->txqs = kcalloc(rsrc->num_txq, sizeof(*vport->txqs), GFP_KERNEL); - if (!vport->txqs) return -ENOMEM; - for (i = 0; i < vport->num_txq_grp; i++) { - struct idpf_txq_group *tx_grp = &vport->txq_grps[i]; + vport->num_txq = rsrc->num_txq; + for (u16 i = 0; i < rsrc->num_txq_grp; i++) { + struct idpf_txq_group *tx_grp = &rsrc->txq_grps[i]; - for (j = 0; j < tx_grp->num_txq; j++, k++) { + for (u16 j = 0; j < tx_grp->num_txq; j++, k++) { vport->txqs[k] = tx_grp->txqs[j]; vport->txqs[k]->idx = k; @@ -1140,16 +1140,18 @@ static int idpf_vport_init_fast_path_txqs(struct idpf_vport *vport) * idpf_vport_init_num_qs - Initialize number of queues * @vport: vport to initialize queues * @vport_msg: data to be filled into vport + * @rsrc: pointer to queue and vector resources */ void idpf_vport_init_num_qs(struct idpf_vport *vport, - struct virtchnl2_create_vport *vport_msg) + struct virtchnl2_create_vport *vport_msg, + struct idpf_q_vec_rsrc *rsrc) { struct idpf_vport_user_config_data *config_data; u16 idx = vport->idx; config_data = &vport->adapter->vport_config[idx]->user_config; - vport->num_txq = le16_to_cpu(vport_msg->num_tx_q); - vport->num_rxq = le16_to_cpu(vport_msg->num_rx_q); + rsrc->num_txq = le16_to_cpu(vport_msg->num_tx_q); + rsrc->num_rxq = le16_to_cpu(vport_msg->num_rx_q); /* number of txqs and rxqs in config data will be zeros only in the * driver load path and we dont update them there after */ @@ -1158,62 +1160,63 @@ void idpf_vport_init_num_qs(struct idpf_vport *vport, config_data->num_req_rx_qs = le16_to_cpu(vport_msg->num_rx_q); } - if (idpf_is_queue_model_split(vport->txq_model)) - vport->num_complq = le16_to_cpu(vport_msg->num_tx_complq); - if 
(idpf_is_queue_model_split(vport->rxq_model)) - vport->num_bufq = le16_to_cpu(vport_msg->num_rx_bufq); + if (idpf_is_queue_model_split(rsrc->txq_model)) + rsrc->num_complq = le16_to_cpu(vport_msg->num_tx_complq); + if (idpf_is_queue_model_split(rsrc->rxq_model)) + rsrc->num_bufq = le16_to_cpu(vport_msg->num_rx_bufq); /* Adjust number of buffer queues per Rx queue group. */ - if (!idpf_is_queue_model_split(vport->rxq_model)) { - vport->num_bufqs_per_qgrp = 0; + if (!idpf_is_queue_model_split(rsrc->rxq_model)) { + rsrc->num_bufqs_per_qgrp = 0; return; } - vport->num_bufqs_per_qgrp = IDPF_MAX_BUFQS_PER_RXQ_GRP; + rsrc->num_bufqs_per_qgrp = IDPF_MAX_BUFQS_PER_RXQ_GRP; } /** * idpf_vport_calc_num_q_desc - Calculate number of queue groups * @vport: vport to calculate q groups for + * @rsrc: pointer to queue and vector resources */ -void idpf_vport_calc_num_q_desc(struct idpf_vport *vport) +void idpf_vport_calc_num_q_desc(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { struct idpf_vport_user_config_data *config_data; - int num_bufqs = vport->num_bufqs_per_qgrp; + u8 num_bufqs = rsrc->num_bufqs_per_qgrp; u32 num_req_txq_desc, num_req_rxq_desc; u16 idx = vport->idx; - int i; config_data = &vport->adapter->vport_config[idx]->user_config; num_req_txq_desc = config_data->num_req_txq_desc; num_req_rxq_desc = config_data->num_req_rxq_desc; - vport->complq_desc_count = 0; + rsrc->complq_desc_count = 0; if (num_req_txq_desc) { - vport->txq_desc_count = num_req_txq_desc; - if (idpf_is_queue_model_split(vport->txq_model)) { - vport->complq_desc_count = num_req_txq_desc; - if (vport->complq_desc_count < IDPF_MIN_TXQ_COMPLQ_DESC) - vport->complq_desc_count = + rsrc->txq_desc_count = num_req_txq_desc; + if (idpf_is_queue_model_split(rsrc->txq_model)) { + rsrc->complq_desc_count = num_req_txq_desc; + if (rsrc->complq_desc_count < IDPF_MIN_TXQ_COMPLQ_DESC) + rsrc->complq_desc_count = IDPF_MIN_TXQ_COMPLQ_DESC; } } else { - vport->txq_desc_count = IDPF_DFLT_TX_Q_DESC_COUNT; - if (idpf_is_queue_model_split(vport->txq_model)) - vport->complq_desc_count = + rsrc->txq_desc_count = IDPF_DFLT_TX_Q_DESC_COUNT; + if (idpf_is_queue_model_split(rsrc->txq_model)) + rsrc->complq_desc_count = IDPF_DFLT_TX_COMPLQ_DESC_COUNT; } if (num_req_rxq_desc) - vport->rxq_desc_count = num_req_rxq_desc; + rsrc->rxq_desc_count = num_req_rxq_desc; else - vport->rxq_desc_count = IDPF_DFLT_RX_Q_DESC_COUNT; + rsrc->rxq_desc_count = IDPF_DFLT_RX_Q_DESC_COUNT; - for (i = 0; i < num_bufqs; i++) { - if (!vport->bufq_desc_count[i]) - vport->bufq_desc_count[i] = - IDPF_RX_BUFQ_DESC_COUNT(vport->rxq_desc_count, + for (u8 i = 0; i < num_bufqs; i++) { + if (!rsrc->bufq_desc_count[i]) + rsrc->bufq_desc_count[i] = + IDPF_RX_BUFQ_DESC_COUNT(rsrc->rxq_desc_count, num_bufqs); } } @@ -1289,54 +1292,54 @@ int idpf_vport_calc_total_qs(struct idpf_adapter *adapter, u16 vport_idx, /** * idpf_vport_calc_num_q_groups - Calculate number of queue groups - * @vport: vport to calculate q groups for + * @rsrc: pointer to queue and vector resources */ -void idpf_vport_calc_num_q_groups(struct idpf_vport *vport) +void idpf_vport_calc_num_q_groups(struct idpf_q_vec_rsrc *rsrc) { - if (idpf_is_queue_model_split(vport->txq_model)) - vport->num_txq_grp = vport->num_txq; + if (idpf_is_queue_model_split(rsrc->txq_model)) + rsrc->num_txq_grp = rsrc->num_txq; else - vport->num_txq_grp = IDPF_DFLT_SINGLEQ_TX_Q_GROUPS; + rsrc->num_txq_grp = IDPF_DFLT_SINGLEQ_TX_Q_GROUPS; - if (idpf_is_queue_model_split(vport->rxq_model)) - vport->num_rxq_grp = vport->num_rxq; + if 
(idpf_is_queue_model_split(rsrc->rxq_model)) + rsrc->num_rxq_grp = rsrc->num_rxq; else - vport->num_rxq_grp = IDPF_DFLT_SINGLEQ_RX_Q_GROUPS; + rsrc->num_rxq_grp = IDPF_DFLT_SINGLEQ_RX_Q_GROUPS; } /** * idpf_vport_calc_numq_per_grp - Calculate number of queues per group - * @vport: vport to calculate queues for + * @rsrc: pointer to queue and vector resources * @num_txq: return parameter for number of TX queues * @num_rxq: return parameter for number of RX queues */ -static void idpf_vport_calc_numq_per_grp(struct idpf_vport *vport, +static void idpf_vport_calc_numq_per_grp(struct idpf_q_vec_rsrc *rsrc, u16 *num_txq, u16 *num_rxq) { - if (idpf_is_queue_model_split(vport->txq_model)) + if (idpf_is_queue_model_split(rsrc->txq_model)) *num_txq = IDPF_DFLT_SPLITQ_TXQ_PER_GROUP; else - *num_txq = vport->num_txq; + *num_txq = rsrc->num_txq; - if (idpf_is_queue_model_split(vport->rxq_model)) + if (idpf_is_queue_model_split(rsrc->rxq_model)) *num_rxq = IDPF_DFLT_SPLITQ_RXQ_PER_GROUP; else - *num_rxq = vport->num_rxq; + *num_rxq = rsrc->num_rxq; } /** * idpf_rxq_set_descids - set the descids supported by this queue - * @vport: virtual port data structure + * @rsrc: pointer to queue and vector resources * @q: rx queue for which descids are set * */ -static void idpf_rxq_set_descids(const struct idpf_vport *vport, +static void idpf_rxq_set_descids(struct idpf_q_vec_rsrc *rsrc, struct idpf_rx_queue *q) { - if (idpf_is_queue_model_split(vport->rxq_model)) { + if (idpf_is_queue_model_split(rsrc->rxq_model)) { q->rxdids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M; } else { - if (vport->base_rxd) + if (rsrc->base_rxd) q->rxdids = VIRTCHNL2_RXDID_1_32B_BASE_M; else q->rxdids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M; @@ -1346,34 +1349,35 @@ static void idpf_rxq_set_descids(const struct idpf_vport *vport, /** * idpf_txq_group_alloc - Allocate all txq group resources * @vport: vport to allocate txq groups for + * @rsrc: pointer to queue and vector resources * @num_txq: number of txqs to allocate for each group * * Returns 0 on success, negative on failure */ -static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq) +static int idpf_txq_group_alloc(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc, + u16 num_txq) { bool split, flow_sch_en; - int i; - vport->txq_grps = kcalloc(vport->num_txq_grp, - sizeof(*vport->txq_grps), GFP_KERNEL); - if (!vport->txq_grps) + rsrc->txq_grps = kcalloc(rsrc->num_txq_grp, + sizeof(*rsrc->txq_grps), GFP_KERNEL); + if (!rsrc->txq_grps) return -ENOMEM; - split = idpf_is_queue_model_split(vport->txq_model); + split = idpf_is_queue_model_split(rsrc->txq_model); flow_sch_en = !idpf_is_cap_ena(vport->adapter, IDPF_OTHER_CAPS, VIRTCHNL2_CAP_SPLITQ_QSCHED); - for (i = 0; i < vport->num_txq_grp; i++) { - struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i]; + for (u16 i = 0; i < rsrc->num_txq_grp; i++) { + struct idpf_txq_group *tx_qgrp = &rsrc->txq_grps[i]; struct idpf_adapter *adapter = vport->adapter; struct idpf_txq_stash *stashes; - int j; tx_qgrp->vport = vport; tx_qgrp->num_txq = num_txq; - for (j = 0; j < tx_qgrp->num_txq; j++) { + for (u16 j = 0; j < tx_qgrp->num_txq; j++) { tx_qgrp->txqs[j] = kzalloc(sizeof(*tx_qgrp->txqs[j]), GFP_KERNEL); if (!tx_qgrp->txqs[j]) @@ -1389,11 +1393,11 @@ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq) tx_qgrp->stashes = stashes; } - for (j = 0; j < tx_qgrp->num_txq; j++) { + for (u16 j = 0; j < tx_qgrp->num_txq; j++) { struct idpf_tx_queue *q = tx_qgrp->txqs[j]; q->dev = &adapter->pdev->dev; - q->desc_count = 
vport->txq_desc_count; + q->desc_count = rsrc->txq_desc_count; q->tx_max_bufs = idpf_get_max_tx_bufs(adapter); q->tx_min_pkt_len = idpf_get_min_tx_pkt_len(adapter); q->netdev = vport->netdev; @@ -1425,7 +1429,7 @@ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq) if (!tx_qgrp->complq) goto err_alloc; - tx_qgrp->complq->desc_count = vport->complq_desc_count; + tx_qgrp->complq->desc_count = rsrc->complq_desc_count; tx_qgrp->complq->txq_grp = tx_qgrp; tx_qgrp->complq->netdev = vport->netdev; tx_qgrp->complq->clean_budget = vport->compln_clean_budget; @@ -1437,7 +1441,7 @@ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq) return 0; err_alloc: - idpf_txq_group_rel(vport); + idpf_txq_group_rel(vport, rsrc); return -ENOMEM; } @@ -1445,30 +1449,32 @@ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq) /** * idpf_rxq_group_alloc - Allocate all rxq group resources * @vport: vport to allocate rxq groups for + * @rsrc: pointer to queue and vector resources * @num_rxq: number of rxqs to allocate for each group * * Returns 0 on success, negative on failure */ -static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq) +static int idpf_rxq_group_alloc(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc, + u16 num_rxq) { - int i, k, err = 0; + int k, err = 0; bool hs; - vport->rxq_grps = kcalloc(vport->num_rxq_grp, - sizeof(struct idpf_rxq_group), GFP_KERNEL); - if (!vport->rxq_grps) + rsrc->rxq_grps = kcalloc(rsrc->num_rxq_grp, + sizeof(struct idpf_rxq_group), GFP_KERNEL); + if (!rsrc->rxq_grps) return -ENOMEM; hs = idpf_vport_get_hsplit(vport) == ETHTOOL_TCP_DATA_SPLIT_ENABLED; - for (i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i]; - int j; + for (u16 i = 0; i < rsrc->num_rxq_grp; i++) { + struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i]; rx_qgrp->vport = vport; - if (!idpf_is_queue_model_split(vport->rxq_model)) { + if (!idpf_is_queue_model_split(rsrc->rxq_model)) { rx_qgrp->singleq.num_rxq = num_rxq; - for (j = 0; j < num_rxq; j++) { + for (u16 j = 0; j < num_rxq; j++) { rx_qgrp->singleq.rxqs[j] = kzalloc(sizeof(*rx_qgrp->singleq.rxqs[j]), GFP_KERNEL); @@ -1481,7 +1487,7 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq) } rx_qgrp->splitq.num_rxq_sets = num_rxq; - for (j = 0; j < num_rxq; j++) { + for (u16 j = 0; j < num_rxq; j++) { rx_qgrp->splitq.rxq_sets[j] = kzalloc(sizeof(struct idpf_rxq_set), GFP_KERNEL); @@ -1491,7 +1497,7 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq) } } - rx_qgrp->splitq.bufq_sets = kcalloc(vport->num_bufqs_per_qgrp, + rx_qgrp->splitq.bufq_sets = kcalloc(rsrc->num_bufqs_per_qgrp, sizeof(struct idpf_bufq_set), GFP_KERNEL); if (!rx_qgrp->splitq.bufq_sets) { @@ -1499,14 +1505,14 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq) goto err_alloc; } - for (j = 0; j < vport->num_bufqs_per_qgrp; j++) { + for (u8 j = 0; j < rsrc->num_bufqs_per_qgrp; j++) { struct idpf_bufq_set *bufq_set = &rx_qgrp->splitq.bufq_sets[j]; int swq_size = sizeof(struct idpf_sw_queue); struct idpf_buf_queue *q; q = &rx_qgrp->splitq.bufq_sets[j].bufq; - q->desc_count = vport->bufq_desc_count[j]; + q->desc_count = rsrc->bufq_desc_count[j]; q->rx_buffer_low_watermark = IDPF_LOW_WATERMARK; idpf_queue_assign(HSPLIT_EN, q, hs); @@ -1523,7 +1529,7 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq) &bufq_set->refillqs[k]; refillq->desc_count = - vport->bufq_desc_count[j]; + 
rsrc->bufq_desc_count[j]; idpf_queue_set(GEN_CHK, refillq); idpf_queue_set(RFL_GEN_CHK, refillq); refillq->ring = kcalloc(refillq->desc_count, @@ -1537,24 +1543,24 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq) } skip_splitq_rx_init: - for (j = 0; j < num_rxq; j++) { + for (u16 j = 0; j < num_rxq; j++) { struct idpf_rx_queue *q; - if (!idpf_is_queue_model_split(vport->rxq_model)) { + if (!idpf_is_queue_model_split(rsrc->rxq_model)) { q = rx_qgrp->singleq.rxqs[j]; goto setup_rxq; } q = &rx_qgrp->splitq.rxq_sets[j]->rxq; rx_qgrp->splitq.rxq_sets[j]->refillq[0] = &rx_qgrp->splitq.bufq_sets[0].refillqs[j]; - if (vport->num_bufqs_per_qgrp > IDPF_SINGLE_BUFQ_PER_RXQ_GRP) + if (rsrc->num_bufqs_per_qgrp > IDPF_SINGLE_BUFQ_PER_RXQ_GRP) rx_qgrp->splitq.rxq_sets[j]->refillq[1] = &rx_qgrp->splitq.bufq_sets[1].refillqs[j]; idpf_queue_assign(HSPLIT_EN, q, hs); setup_rxq: - q->desc_count = vport->rxq_desc_count; + q->desc_count = rsrc->rxq_desc_count; q->rx_ptype_lkup = vport->rx_ptype_lkup; q->netdev = vport->netdev; q->bufq_sets = rx_qgrp->splitq.bufq_sets; @@ -1562,13 +1568,13 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq) q->rx_buffer_low_watermark = IDPF_LOW_WATERMARK; q->rx_max_pkt_size = vport->netdev->mtu + LIBETH_RX_LL_LEN; - idpf_rxq_set_descids(vport, q); + idpf_rxq_set_descids(rsrc, q); } } err_alloc: if (err) - idpf_rxq_group_rel(vport); + idpf_rxq_group_rel(rsrc); return err; } @@ -1576,28 +1582,30 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq) /** * idpf_vport_queue_grp_alloc_all - Allocate all queue groups/resources * @vport: vport with qgrps to allocate + * @rsrc: pointer to queue and vector resources * * Returns 0 on success, negative on failure */ -static int idpf_vport_queue_grp_alloc_all(struct idpf_vport *vport) +static int idpf_vport_queue_grp_alloc_all(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { u16 num_txq, num_rxq; int err; - idpf_vport_calc_numq_per_grp(vport, &num_txq, &num_rxq); + idpf_vport_calc_numq_per_grp(rsrc, &num_txq, &num_rxq); - err = idpf_txq_group_alloc(vport, num_txq); + err = idpf_txq_group_alloc(vport, rsrc, num_txq); if (err) goto err_out; - err = idpf_rxq_group_alloc(vport, num_rxq); + err = idpf_rxq_group_alloc(vport, rsrc, num_rxq); if (err) goto err_out; return 0; err_out: - idpf_vport_queue_grp_rel_all(vport); + idpf_vport_queue_grp_rel_all(vport, rsrc); return err; } @@ -1605,34 +1613,36 @@ static int idpf_vport_queue_grp_alloc_all(struct idpf_vport *vport) /** * idpf_vport_queues_alloc - Allocate memory for all queues * @vport: virtual port + * @rsrc: pointer to queue and vector resources * * Allocate memory for queues associated with a vport. Returns 0 on success, * negative on failure. 
*/ -int idpf_vport_queues_alloc(struct idpf_vport *vport) +int idpf_vport_queues_alloc(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { int err; - err = idpf_vport_queue_grp_alloc_all(vport); + err = idpf_vport_queue_grp_alloc_all(vport, rsrc); if (err) goto err_out; - err = idpf_tx_desc_alloc_all(vport); + err = idpf_tx_desc_alloc_all(vport, rsrc); if (err) goto err_out; - err = idpf_rx_desc_alloc_all(vport); + err = idpf_rx_desc_alloc_all(vport, rsrc); if (err) goto err_out; - err = idpf_vport_init_fast_path_txqs(vport); + err = idpf_vport_init_fast_path_txqs(vport, rsrc); if (err) goto err_out; return 0; err_out: - idpf_vport_queues_rel(vport); + idpf_vport_queues_rel(vport, rsrc); return err; } @@ -2969,7 +2979,7 @@ netdev_tx_t idpf_tx_start(struct sk_buff *skb, struct net_device *netdev) return NETDEV_TX_OK; } - if (idpf_is_queue_model_split(vport->txq_model)) + if (idpf_is_queue_model_split(vport->dflt_qv_rsrc.txq_model)) return idpf_tx_splitq_frame(skb, tx_q); else return idpf_tx_singleq_frame(skb, tx_q); @@ -4203,19 +4213,19 @@ static int idpf_vport_splitq_napi_poll(struct napi_struct *napi, int budget) static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport, struct idpf_q_vec_rsrc *rsrc) { - bool split = idpf_is_queue_model_split(vport->rxq_model); - u16 num_txq_grp = vport->num_txq_grp; + bool split = idpf_is_queue_model_split(rsrc->rxq_model); + u16 num_txq_grp = rsrc->num_txq_grp; struct idpf_rxq_group *rx_qgrp; struct idpf_txq_group *tx_qgrp; u32 i, qv_idx, q_index; - for (i = 0, qv_idx = 0; i < vport->num_rxq_grp; i++) { + for (i = 0, qv_idx = 0; i < rsrc->num_rxq_grp; i++) { u16 num_rxq; if (qv_idx >= rsrc->num_q_vectors) qv_idx = 0; - rx_qgrp = &vport->rxq_grps[i]; + rx_qgrp = &rsrc->rxq_grps[i]; if (split) num_rxq = rx_qgrp->splitq.num_rxq_sets; else @@ -4238,7 +4248,7 @@ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport, } if (split) { - for (u32 j = 0; j < vport->num_bufqs_per_qgrp; j++) { + for (u32 j = 0; j < rsrc->num_bufqs_per_qgrp; j++) { struct idpf_buf_queue *bufq; bufq = &rx_qgrp->splitq.bufq_sets[j].bufq; @@ -4252,7 +4262,7 @@ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport, qv_idx++; } - split = idpf_is_queue_model_split(vport->txq_model); + split = idpf_is_queue_model_split(rsrc->txq_model); for (i = 0, qv_idx = 0; i < num_txq_grp; i++) { u16 num_txq; @@ -4260,7 +4270,7 @@ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport, if (qv_idx >= rsrc->num_q_vectors) qv_idx = 0; - tx_qgrp = &vport->txq_grps[i]; + tx_qgrp = &rsrc->txq_grps[i]; num_txq = tx_qgrp->num_txq; for (u32 j = 0; j < num_txq; j++) { @@ -4331,7 +4341,7 @@ static void idpf_vport_intr_napi_add_all(struct idpf_vport *vport, int irq_num; u16 qv_idx; - if (idpf_is_queue_model_split(vport->txq_model)) + if (idpf_is_queue_model_split(rsrc->txq_model)) napi_poll = idpf_vport_splitq_napi_poll; else napi_poll = idpf_vport_singleq_napi_poll; @@ -4368,14 +4378,14 @@ int idpf_vport_intr_alloc(struct idpf_vport *vport, if (!rsrc->q_vectors) return -ENOMEM; - txqs_per_vector = DIV_ROUND_UP(vport->num_txq_grp, + txqs_per_vector = DIV_ROUND_UP(rsrc->num_txq_grp, rsrc->num_q_vectors); - rxqs_per_vector = DIV_ROUND_UP(vport->num_rxq_grp, + rxqs_per_vector = DIV_ROUND_UP(rsrc->num_rxq_grp, rsrc->num_q_vectors); - bufqs_per_vector = vport->num_bufqs_per_qgrp * - DIV_ROUND_UP(vport->num_rxq_grp, + bufqs_per_vector = rsrc->num_bufqs_per_qgrp * + DIV_ROUND_UP(rsrc->num_rxq_grp, rsrc->num_q_vectors); - complqs_per_vector = 
DIV_ROUND_UP(vport->num_txq_grp, + complqs_per_vector = DIV_ROUND_UP(rsrc->num_txq_grp, rsrc->num_q_vectors); for (u16 v_idx = 0; v_idx < rsrc->num_q_vectors; v_idx++) { @@ -4400,7 +4410,7 @@ int idpf_vport_intr_alloc(struct idpf_vport *vport, if (!q_vector->rx) goto error; - if (!idpf_is_queue_model_split(vport->rxq_model)) + if (!idpf_is_queue_model_split(rsrc->rxq_model)) continue; q_vector->bufq = kcalloc(bufqs_per_vector, @@ -4487,8 +4497,8 @@ int idpf_config_rss(struct idpf_vport *vport) */ static void idpf_fill_dflt_rss_lut(struct idpf_vport *vport) { + u16 num_active_rxq = vport->dflt_qv_rsrc.num_rxq; struct idpf_adapter *adapter = vport->adapter; - u16 num_active_rxq = vport->num_rxq; struct idpf_rss_data *rss_data; int i; diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h index 1d67ec1f1b3f..203710222771 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -1012,14 +1012,18 @@ static inline void idpf_vport_intr_set_wb_on_itr(struct idpf_q_vector *q_vector) int idpf_vport_singleq_napi_poll(struct napi_struct *napi, int budget); void idpf_vport_init_num_qs(struct idpf_vport *vport, - struct virtchnl2_create_vport *vport_msg); -void idpf_vport_calc_num_q_desc(struct idpf_vport *vport); + struct virtchnl2_create_vport *vport_msg, + struct idpf_q_vec_rsrc *rsrc); +void idpf_vport_calc_num_q_desc(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc); int idpf_vport_calc_total_qs(struct idpf_adapter *adapter, u16 vport_index, struct virtchnl2_create_vport *vport_msg, struct idpf_vport_max_q *max_q); -void idpf_vport_calc_num_q_groups(struct idpf_vport *vport); -int idpf_vport_queues_alloc(struct idpf_vport *vport); -void idpf_vport_queues_rel(struct idpf_vport *vport); +void idpf_vport_calc_num_q_groups(struct idpf_q_vec_rsrc *rsrc); +int idpf_vport_queues_alloc(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc); +void idpf_vport_queues_rel(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc); void idpf_vport_intr_rel(struct idpf_q_vec_rsrc *rsrc); int idpf_vport_intr_alloc(struct idpf_vport *vport, struct idpf_q_vec_rsrc *rsrc); @@ -1033,7 +1037,7 @@ void idpf_vport_intr_ena(struct idpf_vport *vport, int idpf_config_rss(struct idpf_vport *vport); int idpf_init_rss(struct idpf_vport *vport); void idpf_deinit_rss(struct idpf_vport *vport); -int idpf_rx_bufs_init_all(struct idpf_vport *vport); +int idpf_rx_bufs_init_all(struct idpf_q_vec_rsrc *rsrc); void idpf_rx_add_frag(struct idpf_rx_buf *rx_buf, struct sk_buff *skb, unsigned int size); struct sk_buff *idpf_rx_build_skb(const struct libeth_fqe *buf, u32 size); diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c index 8544c2963763..f46471da833b 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c @@ -1149,13 +1149,15 @@ static int idpf_vport_get_q_reg(u32 *reg_vals, int num_regs, u32 q_type, /** * __idpf_queue_reg_init - initialize queue registers * @vport: virtual port structure + * @rsrc: pointer to queue and vector resources * @reg_vals: registers we are initializing * @num_regs: how many registers there are in total * @q_type: queue model * * Return number of queues that are initialized */ -static int __idpf_queue_reg_init(struct idpf_vport *vport, u32 *reg_vals, +static int __idpf_queue_reg_init(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc, u32 *reg_vals, int num_regs, u32 
q_type) { struct idpf_adapter *adapter = vport->adapter; @@ -1163,8 +1165,8 @@ static int __idpf_queue_reg_init(struct idpf_vport *vport, u32 *reg_vals, switch (q_type) { case VIRTCHNL2_QUEUE_TYPE_TX: - for (i = 0; i < vport->num_txq_grp; i++) { - struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i]; + for (i = 0; i < rsrc->num_txq_grp; i++) { + struct idpf_txq_group *tx_qgrp = &rsrc->txq_grps[i]; for (j = 0; j < tx_qgrp->num_txq && k < num_regs; j++, k++) tx_qgrp->txqs[j]->tail = @@ -1172,8 +1174,8 @@ static int __idpf_queue_reg_init(struct idpf_vport *vport, u32 *reg_vals, } break; case VIRTCHNL2_QUEUE_TYPE_RX: - for (i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + for (i = 0; i < rsrc->num_rxq_grp; i++) { + struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i]; u16 num_rxq = rx_qgrp->singleq.num_rxq; for (j = 0; j < num_rxq && k < num_regs; j++, k++) { @@ -1186,9 +1188,9 @@ static int __idpf_queue_reg_init(struct idpf_vport *vport, u32 *reg_vals, } break; case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER: - for (i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i]; - u8 num_bufqs = vport->num_bufqs_per_qgrp; + for (i = 0; i < rsrc->num_rxq_grp; i++) { + struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i]; + u8 num_bufqs = rsrc->num_bufqs_per_qgrp; for (j = 0; j < num_bufqs && k < num_regs; j++, k++) { struct idpf_buf_queue *q; @@ -1209,11 +1211,13 @@ static int __idpf_queue_reg_init(struct idpf_vport *vport, u32 *reg_vals, /** * idpf_queue_reg_init - initialize queue registers * @vport: virtual port structure + * @rsrc: pointer to queue and vector resources * @chunks: queue registers received over mailbox * * Return 0 on success, negative on failure */ int idpf_queue_reg_init(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc, struct idpf_queue_id_reg_info *chunks) { int num_regs, ret = 0; @@ -1228,14 +1232,14 @@ int idpf_queue_reg_init(struct idpf_vport *vport, num_regs = idpf_vport_get_q_reg(reg_vals, IDPF_LARGE_MAX_Q, VIRTCHNL2_QUEUE_TYPE_TX, chunks); - if (num_regs < vport->num_txq) { + if (num_regs < rsrc->num_txq) { ret = -EINVAL; goto free_reg_vals; } - num_regs = __idpf_queue_reg_init(vport, reg_vals, num_regs, + num_regs = __idpf_queue_reg_init(vport, rsrc, reg_vals, num_regs, VIRTCHNL2_QUEUE_TYPE_TX); - if (num_regs < vport->num_txq) { + if (num_regs < rsrc->num_txq) { ret = -EINVAL; goto free_reg_vals; } @@ -1243,18 +1247,18 @@ int idpf_queue_reg_init(struct idpf_vport *vport, /* Initialize Rx/buffer queue tail register address based on Rx queue * model */ - if (idpf_is_queue_model_split(vport->rxq_model)) { + if (idpf_is_queue_model_split(rsrc->rxq_model)) { num_regs = idpf_vport_get_q_reg(reg_vals, IDPF_LARGE_MAX_Q, VIRTCHNL2_QUEUE_TYPE_RX_BUFFER, chunks); - if (num_regs < vport->num_bufq) { + if (num_regs < rsrc->num_bufq) { ret = -EINVAL; goto free_reg_vals; } - num_regs = __idpf_queue_reg_init(vport, reg_vals, num_regs, + num_regs = __idpf_queue_reg_init(vport, rsrc, reg_vals, num_regs, VIRTCHNL2_QUEUE_TYPE_RX_BUFFER); - if (num_regs < vport->num_bufq) { + if (num_regs < rsrc->num_bufq) { ret = -EINVAL; goto free_reg_vals; } @@ -1262,14 +1266,14 @@ int idpf_queue_reg_init(struct idpf_vport *vport, num_regs = idpf_vport_get_q_reg(reg_vals, IDPF_LARGE_MAX_Q, VIRTCHNL2_QUEUE_TYPE_RX, chunks); - if (num_regs < vport->num_rxq) { + if (num_regs < rsrc->num_rxq) { ret = -EINVAL; goto free_reg_vals; } - num_regs = __idpf_queue_reg_init(vport, reg_vals, num_regs, + num_regs = 
__idpf_queue_reg_init(vport, rsrc, reg_vals, num_regs, VIRTCHNL2_QUEUE_TYPE_RX); - if (num_regs < vport->num_rxq) { + if (num_regs < rsrc->num_rxq) { ret = -EINVAL; goto free_reg_vals; } @@ -1368,6 +1372,7 @@ int idpf_send_create_vport_msg(struct idpf_adapter *adapter, */ int idpf_check_supported_desc_ids(struct idpf_vport *vport) { + struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc; struct idpf_adapter *adapter = vport->adapter; struct virtchnl2_create_vport *vport_msg; u64 rx_desc_ids, tx_desc_ids; @@ -1384,17 +1389,17 @@ int idpf_check_supported_desc_ids(struct idpf_vport *vport) rx_desc_ids = le64_to_cpu(vport_msg->rx_desc_ids); tx_desc_ids = le64_to_cpu(vport_msg->tx_desc_ids); - if (idpf_is_queue_model_split(vport->rxq_model)) { + if (idpf_is_queue_model_split(rsrc->rxq_model)) { if (!(rx_desc_ids & VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M)) { dev_info(&adapter->pdev->dev, "Minimum RX descriptor support not provided, using the default\n"); vport_msg->rx_desc_ids = cpu_to_le64(VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M); } } else { if (!(rx_desc_ids & VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M)) - vport->base_rxd = true; + rsrc->base_rxd = true; } - if (!idpf_is_queue_model_split(vport->txq_model)) + if (!idpf_is_queue_model_split(rsrc->txq_model)) return 0; if ((tx_desc_ids & MIN_SUPPORT_TXDID) != MIN_SUPPORT_TXDID) { @@ -1480,11 +1485,13 @@ int idpf_send_disable_vport_msg(struct idpf_vport *vport) /** * idpf_send_config_tx_queues_msg - Send virtchnl config tx queues message * @vport: virtual port data structure + * @rsrc: pointer to queue and vector resources * * Send config tx queues virtchnl message. Returns 0 on success, negative on * failure. */ -static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport) +static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { struct virtchnl2_config_tx_queues *ctq __free(kfree) = NULL; struct virtchnl2_txq_info *qi __free(kfree) = NULL; @@ -1492,30 +1499,30 @@ static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport) u32 config_sz, chunk_sz, buf_sz; int totqs, num_msgs, num_chunks; ssize_t reply_sz; - int i, k = 0; + int k = 0; - totqs = vport->num_txq + vport->num_complq; + totqs = rsrc->num_txq + rsrc->num_complq; qi = kcalloc(totqs, sizeof(struct virtchnl2_txq_info), GFP_KERNEL); if (!qi) return -ENOMEM; /* Populate the queue info buffer with all queue context info */ - for (i = 0; i < vport->num_txq_grp; i++) { - struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i]; - int j, sched_mode; + for (u16 i = 0; i < rsrc->num_txq_grp; i++) { + struct idpf_txq_group *tx_qgrp = &rsrc->txq_grps[i]; + int sched_mode; - for (j = 0; j < tx_qgrp->num_txq; j++, k++) { + for (u16 j = 0; j < tx_qgrp->num_txq; j++, k++) { qi[k].queue_id = cpu_to_le32(tx_qgrp->txqs[j]->q_id); qi[k].model = - cpu_to_le16(vport->txq_model); + cpu_to_le16(rsrc->txq_model); qi[k].type = cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_TX); qi[k].ring_len = cpu_to_le16(tx_qgrp->txqs[j]->desc_count); qi[k].dma_ring_addr = cpu_to_le64(tx_qgrp->txqs[j]->dma); - if (idpf_is_queue_model_split(vport->txq_model)) { + if (idpf_is_queue_model_split(rsrc->txq_model)) { struct idpf_tx_queue *q = tx_qgrp->txqs[j]; qi[k].tx_compl_queue_id = @@ -1534,11 +1541,11 @@ static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport) } } - if (!idpf_is_queue_model_split(vport->txq_model)) + if (!idpf_is_queue_model_split(rsrc->txq_model)) continue; qi[k].queue_id = cpu_to_le32(tx_qgrp->complq->q_id); - qi[k].model = cpu_to_le16(vport->txq_model); + qi[k].model = 
cpu_to_le16(rsrc->txq_model); qi[k].type = cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION); qi[k].ring_len = cpu_to_le16(tx_qgrp->complq->desc_count); qi[k].dma_ring_addr = cpu_to_le64(tx_qgrp->complq->dma); @@ -1574,7 +1581,7 @@ static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport) xn_params.vc_op = VIRTCHNL2_OP_CONFIG_TX_QUEUES; xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC; - for (i = 0, k = 0; i < num_msgs; i++) { + for (u16 i = 0, k = 0; i < num_msgs; i++) { memset(ctq, 0, buf_sz); ctq->vport_id = cpu_to_le32(vport->vport_id); ctq->num_qinfo = cpu_to_le16(num_chunks); @@ -1599,11 +1606,13 @@ static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport) /** * idpf_send_config_rx_queues_msg - Send virtchnl config rx queues message * @vport: virtual port data structure + * @rsrc: pointer to queue and vector resources * * Send config rx queues virtchnl message. Returns 0 on success, negative on * failure. */ -static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport) +static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { struct virtchnl2_config_rx_queues *crq __free(kfree) = NULL; struct virtchnl2_rxq_info *qi __free(kfree) = NULL; @@ -1611,28 +1620,27 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport) u32 config_sz, chunk_sz, buf_sz; int totqs, num_msgs, num_chunks; ssize_t reply_sz; - int i, k = 0; + int k = 0; - totqs = vport->num_rxq + vport->num_bufq; + totqs = rsrc->num_rxq + rsrc->num_bufq; qi = kcalloc(totqs, sizeof(struct virtchnl2_rxq_info), GFP_KERNEL); if (!qi) return -ENOMEM; /* Populate the queue info buffer with all queue context info */ - for (i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + for (u16 i = 0; i < rsrc->num_rxq_grp; i++) { + struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i]; u16 num_rxq; - int j; - if (!idpf_is_queue_model_split(vport->rxq_model)) + if (!idpf_is_queue_model_split(rsrc->rxq_model)) goto setup_rxqs; - for (j = 0; j < vport->num_bufqs_per_qgrp; j++, k++) { + for (u8 j = 0; j < rsrc->num_bufqs_per_qgrp; j++, k++) { struct idpf_buf_queue *bufq = &rx_qgrp->splitq.bufq_sets[j].bufq; qi[k].queue_id = cpu_to_le32(bufq->q_id); - qi[k].model = cpu_to_le16(vport->rxq_model); + qi[k].model = cpu_to_le16(rsrc->rxq_model); qi[k].type = cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_RX_BUFFER); qi[k].desc_ids = cpu_to_le64(VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M); @@ -1647,16 +1655,16 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport) } setup_rxqs: - if (idpf_is_queue_model_split(vport->rxq_model)) + if (idpf_is_queue_model_split(rsrc->rxq_model)) num_rxq = rx_qgrp->splitq.num_rxq_sets; else num_rxq = rx_qgrp->singleq.num_rxq; - for (j = 0; j < num_rxq; j++, k++) { + for (u16 j = 0; j < num_rxq; j++, k++) { const struct idpf_bufq_set *sets; struct idpf_rx_queue *rxq; - if (!idpf_is_queue_model_split(vport->rxq_model)) { + if (!idpf_is_queue_model_split(rsrc->rxq_model)) { rxq = rx_qgrp->singleq.rxqs[j]; goto common_qi_fields; } @@ -1671,7 +1679,7 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport) rxq->rx_buf_size = sets[0].bufq.rx_buf_size; qi[k].rx_bufq1_id = cpu_to_le16(sets[0].bufq.q_id); - if (vport->num_bufqs_per_qgrp > IDPF_SINGLE_BUFQ_PER_RXQ_GRP) { + if (rsrc->num_bufqs_per_qgrp > IDPF_SINGLE_BUFQ_PER_RXQ_GRP) { qi[k].bufq2_ena = IDPF_BUFQ2_ENA; qi[k].rx_bufq2_id = cpu_to_le16(sets[1].bufq.q_id); @@ -1692,7 +1700,7 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport 
*vport) common_qi_fields: qi[k].queue_id = cpu_to_le32(rxq->q_id); - qi[k].model = cpu_to_le16(vport->rxq_model); + qi[k].model = cpu_to_le16(rsrc->rxq_model); qi[k].type = cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_RX); qi[k].ring_len = cpu_to_le16(rxq->desc_count); qi[k].dma_ring_addr = cpu_to_le64(rxq->dma); @@ -1726,7 +1734,7 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport) xn_params.vc_op = VIRTCHNL2_OP_CONFIG_RX_QUEUES; xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC; - for (i = 0, k = 0; i < num_msgs; i++) { + for (u16 i = 0, k = 0; i < num_msgs; i++) { memset(crq, 0, buf_sz); crq->vport_id = cpu_to_le32(vport->vport_id); crq->num_qinfo = cpu_to_le16(num_chunks); @@ -1798,12 +1806,15 @@ static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, * idpf_send_map_unmap_queue_vector_msg - Send virtchnl map or unmap queue * vector message * @vport: virtual port data structure + * @rsrc: pointer to queue and vector resources * @map: true for map and false for unmap * * Send map or unmap queue vector virtchnl message. Returns 0 on success, * negative on failure. */ -int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map) +int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc, + bool map) { struct virtchnl2_queue_vector_maps *vqvm __free(kfree) = NULL; struct virtchnl2_queue_vector *vqv __free(kfree) = NULL; @@ -1811,24 +1822,24 @@ int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map) u32 config_sz, chunk_sz, buf_sz; u32 num_msgs, num_chunks, num_q; ssize_t reply_sz; - int i, j, k = 0; + int k = 0; - num_q = vport->num_txq + vport->num_rxq; + num_q = rsrc->num_txq + rsrc->num_rxq; buf_sz = sizeof(struct virtchnl2_queue_vector) * num_q; vqv = kzalloc(buf_sz, GFP_KERNEL); if (!vqv) return -ENOMEM; - for (i = 0; i < vport->num_txq_grp; i++) { - struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i]; + for (u16 i = 0; i < rsrc->num_txq_grp; i++) { + struct idpf_txq_group *tx_qgrp = &rsrc->txq_grps[i]; - for (j = 0; j < tx_qgrp->num_txq; j++, k++) { + for (u16 j = 0; j < tx_qgrp->num_txq; j++, k++) { vqv[k].queue_type = cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_TX); vqv[k].queue_id = cpu_to_le32(tx_qgrp->txqs[j]->q_id); - if (idpf_is_queue_model_split(vport->txq_model)) { + if (idpf_is_queue_model_split(rsrc->txq_model)) { vqv[k].vector_id = cpu_to_le16(tx_qgrp->complq->q_vector->v_idx); vqv[k].itr_idx = @@ -1842,22 +1853,22 @@ int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map) } } - if (vport->num_txq != k) + if (rsrc->num_txq != k) return -EINVAL; - for (i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + for (u16 i = 0; i < rsrc->num_rxq_grp; i++) { + struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i]; u16 num_rxq; - if (idpf_is_queue_model_split(vport->rxq_model)) + if (idpf_is_queue_model_split(rsrc->rxq_model)) num_rxq = rx_qgrp->splitq.num_rxq_sets; else num_rxq = rx_qgrp->singleq.num_rxq; - for (j = 0; j < num_rxq; j++, k++) { + for (u16 j = 0; j < num_rxq; j++, k++) { struct idpf_rx_queue *rxq; - if (idpf_is_queue_model_split(vport->rxq_model)) + if (idpf_is_queue_model_split(rsrc->rxq_model)) rxq = &rx_qgrp->splitq.rxq_sets[j]->rxq; else rxq = rx_qgrp->singleq.rxqs[j]; @@ -1870,11 +1881,11 @@ int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map) } } - if (idpf_is_queue_model_split(vport->txq_model)) { - if (vport->num_rxq != k - vport->num_complq) + if 
(idpf_is_queue_model_split(rsrc->txq_model)) { + if (rsrc->num_rxq != k - rsrc->num_complq) return -EINVAL; } else { - if (vport->num_rxq != k - vport->num_txq) + if (rsrc->num_rxq != k - rsrc->num_txq) return -EINVAL; } @@ -1899,7 +1910,7 @@ int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map) xn_params.timeout_ms = IDPF_VC_XN_MIN_TIMEOUT_MSEC; } - for (i = 0, k = 0; i < num_msgs; i++) { + for (u16 i = 0, k = 0; i < num_msgs; i++) { memset(vqvm, 0, buf_sz); xn_params.send_buf.iov_base = vqvm; xn_params.send_buf.iov_len = buf_sz; @@ -2011,19 +2022,21 @@ int idpf_send_delete_queues_msg(struct idpf_vport *vport, /** * idpf_send_config_queues_msg - Send config queues virtchnl message * @vport: Virtual port private data structure + * @rsrc: pointer to queue and vector resources * * Will send config queues virtchnl message. Returns 0 on success, negative on * failure. */ -int idpf_send_config_queues_msg(struct idpf_vport *vport) +int idpf_send_config_queues_msg(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { int err; - err = idpf_send_config_tx_queues_msg(vport); + err = idpf_send_config_tx_queues_msg(vport, rsrc); if (err) return err; - return idpf_send_config_rx_queues_msg(vport); + return idpf_send_config_rx_queues_msg(vport, rsrc); } /** @@ -2478,12 +2491,14 @@ int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport) struct idpf_vc_xn_params xn_params = {}; u16 next_ptype_id = 0; ssize_t reply_sz; + bool is_splitq; int i, j, k; if (vport->rx_ptype_lkup) return 0; - if (idpf_is_queue_model_split(vport->rxq_model)) + is_splitq = idpf_is_queue_model_split(vport->dflt_qv_rsrc.rxq_model); + if (is_splitq) max_ptype = IDPF_RX_MAX_PTYPE; else max_ptype = IDPF_RX_MAX_BASE_PTYPE; @@ -2547,7 +2562,7 @@ int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport) IDPF_INVALID_PTYPE_ID) goto out; - if (idpf_is_queue_model_split(vport->rxq_model)) + if (is_splitq) k = le16_to_cpu(ptype->ptype_id_10); else k = ptype->ptype_id_8; @@ -3050,7 +3065,7 @@ int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport, int num_alloc_vecs; vec_info.num_curr_vecs = rsrc->num_q_vectors; - vec_info.num_req_vecs = max(vport->num_txq, vport->num_rxq); + vec_info.num_req_vecs = max(rsrc->num_txq, rsrc->num_rxq); vec_info.default_vport = vport->default_vport; vec_info.index = vport->idx; @@ -3103,8 +3118,8 @@ int idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q) vport_config->max_q.max_complq = max_q->max_complq; vport_config->max_q.max_bufq = max_q->max_bufq; - vport->txq_model = le16_to_cpu(vport_msg->txq_model); - vport->rxq_model = le16_to_cpu(vport_msg->rxq_model); + rsrc->txq_model = le16_to_cpu(vport_msg->txq_model); + rsrc->rxq_model = le16_to_cpu(vport_msg->rxq_model); vport->vport_type = le16_to_cpu(vport_msg->vport_type); vport->vport_id = le32_to_cpu(vport_msg->vport_id); @@ -3121,9 +3136,9 @@ int idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q) idpf_vport_set_hsplit(vport, ETHTOOL_TCP_DATA_SPLIT_ENABLED); - idpf_vport_init_num_qs(vport, vport_msg); - idpf_vport_calc_num_q_desc(vport); - idpf_vport_calc_num_q_groups(vport); + idpf_vport_init_num_qs(vport, vport_msg, rsrc); + idpf_vport_calc_num_q_desc(vport, rsrc); + idpf_vport_calc_num_q_groups(rsrc); idpf_vport_alloc_vec_indexes(vport, rsrc); vport->crc_enable = adapter->crc_enable; @@ -3232,6 +3247,7 @@ static int idpf_vport_get_queue_ids(u32 *qids, int num_qids, u16 q_type, /** * __idpf_vport_queue_ids_init - Initialize queue ids from Mailbox parameters * @vport: virtual port 
for which the queues ids are initialized + * @rsrc: pointer to queue and vector resources * @qids: queue ids * @num_qids: number of queue ids * @q_type: type of queue @@ -3240,6 +3256,7 @@ static int idpf_vport_get_queue_ids(u32 *qids, int num_qids, u16 q_type, * parameters. Returns number of queue ids initialized. */ static int __idpf_vport_queue_ids_init(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc, const u32 *qids, int num_qids, u32 q_type) @@ -3248,19 +3265,19 @@ static int __idpf_vport_queue_ids_init(struct idpf_vport *vport, switch (q_type) { case VIRTCHNL2_QUEUE_TYPE_TX: - for (i = 0; i < vport->num_txq_grp; i++) { - struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i]; + for (i = 0; i < rsrc->num_txq_grp; i++) { + struct idpf_txq_group *tx_qgrp = &rsrc->txq_grps[i]; for (j = 0; j < tx_qgrp->num_txq && k < num_qids; j++, k++) tx_qgrp->txqs[j]->q_id = qids[k]; } break; case VIRTCHNL2_QUEUE_TYPE_RX: - for (i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + for (i = 0; i < rsrc->num_rxq_grp; i++) { + struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i]; u16 num_rxq; - if (idpf_is_queue_model_split(vport->rxq_model)) + if (idpf_is_queue_model_split(rsrc->rxq_model)) num_rxq = rx_qgrp->splitq.num_rxq_sets; else num_rxq = rx_qgrp->singleq.num_rxq; @@ -3268,7 +3285,7 @@ static int __idpf_vport_queue_ids_init(struct idpf_vport *vport, for (j = 0; j < num_rxq && k < num_qids; j++, k++) { struct idpf_rx_queue *q; - if (idpf_is_queue_model_split(vport->rxq_model)) + if (idpf_is_queue_model_split(rsrc->rxq_model)) q = &rx_qgrp->splitq.rxq_sets[j]->rxq; else q = rx_qgrp->singleq.rxqs[j]; @@ -3277,16 +3294,16 @@ static int __idpf_vport_queue_ids_init(struct idpf_vport *vport, } break; case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION: - for (i = 0; i < vport->num_txq_grp && k < num_qids; i++, k++) { - struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i]; + for (i = 0; i < rsrc->num_txq_grp && k < num_qids; i++, k++) { + struct idpf_txq_group *tx_qgrp = &rsrc->txq_grps[i]; tx_qgrp->complq->q_id = qids[k]; } break; case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER: - for (i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i]; - u8 num_bufqs = vport->num_bufqs_per_qgrp; + for (i = 0; i < rsrc->num_rxq_grp; i++) { + struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i]; + u8 num_bufqs = rsrc->num_bufqs_per_qgrp; for (j = 0; j < num_bufqs && k < num_qids; j++, k++) { struct idpf_buf_queue *q; @@ -3306,12 +3323,14 @@ static int __idpf_vport_queue_ids_init(struct idpf_vport *vport, /** * idpf_vport_queue_ids_init - Initialize queue ids from Mailbox parameters * @vport: virtual port for which the queues ids are initialized + * @rsrc: pointer to queue and vector resources * @chunks: queue ids received over mailbox * * Will initialize all queue ids with ids received as mailbox parameters. * Returns 0 on success, negative if all the queues are not initialized. 
*/ int idpf_vport_queue_ids_init(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc, struct idpf_queue_id_reg_info *chunks) { int num_ids, err = 0; @@ -3325,13 +3344,13 @@ int idpf_vport_queue_ids_init(struct idpf_vport *vport, num_ids = idpf_vport_get_queue_ids(qids, IDPF_MAX_QIDS, VIRTCHNL2_QUEUE_TYPE_TX, chunks); - if (num_ids < vport->num_txq) { + if (num_ids < rsrc->num_txq) { err = -EINVAL; goto mem_rel; } - num_ids = __idpf_vport_queue_ids_init(vport, qids, num_ids, + num_ids = __idpf_vport_queue_ids_init(vport, rsrc, qids, num_ids, VIRTCHNL2_QUEUE_TYPE_TX); - if (num_ids < vport->num_txq) { + if (num_ids < rsrc->num_txq) { err = -EINVAL; goto mem_rel; } @@ -3339,44 +3358,46 @@ int idpf_vport_queue_ids_init(struct idpf_vport *vport, num_ids = idpf_vport_get_queue_ids(qids, IDPF_MAX_QIDS, VIRTCHNL2_QUEUE_TYPE_RX, chunks); - if (num_ids < vport->num_rxq) { + if (num_ids < rsrc->num_rxq) { err = -EINVAL; goto mem_rel; } - num_ids = __idpf_vport_queue_ids_init(vport, qids, num_ids, + num_ids = __idpf_vport_queue_ids_init(vport, rsrc, qids, num_ids, VIRTCHNL2_QUEUE_TYPE_RX); - if (num_ids < vport->num_rxq) { + if (num_ids < rsrc->num_rxq) { err = -EINVAL; goto mem_rel; } - if (!idpf_is_queue_model_split(vport->txq_model)) + if (!idpf_is_queue_model_split(rsrc->txq_model)) goto check_rxq; q_type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION; num_ids = idpf_vport_get_queue_ids(qids, IDPF_MAX_QIDS, q_type, chunks); - if (num_ids < vport->num_complq) { + if (num_ids < rsrc->num_complq) { err = -EINVAL; goto mem_rel; } - num_ids = __idpf_vport_queue_ids_init(vport, qids, num_ids, q_type); - if (num_ids < vport->num_complq) { + num_ids = __idpf_vport_queue_ids_init(vport, rsrc, qids, + num_ids, q_type); + if (num_ids < rsrc->num_complq) { err = -EINVAL; goto mem_rel; } check_rxq: - if (!idpf_is_queue_model_split(vport->rxq_model)) + if (!idpf_is_queue_model_split(rsrc->rxq_model)) goto mem_rel; q_type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER; num_ids = idpf_vport_get_queue_ids(qids, IDPF_MAX_QIDS, q_type, chunks); - if (num_ids < vport->num_bufq) { + if (num_ids < rsrc->num_bufq) { err = -EINVAL; goto mem_rel; } - num_ids = __idpf_vport_queue_ids_init(vport, qids, num_ids, q_type); - if (num_ids < vport->num_bufq) + num_ids = __idpf_vport_queue_ids_init(vport, rsrc, qids, + num_ids, q_type); + if (num_ids < rsrc->num_bufq) err = -EINVAL; mem_rel: @@ -3388,23 +3409,24 @@ int idpf_vport_queue_ids_init(struct idpf_vport *vport, /** * idpf_vport_adjust_qs - Adjust to new requested queues * @vport: virtual port data struct + * @rsrc: pointer to queue and vector resources * * Renegotiate queues. Returns 0 on success, negative on failure. 
*/ -int idpf_vport_adjust_qs(struct idpf_vport *vport) +int idpf_vport_adjust_qs(struct idpf_vport *vport, struct idpf_q_vec_rsrc *rsrc) { struct virtchnl2_create_vport vport_msg; int err; - vport_msg.txq_model = cpu_to_le16(vport->txq_model); - vport_msg.rxq_model = cpu_to_le16(vport->rxq_model); + vport_msg.txq_model = cpu_to_le16(rsrc->txq_model); + vport_msg.rxq_model = cpu_to_le16(rsrc->rxq_model); err = idpf_vport_calc_total_qs(vport->adapter, vport->idx, &vport_msg, NULL); if (err) return err; - idpf_vport_init_num_qs(vport, &vport_msg); - idpf_vport_calc_num_q_groups(vport); + idpf_vport_init_num_qs(vport, &vport_msg, rsrc); + idpf_vport_calc_num_q_groups(rsrc); return 0; } diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h index 048b1653dfcd..ef64ca98b1e1 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h @@ -103,8 +103,10 @@ void idpf_vc_core_deinit(struct idpf_adapter *adapter); int idpf_get_reg_intr_vecs(struct idpf_vport *vport, struct idpf_vec_regs *reg_vals); int idpf_queue_reg_init(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc, struct idpf_queue_id_reg_info *chunks); int idpf_vport_queue_ids_init(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc, struct idpf_queue_id_reg_info *chunks); bool idpf_vport_is_cap_ena(struct idpf_vport *vport, u16 flag); @@ -125,7 +127,8 @@ int idpf_send_destroy_vport_msg(struct idpf_vport *vport); int idpf_send_enable_vport_msg(struct idpf_vport *vport); int idpf_send_disable_vport_msg(struct idpf_vport *vport); -int idpf_vport_adjust_qs(struct idpf_vport *vport); +int idpf_vport_adjust_qs(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc); int idpf_vport_alloc_max_qs(struct idpf_adapter *adapter, struct idpf_vport_max_q *max_q); void idpf_vport_dealloc_max_qs(struct idpf_adapter *adapter, @@ -139,7 +142,8 @@ int idpf_send_enable_queues_msg(struct idpf_vport *vport, int idpf_send_disable_queues_msg(struct idpf_vport *vport, struct idpf_q_vec_rsrc *rsrc, struct idpf_queue_id_reg_info *chunks); -int idpf_send_config_queues_msg(struct idpf_vport *vport); +int idpf_send_config_queues_msg(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc); int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport, struct idpf_q_vec_rsrc *rsrc); @@ -148,7 +152,9 @@ int idpf_get_vec_ids(struct idpf_adapter *adapter, struct virtchnl2_vector_chunks *chunks); int idpf_send_alloc_vectors_msg(struct idpf_adapter *adapter, u16 num_vectors); int idpf_send_dealloc_vectors_msg(struct idpf_adapter *adapter); -int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map); +int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc, + bool map); int idpf_add_del_mac_filters(struct idpf_vport *vport, struct idpf_netdev_priv *np,
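In short, after this patch the queue and vector bookkeeping lives in struct idpf_q_vec_rsrc and the helpers above take that handle explicitly instead of digging the fields out of the vport. A minimal sketch of the resulting call pattern, using only signatures introduced by the patch; the surrounding caller and its error handling are illustrative, not lifted from the series:

	/* Sketch only: the per-vport default resource group is named
	 * explicitly and threaded through each helper.
	 */
	struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc;
	int err;

	err = idpf_vport_queues_alloc(vport, rsrc);
	if (err)
		return err;

	err = idpf_send_config_queues_msg(vport, rsrc);
	if (err)
		idpf_vport_queues_rel(vport, rsrc);

Keeping the rsrc parameter separate from the vport presumably leaves room for additional resource groups per vport later without another round of signature churn.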
From patchwork Thu May 8 21:50:09 2025
X-Patchwork-Submitter: Pavan Kumar Linga
X-Patchwork-Id: 2083149
X-Patchwork-Delegate: anthony.l.nguyen@intel.com
From: Pavan Kumar Linga
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, milena.olech@intel.com, anton.nadezhdin@intel.com, Pavan Kumar Linga
Date: Thu, 8 May 2025 14:50:09 -0700
Message-ID: <20250508215013.32668-6-pavan.kumar.linga@intel.com>
In-Reply-To: <20250508215013.32668-1-pavan.kumar.linga@intel.com>
References: <20250508215013.32668-1-pavan.kumar.linga@intel.com>
Subject: [Intel-wired-lan] [PATCH iwl-next v4 5/9] idpf: reshuffle idpf_vport struct members to avoid holes

The previous refactor, which moved queue and vector resources out of the idpf_vport structure, created a few holes. Reshuffle the existing members to avoid holes as much as possible.

Reviewed-by: Anton Nadezhdin
Signed-off-by: Pavan Kumar Linga
Tested-by: Samuel Salin
---
 drivers/net/ethernet/intel/idpf/idpf.h | 27 +++++++++++++-------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h
index 8e13cf29dec7..188bd8364080 100644
--- a/drivers/net/ethernet/intel/idpf/idpf.h
+++ b/drivers/net/ethernet/intel/idpf/idpf.h
@@ -325,24 +325,24 @@ struct idpf_q_vec_rsrc {
 /**
  * struct idpf_vport - Handle for netdevices and queue resources
  * @dflt_qv_rsrc: contains default queue and vector resources
- * @num_txq: Number of allocated TX queues
- * @compln_clean_budget: Work budget for completion clean
  * @txqs: Used only in hotpath to get to the right queue very fast
- * @crc_enable: Enable CRC insertion offload
- * @rx_ptype_lkup: Lookup table for ptypes on RX
  * @adapter: back pointer to associated adapter
  * @netdev: Associated net_device. Each vport should have one and only one
  *	    associated netdev.
  * @flags: See enum idpf_vport_flags
- * @vport_type: Default SRIOV, SIOV, etc.
+ * @compln_clean_budget: Work budget for completion clean
  * @vport_id: Device given vport identifier
+ * @vport_type: Default SRIOV, SIOV, etc.
  * @idx: Software index in adapter vports struct
- * @default_vport: Use this vport if one isn't specified
+ * @num_txq: Number of allocated TX queues
  * @max_mtu: device given max possible MTU
  * @default_mac_addr: device will give a default MAC to use
  * @rx_itr_profile: RX profiles for Dynamic Interrupt Moderation
  * @tx_itr_profile: TX profiles for Dynamic Interrupt Moderation
+ * @rx_ptype_lkup: Lookup table for ptypes on RX
  * @port_stats: per port csum, header split, and other offload stats
+ * @default_vport: Use this vport if one isn't specified
+ * @crc_enable: Enable CRC insertion offload
  * @link_up: True if link is up
  * @sw_marker_wq: workqueue for marker packets
  * @tx_tstamp_caps: Capabilities negotiated for Tx timestamping
@@ -352,27 +352,26 @@ struct idpf_q_vec_rsrc {
  */
 struct idpf_vport {
 	struct idpf_q_vec_rsrc dflt_qv_rsrc;
-	u16 num_txq;
-	u32 compln_clean_budget;
 	struct idpf_tx_queue **txqs;
-	bool crc_enable;
-
-	struct libeth_rx_pt *rx_ptype_lkup;
 	struct idpf_adapter *adapter;
 	struct net_device *netdev;
 	DECLARE_BITMAP(flags, IDPF_VPORT_FLAGS_NBITS);
-	u16 vport_type;
+	u32 compln_clean_budget;
 	u32 vport_id;
+	u16 vport_type;
 	u16 idx;
-	bool default_vport;
+	u16 num_txq;
 	u16 max_mtu;
 	u8 default_mac_addr[ETH_ALEN];
 	u16 rx_itr_profile[IDPF_DIM_PROFILE_SLOTS];
 	u16 tx_itr_profile[IDPF_DIM_PROFILE_SLOTS];
-	struct idpf_port_stats port_stats;
+	struct libeth_rx_pt *rx_ptype_lkup;
+	struct idpf_port_stats port_stats;
+	bool default_vport;
+	bool crc_enable;
 	bool link_up;
 	wait_queue_head_t sw_marker_wq;
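As a quick illustration of the padding problem the reshuffle above addresses (a standalone sketch, not part of the patch; the struct names are made up, and real offsets can be confirmed with a tool such as pahole or with offsetof()): on a typical LP64 target, a bool sandwiched between pointers forces the compiler to insert a 7-byte hole so the next pointer stays 8-byte aligned. Sorting members roughly by alignment collapses those holes into a single short run of tail padding.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical layout: two bools interleaved with pointers.
 * Each bool is followed by a 7-byte hole so the next pointer
 * can be 8-byte aligned.
 */
struct holey {
	void *a;   /* offset  0 */
	bool  b1;  /* offset  8, then a 7-byte hole */
	void *c;   /* offset 16 */
	bool  b2;  /* offset 24, then 7 bytes of tail padding */
};                 /* sizeof == 32 */

/* Same members, sorted by alignment: the two holes become
 * 6 bytes of tail padding and the struct shrinks by 8 bytes.
 */
struct tidy {
	void *a;   /* offset  0 */
	void *c;   /* offset  8 */
	bool  b1;  /* offset 16 */
	bool  b2;  /* offset 17, then 6 bytes of tail padding */
};                 /* sizeof == 24 */

int main(void)
{
	printf("holey: %zu bytes, tidy: %zu bytes\n",
	       sizeof(struct holey), sizeof(struct tidy));
	return 0;
}

The same reasoning drives the member ordering chosen for idpf_vport in the diff above: pointers and u32s first, then u16 members, with the bools packed together at the end.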
From patchwork Thu May 8 21:50:10 2025
X-Patchwork-Submitter: Pavan Kumar Linga
X-Patchwork-Id: 2083147
X-Patchwork-Delegate: anthony.l.nguyen@intel.com
From: Pavan Kumar Linga
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, milena.olech@intel.com, anton.nadezhdin@intel.com, Pavan Kumar Linga
Date: Thu, 8 May 2025 14:50:10 -0700
Message-ID: <20250508215013.32668-7-pavan.kumar.linga@intel.com>
In-Reply-To: <20250508215013.32668-1-pavan.kumar.linga@intel.com>
References: <20250508215013.32668-1-pavan.kumar.linga@intel.com>
Subject: [Intel-wired-lan] [PATCH iwl-next v4 6/9] idpf: add rss_data field to RSS function parameters

Retrieve the rss_data field of the vport just once and pass it to the RSS-related functions instead of looking it up again in each function. While at it, update s/rss/RSS in the RSS function doc comments.

Reviewed-by: Anton Nadezhdin
Signed-off-by: Pavan Kumar Linga
Tested-by: Samuel Salin
---
 drivers/net/ethernet/intel/idpf/idpf.h          |  1 +
 .../net/ethernet/intel/idpf/idpf_ethtool.c      |  2 +-
 drivers/net/ethernet/intel/idpf/idpf_lib.c      | 16 +++++----
 drivers/net/ethernet/intel/idpf/idpf_txrx.c     | 34 +++++++------------
 drivers/net/ethernet/intel/idpf/idpf_txrx.h     |  6 ++--
 .../net/ethernet/intel/idpf/idpf_virtchnl.c     | 24 ++++++-------
 .../net/ethernet/intel/idpf/idpf_virtchnl.h     |  8 +++--
 7 files changed, 45 insertions(+), 46 deletions(-)

diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h
index 188bd8364080..e53a43d5c867 100644
--- a/drivers/net/ethernet/intel/idpf/idpf.h
+++ b/drivers/net/ethernet/intel/idpf/idpf.h
@@ -9,6 +9,7 @@ struct idpf_adapter;
 struct idpf_vport;
 struct idpf_vport_max_q;
 struct idpf_q_vec_rsrc;
+struct idpf_rss_data;

 #include
 #include
diff --git a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
index 607ec4462031..ba54c271b775 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
@@ -453,7 +453,7 @@ static int idpf_set_rxfh(struct net_device *netdev,
 			rss_data->rss_lut[lut] = rxfh->indir[lut];
 	}

-	err = idpf_config_rss(vport);
+	err = idpf_config_rss(vport, rss_data);

 unlock_mutex:
 	idpf_vport_ctrl_unlock(netdev);
diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
index 7d97990fd626..153174d3d51d 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
@@ -934,8 +934,8 @@ static void idpf_vport_rel(struct idpf_vport *vport)
 	u16 idx = vport->idx;

 	vport_config = adapter->vport_config[vport->idx];
-	idpf_deinit_rss(vport);
 	rss_data = &vport_config->user_config.rss_data;
+	idpf_deinit_rss(rss_data);
 	kfree(rss_data->rss_key);
 	rss_data->rss_key = NULL;

@@ -1322,6 +1322,7 @@ static int idpf_vport_open(struct idpf_vport *vport)
 	struct idpf_adapter *adapter = vport->adapter;
 	struct idpf_vport_config *vport_config;
 	struct idpf_queue_id_reg_info *chunks;
+	struct idpf_rss_data *rss_data;
 	int err;

 	if (np->state != __IDPF_VPORT_DOWN)
@@ -1406,10 +1407,11 @@ static int idpf_vport_open(struct idpf_vport *vport)

 	idpf_restore_features(vport);

-	if (vport_config->user_config.rss_data.rss_lut)
-		err = idpf_config_rss(vport);
+	rss_data = &vport_config->user_config.rss_data;
+	if (rss_data->rss_lut)
+		err = idpf_config_rss(vport, rss_data);
 	else
-		err = idpf_init_rss(vport);
+		err = idpf_init_rss(vport, rss_data);
 	if (err) {
 		dev_err(&adapter->pdev->dev, "Failed to initialize RSS for vport %u: %d\n",
 			vport->vport_id, err);
@@ -1426,7 +1428,7 @@ static int idpf_vport_open(struct idpf_vport *vport)
 	return 0;

 deinit_rss:
-	idpf_deinit_rss(vport);
+	idpf_deinit_rss(rss_data);
 disable_vport:
 	idpf_send_disable_vport_msg(vport);
 disable_queues:
@@ -1903,7 +1905,7 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport,
 		idpf_vport_stop(vport);
 	}

-	idpf_deinit_rss(vport);
+	idpf_deinit_rss(&vport_config->user_config.rss_data);
 	/* We're passing in vport here because we need its wait_queue
 	 * to send a message and it should be getting all the vport
 	 * config data out of the adapter but we need to be careful not
@@ -2100,7 +2102,7 @@ static int idpf_vport_manage_rss_lut(struct idpf_vport *vport)
 		memset(rss_data->rss_lut, 0, lut_size);
 	}

-	return idpf_config_rss(vport);
+	return idpf_config_rss(vport, rss_data);
 }

 /**
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index 56793be3953f..addaab100862 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -4477,33 +4477,32 @@ void idpf_vport_intr_ena(struct idpf_vport *vport, struct idpf_q_vec_rsrc *rsrc)
 /**
  * idpf_config_rss - Send virtchnl messages to configure RSS
  * @vport: virtual port
+ * @rss_data: pointer to RSS key and lut info
  *
  * Return 0 on success, negative on failure
  */
-int idpf_config_rss(struct idpf_vport *vport)
+int idpf_config_rss(struct idpf_vport *vport, struct idpf_rss_data *rss_data)
 {
 	int err;

-	err = idpf_send_get_set_rss_key_msg(vport, false);
+	err = idpf_send_get_set_rss_key_msg(vport, rss_data, false);
 	if (err)
 		return err;

-	return idpf_send_get_set_rss_lut_msg(vport, false);
+	return idpf_send_get_set_rss_lut_msg(vport, rss_data, false);
 }

 /**
  * idpf_fill_dflt_rss_lut - Fill the indirection table with the default values
  * @vport: virtual port structure
+ * @rss_data: pointer to RSS key and lut info
  */
-static void idpf_fill_dflt_rss_lut(struct idpf_vport *vport)
+static void idpf_fill_dflt_rss_lut(struct idpf_vport *vport,
+				   struct idpf_rss_data *rss_data)
 {
 	u16 num_active_rxq = vport->dflt_qv_rsrc.num_rxq;
-	struct idpf_adapter *adapter = vport->adapter;
-	struct idpf_rss_data *rss_data;
 	int i;

-	rss_data = &adapter->vport_config[vport->idx]->user_config.rss_data;
-
 	for (i = 0; i < rss_data->rss_lut_size; i++) {
 		rss_data->rss_lut[i] = i % num_active_rxq;
 		rss_data->cached_lut[i] = rss_data->rss_lut[i];
@@ -4513,17 +4512,14 @@ static void idpf_fill_dflt_rss_lut(struct idpf_vport *vport)
 /**
  * idpf_init_rss - Allocate and initialize RSS resources
  * @vport: virtual port
+ * @rss_data: pointer to RSS key and lut info
  *
  * Return 0 on success, negative on failure
  */
-int idpf_init_rss(struct idpf_vport *vport)
+int idpf_init_rss(struct idpf_vport *vport, struct idpf_rss_data *rss_data)
 {
-	struct idpf_adapter *adapter = vport->adapter;
-	struct idpf_rss_data *rss_data;
 	u32 lut_size;

-	rss_data = &adapter->vport_config[vport->idx]->user_config.rss_data;
-
 	lut_size = rss_data->rss_lut_size * sizeof(u32);
 	rss_data->rss_lut = kzalloc(lut_size, GFP_KERNEL);
 	if (!rss_data->rss_lut)
@@ -4538,21 +4534,17 @@
 	}

 	/* Fill the default RSS lut values */
-	idpf_fill_dflt_rss_lut(vport);
+	idpf_fill_dflt_rss_lut(vport, rss_data);

-	return idpf_config_rss(vport);
+	return idpf_config_rss(vport, rss_data);
 }

 /**
  * idpf_deinit_rss - Release RSS resources
- * @vport: virtual port
+ * @rss_data: pointer to RSS key and lut info
  */
-void idpf_deinit_rss(struct idpf_vport *vport)
+void idpf_deinit_rss(struct idpf_rss_data *rss_data)
 {
-	struct idpf_adapter *adapter = vport->adapter;
-	struct idpf_rss_data *rss_data;
-
-	rss_data =
-		&adapter->vport_config[vport->idx]->user_config.rss_data;
 	kfree(rss_data->cached_lut);
 	rss_data->cached_lut = NULL;
 	kfree(rss_data->rss_lut);
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
index 203710222771..e52c5033b25b 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
@@ -1034,9 +1034,9 @@ int idpf_vport_intr_init(struct idpf_vport *vport,
 			 struct idpf_q_vec_rsrc *rsrc);
 void idpf_vport_intr_ena(struct idpf_vport *vport,
 			 struct idpf_q_vec_rsrc *rsrc);
-int idpf_config_rss(struct idpf_vport *vport);
-int idpf_init_rss(struct idpf_vport *vport);
-void idpf_deinit_rss(struct idpf_vport *vport);
+int idpf_config_rss(struct idpf_vport *vport, struct idpf_rss_data *rss_data);
+int idpf_init_rss(struct idpf_vport *vport, struct idpf_rss_data *rss_data);
+void idpf_deinit_rss(struct idpf_rss_data *rss_data);
 int idpf_rx_bufs_init_all(struct idpf_q_vec_rsrc *rsrc);
 void idpf_rx_add_frag(struct idpf_rx_buf *rx_buf, struct sk_buff *skb,
 		      unsigned int size);
diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
index f46471da833b..107b6fd6ea35 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
@@ -2272,24 +2272,24 @@ int idpf_send_get_stats_msg(struct idpf_vport *vport)
 }

 /**
- * idpf_send_get_set_rss_lut_msg - Send virtchnl get or set rss lut message
+ * idpf_send_get_set_rss_lut_msg - Send virtchnl get or set RSS lut message
  * @vport: virtual port data structure
- * @get: flag to set or get rss look up table
+ * @rss_data: pointer to RSS key and lut info
+ * @get: flag to set or get RSS look up table
  *
  * Returns 0 on success, negative on failure.
 */
-int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport, bool get)
+int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport,
+				  struct idpf_rss_data *rss_data,
+				  bool get)
 {
 	struct virtchnl2_rss_lut *recv_rl __free(kfree) = NULL;
 	struct virtchnl2_rss_lut *rl __free(kfree) = NULL;
 	struct idpf_vc_xn_params xn_params = {};
-	struct idpf_rss_data *rss_data;
 	int buf_size, lut_buf_size;
 	ssize_t reply_sz;
 	int i;

-	rss_data =
-		&vport->adapter->vport_config[vport->idx]->user_config.rss_data;
 	buf_size = struct_size(rl, lut, rss_data->rss_lut_size);
 	rl = kzalloc(buf_size, GFP_KERNEL);
 	if (!rl)
@@ -2347,24 +2347,24 @@ int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport, bool get)
 }

 /**
- * idpf_send_get_set_rss_key_msg - Send virtchnl get or set rss key message
+ * idpf_send_get_set_rss_key_msg - Send virtchnl get or set RSS key message
  * @vport: virtual port data structure
- * @get: flag to set or get rss look up table
+ * @rss_data: pointer to RSS key and lut info
+ * @get: flag to set or get RSS look up table
  *
  * Returns 0 on success, negative on failure
  */
-int idpf_send_get_set_rss_key_msg(struct idpf_vport *vport, bool get)
+int idpf_send_get_set_rss_key_msg(struct idpf_vport *vport,
+				  struct idpf_rss_data *rss_data,
+				  bool get)
 {
 	struct virtchnl2_rss_key *recv_rk __free(kfree) = NULL;
 	struct virtchnl2_rss_key *rk __free(kfree) = NULL;
 	struct idpf_vc_xn_params xn_params = {};
-	struct idpf_rss_data *rss_data;
 	ssize_t reply_sz;
 	int i, buf_size;
 	u16 key_size;

-	rss_data =
-		&vport->adapter->vport_config[vport->idx]->user_config.rss_data;
 	buf_size = struct_size(rk, key_flex, rss_data->rss_key_size);
 	rk = kzalloc(buf_size, GFP_KERNEL);
 	if (!rk)
diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
index ef64ca98b1e1..cfeefbc5174f 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
@@ -167,7 +167,11 @@ int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport);
 int idpf_send_ena_dis_loopback_msg(struct idpf_vport *vport);
 int idpf_send_get_stats_msg(struct idpf_vport *vport);
 int idpf_send_set_sriov_vfs_msg(struct idpf_adapter *adapter, u16 num_vfs);
-int idpf_send_get_set_rss_key_msg(struct idpf_vport *vport, bool get);
-int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport, bool get);
+int idpf_send_get_set_rss_key_msg(struct idpf_vport *vport,
+				  struct idpf_rss_data *rss_data,
+				  bool get);
+int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport,
+				  struct idpf_rss_data *rss_data,
+				  bool get);

 #endif /* _IDPF_VIRTCHNL_H_ */
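A small worked example of what idpf_fill_dflt_rss_lut() in the hunk above actually produces (a standalone sketch, not part of the patch; LUT_SIZE and NUM_RXQ are hypothetical, where the driver takes the real values from rss_data->rss_lut_size and the negotiated RX queue count):

#include <stdio.h>

#define LUT_SIZE 16	/* stand-in for rss_data->rss_lut_size */
#define NUM_RXQ   4	/* stand-in for the active RX queue count */

int main(void)
{
	unsigned int lut[LUT_SIZE];

	/* Same round-robin fill as the driver's default: hash
	 * bucket i steers packets to RX queue (i % NUM_RXQ), so
	 * flows spread evenly across all active queues.
	 */
	for (int i = 0; i < LUT_SIZE; i++)
		lut[i] = i % NUM_RXQ;

	for (int i = 0; i < LUT_SIZE; i++)
		printf("bucket %2d -> rxq %u\n", i, lut[i]);

	return 0;
}

This prints queues 0 1 2 3 0 1 2 3 ... across the 16 buckets, which is the default indirection table the patch now builds directly into the caller-supplied rss_data instead of re-deriving the rss_data pointer inside each helper.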
From patchwork Thu May 8 21:50:11 2025
X-Patchwork-Submitter: Pavan Kumar Linga
X-Patchwork-Id: 2083153
X-Patchwork-Delegate: anthony.l.nguyen@intel.com
From: Pavan Kumar Linga
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, milena.olech@intel.com, anton.nadezhdin@intel.com, Pavan Kumar Linga, Madhu Chittim
Date: Thu, 8 May 2025 14:50:11 -0700
Message-ID: <20250508215013.32668-8-pavan.kumar.linga@intel.com>
In-Reply-To: <20250508215013.32668-1-pavan.kumar.linga@intel.com>
References: <20250508215013.32668-1-pavan.kumar.linga@intel.com>
Subject: [Intel-wired-lan] [PATCH iwl-next v4 7/9] idpf: generalize send virtchnl message API

With the previous refactor passing the idpf resource pointer around, the virtchnl send message functions no longer require the full vport structure. Generalize those functions so that they can also be used to configure vport-independent queues.

Signed-off-by: Anton Nadezhdin
Reviewed-by: Madhu Chittim
Signed-off-by: Pavan Kumar Linga
Tested-by: Samuel Salin
---
 drivers/net/ethernet/intel/idpf/idpf_dev.c    |   2 +-
 drivers/net/ethernet/intel/idpf/idpf_lib.c    |  93 ++++----
 drivers/net/ethernet/intel/idpf/idpf_txrx.c   |   6 +-
 drivers/net/ethernet/intel/idpf/idpf_vf_dev.c |   2 +-
 .../net/ethernet/intel/idpf/idpf_virtchnl.c   | 217 ++++++++++--------
 .../net/ethernet/intel/idpf/idpf_virtchnl.h   |  47 ++--
 6 files changed, 199 insertions(+), 168 deletions(-)
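To make the intent concrete before the diff: a hedged sketch of the kind of vport-independent caller this refactor enables. The wrapper function below is hypothetical and not part of the patch; only the idpf_send_*() signatures match the ones introduced here.

/* Hypothetical caller: configure and map the queues described by
 * @rsrc without going through a struct idpf_vport. After this
 * patch, only the adapter, the resource set, and the owning
 * vport_id are needed.
 */
static int example_cfg_queues(struct idpf_adapter *adapter,
			      struct idpf_q_vec_rsrc *rsrc,
			      u32 vport_id)
{
	int err;

	/* RSC (HW GRO) is left off here, since an independent queue
	 * set has no netdev feature flag to consult.
	 */
	err = idpf_send_config_queues_msg(adapter, rsrc, vport_id, false);
	if (err)
		return err;

	/* map = true attaches the queues to their interrupt vectors */
	return idpf_send_map_unmap_queue_vector_msg(adapter, rsrc,
						    vport_id, true);
}

This mirrors the order used by idpf_vport_open() in the diff below: queues are configured first, then mapped to vectors.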
diff --git a/drivers/net/ethernet/intel/idpf/idpf_dev.c b/drivers/net/ethernet/intel/idpf/idpf_dev.c
index 3d358030b809..6d5c9098f577 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_dev.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_dev.c
@@ -85,7 +85,7 @@ static int idpf_intr_reg_init(struct idpf_vport *vport,
 	if (!reg_vals)
 		return -ENOMEM;

-	num_regs = idpf_get_reg_intr_vecs(vport, reg_vals);
+	num_regs = idpf_get_reg_intr_vecs(adapter, reg_vals);
 	if (num_regs < num_vecs) {
 		err = -EINVAL;
 		goto free_reg_vals;
diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
index 153174d3d51d..6f295abdc6c1 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
@@ -440,7 +440,6 @@ static int __idpf_del_mac_filter(struct idpf_vport_config *vport_config,
 /**
  * idpf_del_mac_filter - Delete a MAC filter from the filter list
- * @vport: Main vport structure
  * @np: Netdev private structure
  * @macaddr: The MAC address
  * @async: Don't wait for return message
@@ -448,8 +447,7 @@ static int __idpf_del_mac_filter(struct idpf_vport_config *vport_config,
  * Removes filter from list and if interface is up, tells hardware about the
  * removed filter.
 **/
-static int idpf_del_mac_filter(struct idpf_vport *vport,
-			       struct idpf_netdev_priv *np,
+static int idpf_del_mac_filter(struct idpf_netdev_priv *np,
 			       const u8 *macaddr, bool async)
 {
 	struct idpf_vport_config *vport_config;
@@ -471,7 +469,8 @@ static int idpf_del_mac_filter(struct idpf_vport *vport,
 	if (np->state == __IDPF_VPORT_UP) {
 		int err;

-		err = idpf_add_del_mac_filters(vport, np, false, async);
+		err = idpf_add_del_mac_filters(np->adapter, vport_config,
+					       np->vport_id, false, async);
 		if (err)
 			return err;
 	}
@@ -519,7 +518,6 @@ static int __idpf_add_mac_filter(struct idpf_vport_config *vport_config,
 /**
  * idpf_add_mac_filter - Add a mac filter to the filter list
- * @vport: Main vport structure
  * @np: Netdev private structure
  * @macaddr: The MAC address
  * @async: Don't wait for return message
@@ -527,8 +525,7 @@ static int __idpf_add_mac_filter(struct idpf_vport_config *vport_config,
  * Returns 0 on success or error on failure. If interface is up, we'll also
  * send the virtchnl message to tell hardware about the filter.
 **/
-static int idpf_add_mac_filter(struct idpf_vport *vport,
-			       struct idpf_netdev_priv *np,
+static int idpf_add_mac_filter(struct idpf_netdev_priv *np,
 			       const u8 *macaddr, bool async)
 {
 	struct idpf_vport_config *vport_config;
@@ -540,7 +537,8 @@ static int idpf_add_mac_filter(struct idpf_vport *vport,
 		return err;

 	if (np->state == __IDPF_VPORT_UP)
-		err = idpf_add_del_mac_filters(vport, np, true, async);
+		err = idpf_add_del_mac_filters(np->adapter, vport_config,
+					       np->vport_id, true, async);

 	return err;
 }
@@ -588,7 +586,7 @@ static void idpf_restore_mac_filters(struct idpf_vport *vport)
 	spin_unlock_bh(&vport_config->mac_filter_list_lock);

-	idpf_add_del_mac_filters(vport, netdev_priv(vport->netdev),
+	idpf_add_del_mac_filters(vport->adapter, vport_config, vport->vport_id,
 				 true, false);
 }
@@ -612,7 +610,7 @@ static void idpf_remove_mac_filters(struct idpf_vport *vport)
 	spin_unlock_bh(&vport_config->mac_filter_list_lock);

-	idpf_add_del_mac_filters(vport, netdev_priv(vport->netdev),
+	idpf_add_del_mac_filters(vport->adapter, vport_config, vport->vport_id,
 				 false, false);
 }
@@ -654,8 +652,7 @@ static int idpf_init_mac_addr(struct idpf_vport *vport,
 		eth_hw_addr_set(netdev, vport->default_mac_addr);
 		ether_addr_copy(netdev->perm_addr, vport->default_mac_addr);

-		return idpf_add_mac_filter(vport, np, vport->default_mac_addr,
-					   false);
+		return idpf_add_mac_filter(np, vport->default_mac_addr, false);
 	}

 	if (!idpf_is_cap_ena(adapter, IDPF_OTHER_CAPS,
@@ -667,7 +664,7 @@ static int idpf_init_mac_addr(struct idpf_vport *vport,
 	}

 	eth_hw_addr_random(netdev);
-	err = idpf_add_mac_filter(vport, np, netdev->dev_addr, false);
+	err = idpf_add_mac_filter(np, netdev->dev_addr, false);
 	if (err)
 		return err;
@@ -839,7 +836,9 @@ static void idpf_vport_stop(struct idpf_vport *vport)
 {
 	struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
 	struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc;
+	struct idpf_adapter *adapter = vport->adapter;
 	struct idpf_queue_id_reg_info *chunks;
+	u32 vport_id = vport->vport_id;

 	if (np->state <= __IDPF_VPORT_DOWN)
 		return;
@@ -847,18 +846,18 @@ static void idpf_vport_stop(struct idpf_vport *vport)
 	netif_carrier_off(vport->netdev);
 	netif_tx_disable(vport->netdev);

-	chunks = &vport->adapter->vport_config[vport->idx]->qid_reg_info;
+	chunks = &adapter->vport_config[vport->idx]->qid_reg_info;

-	idpf_send_disable_vport_msg(vport);
+	idpf_send_disable_vport_msg(adapter, vport_id);
 	idpf_send_disable_queues_msg(vport, rsrc, chunks);
-
idpf_send_map_unmap_queue_vector_msg(vport, rsrc, false); + idpf_send_map_unmap_queue_vector_msg(adapter, rsrc, vport_id, false); /* Normally we ask for queues in create_vport, but if the number of * initially requested queues have changed, for example via ethtool * set channels, we do delete queues and then add the queues back * instead of deleting and reallocating the vport. */ if (test_and_clear_bit(IDPF_VPORT_DEL_QUEUES, vport->flags)) - idpf_send_delete_queues_msg(vport, chunks); + idpf_send_delete_queues_msg(adapter, chunks, vport_id); idpf_remove_features(vport); @@ -939,7 +938,7 @@ static void idpf_vport_rel(struct idpf_vport *vport) kfree(rss_data->rss_key); rss_data->rss_key = NULL; - idpf_send_destroy_vport_msg(vport); + idpf_send_destroy_vport_msg(adapter, vport->vport_id); /* Release all max queues allocated to the adapter's pool */ max_q.max_rxq = vport_config->max_q.max_rxq; @@ -1182,7 +1181,8 @@ void idpf_statistics_task(struct work_struct *work) struct idpf_vport *vport = adapter->vports[i]; if (vport && !test_bit(IDPF_HR_RESET_IN_PROG, adapter->flags)) - idpf_send_get_stats_msg(vport); + idpf_send_get_stats_msg(netdev_priv(vport->netdev), + &vport->port_stats); } queue_delayed_work(adapter->stats_wq, &adapter->stats_task, @@ -1323,6 +1323,8 @@ static int idpf_vport_open(struct idpf_vport *vport) struct idpf_vport_config *vport_config; struct idpf_queue_id_reg_info *chunks; struct idpf_rss_data *rss_data; + u32 vport_id = vport->vport_id; + bool rsc_ena; int err; if (np->state != __IDPF_VPORT_DOWN) @@ -1376,14 +1378,16 @@ static int idpf_vport_open(struct idpf_vport *vport) idpf_rx_init_buf_tail(rsrc); idpf_vport_intr_ena(vport, rsrc); - err = idpf_send_config_queues_msg(vport, rsrc); + rsc_ena = idpf_is_feature_ena(vport, NETIF_F_GRO_HW); + err = idpf_send_config_queues_msg(adapter, rsrc, vport_id, rsc_ena); if (err) { dev_err(&adapter->pdev->dev, "Failed to configure queues for vport %u, %d\n", vport->vport_id, err); goto intr_deinit; } - err = idpf_send_map_unmap_queue_vector_msg(vport, rsrc, true); + err = idpf_send_map_unmap_queue_vector_msg(adapter, rsrc, vport_id, + true); if (err) { dev_err(&adapter->pdev->dev, "Failed to map queue vectors for vport %u: %d\n", vport->vport_id, err); @@ -1397,7 +1401,7 @@ static int idpf_vport_open(struct idpf_vport *vport) goto unmap_queue_vectors; } - err = idpf_send_enable_vport_msg(vport); + err = idpf_send_enable_vport_msg(adapter, vport_id); if (err) { dev_err(&adapter->pdev->dev, "Failed to enable vport %u: %d\n", vport->vport_id, err); @@ -1430,11 +1434,11 @@ static int idpf_vport_open(struct idpf_vport *vport) deinit_rss: idpf_deinit_rss(rss_data); disable_vport: - idpf_send_disable_vport_msg(vport); + idpf_send_disable_vport_msg(adapter, vport_id); disable_queues: idpf_send_disable_queues_msg(vport, rsrc, chunks); unmap_queue_vectors: - idpf_send_map_unmap_queue_vector_msg(vport, rsrc, false); + idpf_send_map_unmap_queue_vector_msg(adapter, rsrc, vport_id, false); intr_deinit: idpf_vport_intr_deinit(vport, rsrc); queues_rel: @@ -1848,6 +1852,7 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport, struct idpf_adapter *adapter = vport->adapter; struct idpf_vport_config *vport_config; struct idpf_q_vec_rsrc *new_rsrc; + u32 vport_id = vport->vport_id; struct idpf_vport *new_vport; int err; @@ -1899,28 +1904,21 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport, vport_config = adapter->vport_config[vport->idx]; if (current_state <= __IDPF_VPORT_DOWN) { - idpf_send_delete_queues_msg(vport, 
&vport_config->qid_reg_info); + idpf_send_delete_queues_msg(adapter, &vport_config->qid_reg_info, + vport_id); } else { set_bit(IDPF_VPORT_DEL_QUEUES, vport->flags); idpf_vport_stop(vport); } idpf_deinit_rss(&vport_config->user_config.rss_data); - /* We're passing in vport here because we need its wait_queue - * to send a message and it should be getting all the vport - * config data out of the adapter but we need to be careful not - * to add code to add_queues to change the vport config within - * vport itself as it will be wiped with a memcpy later. - */ - err = idpf_send_add_queues_msg(vport, new_rsrc->num_txq, - new_rsrc->num_complq, - new_rsrc->num_rxq, - new_rsrc->num_bufq); + err = idpf_send_add_queues_msg(adapter, vport_config, new_rsrc, + vport_id); if (err) goto err_reset; - /* Same comment as above regarding avoiding copying the wait_queues and - * mutexes applies here. We do not want to mess with those if possible. + /* Avoid copying the wait_queues and mutexes. We do not want to mess + * with those if possible. */ memcpy(vport, new_vport, offsetof(struct idpf_vport, link_up)); @@ -1939,8 +1937,7 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport, return err; err_reset: - idpf_send_add_queues_msg(vport, rsrc->num_txq, rsrc->num_complq, - rsrc->num_rxq, rsrc->num_bufq); + idpf_send_add_queues_msg(adapter, vport_config, rsrc, vport_id); err_open: if (current_state == __IDPF_VPORT_UP) @@ -1969,7 +1966,7 @@ static int idpf_addr_sync(struct net_device *netdev, const u8 *addr) { struct idpf_netdev_priv *np = netdev_priv(netdev); - return idpf_add_mac_filter(np->vport, np, addr, true); + return idpf_add_mac_filter(np, addr, true); } /** @@ -1997,7 +1994,7 @@ static int idpf_addr_unsync(struct net_device *netdev, const u8 *addr) if (ether_addr_equal(addr, netdev->dev_addr)) return 0; - idpf_del_mac_filter(np->vport, np, addr, true); + idpf_del_mac_filter(np, addr, true); return 0; } @@ -2081,14 +2078,15 @@ static void idpf_set_rx_mode(struct net_device *netdev) */ static int idpf_vport_manage_rss_lut(struct idpf_vport *vport) { - bool ena = idpf_is_feature_ena(vport, NETIF_F_RXHASH); struct idpf_rss_data *rss_data; u16 idx = vport->idx; int lut_size; + bool ena; rss_data = &vport->adapter->vport_config[idx]->user_config.rss_data; lut_size = rss_data->rss_lut_size * sizeof(u32); + ena = idpf_is_feature_ena(vport, NETIF_F_RXHASH); if (ena) { /* This will contain the default or user configured LUT */ memcpy(rss_data->rss_lut, rss_data->cached_lut, lut_size); @@ -2144,8 +2142,13 @@ static int idpf_set_features(struct net_device *netdev, } if (changed & NETIF_F_LOOPBACK) { + bool loopback_ena; + netdev->features ^= NETIF_F_LOOPBACK; - err = idpf_send_ena_dis_loopback_msg(vport); + loopback_ena = idpf_is_feature_ena(vport, NETIF_F_LOOPBACK); + + err = idpf_send_ena_dis_loopback_msg(adapter, vport->vport_id, + loopback_ena); } unlock_mutex: @@ -2307,14 +2310,14 @@ static int idpf_set_mac(struct net_device *netdev, void *p) goto unlock_mutex; vport_config = vport->adapter->vport_config[vport->idx]; - err = idpf_add_mac_filter(vport, np, addr->sa_data, false); + err = idpf_add_mac_filter(np, addr->sa_data, false); if (err) { __idpf_del_mac_filter(vport_config, addr->sa_data); goto unlock_mutex; } if (is_valid_ether_addr(vport->default_mac_addr)) - idpf_del_mac_filter(vport, np, vport->default_mac_addr, false); + idpf_del_mac_filter(np, vport->default_mac_addr, false); ether_addr_copy(vport->default_mac_addr, addr->sa_data); eth_hw_addr_set(netdev, addr->sa_data); diff --git 
a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c index addaab100862..f6c263c82e97 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -4483,13 +4483,15 @@ void idpf_vport_intr_ena(struct idpf_vport *vport, struct idpf_q_vec_rsrc *rsrc) */ int idpf_config_rss(struct idpf_vport *vport, struct idpf_rss_data *rss_data) { + struct idpf_adapter *adapter = vport->adapter; + u32 vport_id = vport->vport_id; int err; - err = idpf_send_get_set_rss_key_msg(vport, rss_data, false); + err = idpf_send_get_set_rss_key_msg(adapter, rss_data, vport_id, false); if (err) return err; - return idpf_send_get_set_rss_lut_msg(vport, rss_data, false); + return idpf_send_get_set_rss_lut_msg(adapter, rss_data, vport_id, false); } /** diff --git a/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c b/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c index 61d6f774e2f6..0bb07bcb974b 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c +++ b/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c @@ -84,7 +84,7 @@ static int idpf_vf_intr_reg_init(struct idpf_vport *vport, if (!reg_vals) return -ENOMEM; - num_regs = idpf_get_reg_intr_vecs(vport, reg_vals); + num_regs = idpf_get_reg_intr_vecs(adapter, reg_vals); if (num_regs < num_vecs) { err = -EINVAL; goto free_reg_vals; diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c index 107b6fd6ea35..ebf60ab7b2df 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c @@ -1063,12 +1063,12 @@ idpf_vport_init_queue_reg_chunks(struct idpf_vport_config *vport_config, /** * idpf_get_reg_intr_vecs - Get vector queue register offset - * @vport: virtual port structure + * @adapter: adapter structure to get the vector chunks * @reg_vals: Register offsets to store in * * Returns number of registers that got populated */ -int idpf_get_reg_intr_vecs(struct idpf_vport *vport, +int idpf_get_reg_intr_vecs(struct idpf_adapter *adapter, struct idpf_vec_regs *reg_vals) { struct virtchnl2_vector_chunks *chunks; @@ -1076,7 +1076,7 @@ int idpf_get_reg_intr_vecs(struct idpf_vport *vport, u16 num_vchunks, num_vec; int num_regs = 0, i, j; - chunks = &vport->adapter->req_vec_chunks->vchunks; + chunks = &adapter->req_vec_chunks->vchunks; num_vchunks = le16_to_cpu(chunks->num_vchunks); for (j = 0; j < num_vchunks; j++) { @@ -1412,86 +1412,91 @@ int idpf_check_supported_desc_ids(struct idpf_vport *vport) /** * idpf_send_destroy_vport_msg - Send virtchnl destroy vport message - * @vport: virtual port data structure + * @adapter: adapter pointer used to send virtchnl message + * @vport_id: vport identifier used while preparing the virtchnl message * * Send virtchnl destroy vport message. Returns 0 on success, negative on * failure. */ -int idpf_send_destroy_vport_msg(struct idpf_vport *vport) +int idpf_send_destroy_vport_msg(struct idpf_adapter *adapter, u32 vport_id) { struct idpf_vc_xn_params xn_params = {}; struct virtchnl2_vport v_id; ssize_t reply_sz; - v_id.vport_id = cpu_to_le32(vport->vport_id); + v_id.vport_id = cpu_to_le32(vport_id); xn_params.vc_op = VIRTCHNL2_OP_DESTROY_VPORT; xn_params.send_buf.iov_base = &v_id; xn_params.send_buf.iov_len = sizeof(v_id); xn_params.timeout_ms = IDPF_VC_XN_MIN_TIMEOUT_MSEC; - reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params); + reply_sz = idpf_vc_xn_exec(adapter, &xn_params); return reply_sz < 0 ? 
reply_sz : 0; } /** * idpf_send_enable_vport_msg - Send virtchnl enable vport message - * @vport: virtual port data structure + * @adapter: adapter pointer used to send virtchnl message + * @vport_id: vport identifier used while preparing the virtchnl message * * Send enable vport virtchnl message. Returns 0 on success, negative on * failure. */ -int idpf_send_enable_vport_msg(struct idpf_vport *vport) +int idpf_send_enable_vport_msg(struct idpf_adapter *adapter, u32 vport_id) { struct idpf_vc_xn_params xn_params = {}; struct virtchnl2_vport v_id; ssize_t reply_sz; - v_id.vport_id = cpu_to_le32(vport->vport_id); + v_id.vport_id = cpu_to_le32(vport_id); xn_params.vc_op = VIRTCHNL2_OP_ENABLE_VPORT; xn_params.send_buf.iov_base = &v_id; xn_params.send_buf.iov_len = sizeof(v_id); xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC; - reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params); + reply_sz = idpf_vc_xn_exec(adapter, &xn_params); return reply_sz < 0 ? reply_sz : 0; } /** * idpf_send_disable_vport_msg - Send virtchnl disable vport message - * @vport: virtual port data structure + * @adapter: adapter pointer used to send virtchnl message + * @vport_id: vport identifier used while preparing the virtchnl message * * Send disable vport virtchnl message. Returns 0 on success, negative on * failure. */ -int idpf_send_disable_vport_msg(struct idpf_vport *vport) +int idpf_send_disable_vport_msg(struct idpf_adapter *adapter, u32 vport_id) { struct idpf_vc_xn_params xn_params = {}; struct virtchnl2_vport v_id; ssize_t reply_sz; - v_id.vport_id = cpu_to_le32(vport->vport_id); + v_id.vport_id = cpu_to_le32(vport_id); xn_params.vc_op = VIRTCHNL2_OP_DISABLE_VPORT; xn_params.send_buf.iov_base = &v_id; xn_params.send_buf.iov_len = sizeof(v_id); xn_params.timeout_ms = IDPF_VC_XN_MIN_TIMEOUT_MSEC; - reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params); + reply_sz = idpf_vc_xn_exec(adapter, &xn_params); return reply_sz < 0 ? reply_sz : 0; } /** * idpf_send_config_tx_queues_msg - Send virtchnl config tx queues message - * @vport: virtual port data structure + * @adapter: adapter pointer used to send virtchnl message * @rsrc: pointer to queue and vector resources + * @vport_id: vport identifier used while preparing the virtchnl message * * Send config tx queues virtchnl message. Returns 0 on success, negative on * failure. 
*/ -static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport, - struct idpf_q_vec_rsrc *rsrc) +static int idpf_send_config_tx_queues_msg(struct idpf_adapter *adapter, + struct idpf_q_vec_rsrc *rsrc, + u32 vport_id) { struct virtchnl2_config_tx_queues *ctq __free(kfree) = NULL; struct virtchnl2_txq_info *qi __free(kfree) = NULL; @@ -1583,13 +1588,13 @@ static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport, for (u16 i = 0, k = 0; i < num_msgs; i++) { memset(ctq, 0, buf_sz); - ctq->vport_id = cpu_to_le32(vport->vport_id); + ctq->vport_id = cpu_to_le32(vport_id); ctq->num_qinfo = cpu_to_le16(num_chunks); memcpy(ctq->qinfo, &qi[k], chunk_sz * num_chunks); xn_params.send_buf.iov_base = ctq; xn_params.send_buf.iov_len = buf_sz; - reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params); + reply_sz = idpf_vc_xn_exec(adapter, &xn_params); if (reply_sz < 0) return reply_sz; @@ -1605,14 +1610,17 @@ static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport, /** * idpf_send_config_rx_queues_msg - Send virtchnl config rx queues message - * @vport: virtual port data structure + * @adapter: adapter pointer used to send virtchnl message * @rsrc: pointer to queue and vector resources + * @vport_id: vport identifier used while preparing the virtchnl message + * @rsc_ena: flag to check if RSC feature is enabled * * Send config rx queues virtchnl message. Returns 0 on success, negative on * failure. */ -static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport, - struct idpf_q_vec_rsrc *rsrc) +static int idpf_send_config_rx_queues_msg(struct idpf_adapter *adapter, + struct idpf_q_vec_rsrc *rsrc, + u32 vport_id, bool rsc_ena) { struct virtchnl2_config_rx_queues *crq __free(kfree) = NULL; struct virtchnl2_rxq_info *qi __free(kfree) = NULL; @@ -1650,7 +1658,7 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport, qi[k].buffer_notif_stride = IDPF_RX_BUF_STRIDE; qi[k].rx_buffer_low_watermark = cpu_to_le16(bufq->rx_buffer_low_watermark); - if (idpf_is_feature_ena(vport, NETIF_F_GRO_HW)) + if (rsc_ena) qi[k].qflags |= cpu_to_le16(VIRTCHNL2_RXQ_RSC); } @@ -1686,7 +1694,7 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport, } qi[k].rx_buffer_low_watermark = cpu_to_le16(rxq->rx_buffer_low_watermark); - if (idpf_is_feature_ena(vport, NETIF_F_GRO_HW)) + if (rsc_ena) qi[k].qflags |= cpu_to_le16(VIRTCHNL2_RXQ_RSC); rxq->rx_hbuf_size = sets[0].bufq.rx_hbuf_size; @@ -1736,13 +1744,13 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport, for (u16 i = 0, k = 0; i < num_msgs; i++) { memset(crq, 0, buf_sz); - crq->vport_id = cpu_to_le32(vport->vport_id); + crq->vport_id = cpu_to_le32(vport_id); crq->num_qinfo = cpu_to_le16(num_chunks); memcpy(crq->qinfo, &qi[k], chunk_sz * num_chunks); xn_params.send_buf.iov_base = crq; xn_params.send_buf.iov_len = buf_sz; - reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params); + reply_sz = idpf_vc_xn_exec(adapter, &xn_params); if (reply_sz < 0) return reply_sz; @@ -1759,15 +1767,17 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport, /** * idpf_send_ena_dis_queues_msg - Send virtchnl enable or disable * queues message - * @vport: virtual port data structure + * @adapter: adapter pointer used to send virtchnl message * @chunks: queue register info + * @vport_id: vport identifier used while preparing the virtchnl message * @ena: if true enable, false disable * * Send enable or disable queues virtchnl message. Returns 0 on success, * negative on failure. 
*/ -static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, +static int idpf_send_ena_dis_queues_msg(struct idpf_adapter *adapter, struct idpf_queue_id_reg_info *chunks, + u32 vport_id, bool ena) { struct virtchnl2_del_ena_dis_queues *eq __free(kfree) = NULL; @@ -1789,7 +1799,7 @@ static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, if (!eq) return -ENOMEM; - eq->vport_id = cpu_to_le32(vport->vport_id); + eq->vport_id = cpu_to_le32(vport_id); eq->chunks.num_chunks = cpu_to_le16(num_chunks); idpf_convert_reg_to_queue_chunks(eq->chunks.chunks, chunks->queue_chunks, @@ -1797,7 +1807,7 @@ static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, xn_params.send_buf.iov_base = eq; xn_params.send_buf.iov_len = buf_sz; - reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params); + reply_sz = idpf_vc_xn_exec(adapter, &xn_params); return reply_sz < 0 ? reply_sz : 0; } @@ -1805,15 +1815,17 @@ static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, /** * idpf_send_map_unmap_queue_vector_msg - Send virtchnl map or unmap queue * vector message - * @vport: virtual port data structure + * @adapter: adapter pointer used to send virtchnl message * @rsrc: pointer to queue and vector resources + * @vport_id: vport identifier used while preparing the virtchnl message * @map: true for map and false for unmap * * Send map or unmap queue vector virtchnl message. Returns 0 on success, * negative on failure. */ -int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, +int idpf_send_map_unmap_queue_vector_msg(struct idpf_adapter *adapter, struct idpf_q_vec_rsrc *rsrc, + u32 vport_id, bool map) { struct virtchnl2_queue_vector_maps *vqvm __free(kfree) = NULL; @@ -1914,11 +1926,11 @@ int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, memset(vqvm, 0, buf_sz); xn_params.send_buf.iov_base = vqvm; xn_params.send_buf.iov_len = buf_sz; - vqvm->vport_id = cpu_to_le32(vport->vport_id); + vqvm->vport_id = cpu_to_le32(vport_id); vqvm->num_qv_maps = cpu_to_le16(num_chunks); memcpy(vqvm->qv_maps, &vqv[k], chunk_sz * num_chunks); - reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params); + reply_sz = idpf_vc_xn_exec(adapter, &xn_params); if (reply_sz < 0) return reply_sz; @@ -1943,7 +1955,8 @@ int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, int idpf_send_enable_queues_msg(struct idpf_vport *vport, struct idpf_queue_id_reg_info *chunks) { - return idpf_send_ena_dis_queues_msg(vport, chunks, true); + return idpf_send_ena_dis_queues_msg(vport->adapter, chunks, + vport->vport_id, true); } /** @@ -1961,7 +1974,8 @@ int idpf_send_disable_queues_msg(struct idpf_vport *vport, { int err; - err = idpf_send_ena_dis_queues_msg(vport, chunks, false); + err = idpf_send_ena_dis_queues_msg(vport->adapter, chunks, + vport->vport_id, false); if (err) return err; @@ -1982,14 +1996,16 @@ int idpf_send_disable_queues_msg(struct idpf_vport *vport, /** * idpf_send_delete_queues_msg - send delete queues virtchnl message - * @vport: virtual port private data structure + * @adapter: adapter pointer used to send virtchnl message * @chunks: queue ids received over mailbox + * @vport_id: vport identifier used while preparing the virtchnl message * * Will send delete queues virtchnl message. Return 0 on success, negative on * failure. 
*/ -int idpf_send_delete_queues_msg(struct idpf_vport *vport, - struct idpf_queue_id_reg_info *chunks) +int idpf_send_delete_queues_msg(struct idpf_adapter *adapter, + struct idpf_queue_id_reg_info *chunks, + u32 vport_id) { struct virtchnl2_del_ena_dis_queues *eq __free(kfree) = NULL; struct idpf_vc_xn_params xn_params = {}; @@ -2004,7 +2020,7 @@ int idpf_send_delete_queues_msg(struct idpf_vport *vport, if (!eq) return -ENOMEM; - eq->vport_id = cpu_to_le32(vport->vport_id); + eq->vport_id = cpu_to_le32(vport_id); eq->chunks.num_chunks = cpu_to_le16(num_chunks); idpf_convert_reg_to_queue_chunks(eq->chunks.chunks, chunks->queue_chunks, @@ -2014,50 +2030,52 @@ int idpf_send_delete_queues_msg(struct idpf_vport *vport, xn_params.timeout_ms = IDPF_VC_XN_MIN_TIMEOUT_MSEC; xn_params.send_buf.iov_base = eq; xn_params.send_buf.iov_len = buf_size; - reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params); + reply_sz = idpf_vc_xn_exec(adapter, &xn_params); return reply_sz < 0 ? reply_sz : 0; } /** * idpf_send_config_queues_msg - Send config queues virtchnl message - * @vport: Virtual port private data structure + * @adapter: adapter pointer used to send virtchnl message * @rsrc: pointer to queue and vector resources + * @vport_id: vport identifier used while preparing the virtchnl message + * @rsc_ena: flag to check if RSC feature is enabled * * Will send config queues virtchnl message. Returns 0 on success, negative on * failure. */ -int idpf_send_config_queues_msg(struct idpf_vport *vport, - struct idpf_q_vec_rsrc *rsrc) +int idpf_send_config_queues_msg(struct idpf_adapter *adapter, + struct idpf_q_vec_rsrc *rsrc, + u32 vport_id, bool rsc_ena) { int err; - err = idpf_send_config_tx_queues_msg(vport, rsrc); + err = idpf_send_config_tx_queues_msg(adapter, rsrc, vport_id); if (err) return err; - return idpf_send_config_rx_queues_msg(vport, rsrc); + return idpf_send_config_rx_queues_msg(adapter, rsrc, vport_id, rsc_ena); } /** * idpf_send_add_queues_msg - Send virtchnl add queues message - * @vport: Virtual port private data structure - * @num_tx_q: number of transmit queues - * @num_complq: number of transmit completion queues - * @num_rx_q: number of receive queues - * @num_rx_bufq: number of receive buffer queues + * @adapter: adapter pointer used to send virtchnl message + * @vport_config: vport persistent structure to store the queue chunk info + * @rsrc: pointer to queue and vector resources + * @vport_id: vport identifier used while preparing the virtchnl message * * Returns 0 on success, negative on failure. vport _MUST_ be const here as * we should not change any fields within vport itself in this function. 
*/ -int idpf_send_add_queues_msg(const struct idpf_vport *vport, u16 num_tx_q, - u16 num_complq, u16 num_rx_q, u16 num_rx_bufq) +int idpf_send_add_queues_msg(struct idpf_adapter *adapter, + struct idpf_vport_config *vport_config, + struct idpf_q_vec_rsrc *rsrc, + u32 vport_id) { struct virtchnl2_add_queues *vc_msg __free(kfree) = NULL; struct idpf_vc_xn_params xn_params = {}; - struct idpf_vport_config *vport_config; struct virtchnl2_add_queues aq = {}; - u16 vport_idx = vport->idx; ssize_t reply_sz; int size; @@ -2065,13 +2083,11 @@ int idpf_send_add_queues_msg(const struct idpf_vport *vport, u16 num_tx_q, if (!vc_msg) return -ENOMEM; - vport_config = vport->adapter->vport_config[vport_idx]; - - aq.vport_id = cpu_to_le32(vport->vport_id); - aq.num_tx_q = cpu_to_le16(num_tx_q); - aq.num_tx_complq = cpu_to_le16(num_complq); - aq.num_rx_q = cpu_to_le16(num_rx_q); - aq.num_rx_bufq = cpu_to_le16(num_rx_bufq); + aq.vport_id = cpu_to_le32(vport_id); + aq.num_tx_q = cpu_to_le16(rsrc->num_txq); + aq.num_tx_complq = cpu_to_le16(rsrc->num_complq); + aq.num_rx_q = cpu_to_le16(rsrc->num_rxq); + aq.num_rx_bufq = cpu_to_le16(rsrc->num_bufq); xn_params.vc_op = VIRTCHNL2_OP_ADD_QUEUES; xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC; @@ -2079,15 +2095,15 @@ int idpf_send_add_queues_msg(const struct idpf_vport *vport, u16 num_tx_q, xn_params.send_buf.iov_len = sizeof(aq); xn_params.recv_buf.iov_base = vc_msg; xn_params.recv_buf.iov_len = IDPF_CTLQ_MAX_BUF_LEN; - reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params); + reply_sz = idpf_vc_xn_exec(adapter, &xn_params); if (reply_sz < 0) return reply_sz; /* compare vc_msg num queues with vport num queues */ - if (le16_to_cpu(vc_msg->num_tx_q) != num_tx_q || - le16_to_cpu(vc_msg->num_rx_q) != num_rx_q || - le16_to_cpu(vc_msg->num_tx_complq) != num_complq || - le16_to_cpu(vc_msg->num_rx_bufq) != num_rx_bufq) + if (le16_to_cpu(vc_msg->num_tx_q) != rsrc->num_txq || + le16_to_cpu(vc_msg->num_rx_q) != rsrc->num_rxq || + le16_to_cpu(vc_msg->num_tx_complq) != rsrc->num_complq || + le16_to_cpu(vc_msg->num_rx_bufq) != rsrc->num_bufq) return -EINVAL; size = struct_size(vc_msg, chunks.chunks, @@ -2218,24 +2234,24 @@ int idpf_send_set_sriov_vfs_msg(struct idpf_adapter *adapter, u16 num_vfs) /** * idpf_send_get_stats_msg - Send virtchnl get statistics message - * @vport: vport to get stats for + * @np: netdev private structure + * @port_stats: structure to store the vport statistics * * Returns 0 on success, negative on failure. 
*/ -int idpf_send_get_stats_msg(struct idpf_vport *vport) +int idpf_send_get_stats_msg(struct idpf_netdev_priv *np, + struct idpf_port_stats *port_stats) { - struct idpf_netdev_priv *np = netdev_priv(vport->netdev); struct rtnl_link_stats64 *netstats = &np->netstats; struct virtchnl2_vport_stats stats_msg = {}; struct idpf_vc_xn_params xn_params = {}; ssize_t reply_sz; - /* Don't send get_stats message if the link is down */ if (np->state <= __IDPF_VPORT_DOWN) return 0; - stats_msg.vport_id = cpu_to_le32(vport->vport_id); + stats_msg.vport_id = cpu_to_le32(np->vport_id); xn_params.vc_op = VIRTCHNL2_OP_GET_STATS; xn_params.send_buf.iov_base = &stats_msg; @@ -2243,7 +2259,7 @@ int idpf_send_get_stats_msg(struct idpf_vport *vport) xn_params.recv_buf = xn_params.send_buf; xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC; - reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params); + reply_sz = idpf_vc_xn_exec(np->adapter, &xn_params); if (reply_sz < 0) return reply_sz; if (reply_sz < sizeof(stats_msg)) @@ -2264,7 +2280,7 @@ int idpf_send_get_stats_msg(struct idpf_vport *vport) netstats->rx_dropped = le64_to_cpu(stats_msg.rx_discards); netstats->tx_dropped = le64_to_cpu(stats_msg.tx_discards); - vport->port_stats.vport_stats = stats_msg; + port_stats->vport_stats = stats_msg; spin_unlock_bh(&np->stats_lock); @@ -2273,15 +2289,16 @@ int idpf_send_get_stats_msg(struct idpf_vport *vport) /** * idpf_send_get_set_rss_lut_msg - Send virtchnl get or set RSS lut message - * @vport: virtual port data structure + * @adapter: adapter pointer used to send virtchnl message * @rss_data: pointer to RSS key and lut info + * @vport_id: vport identifier used while preparing the virtchnl message * @get: flag to set or get RSS look up table * * Returns 0 on success, negative on failure. 
*/ -int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport, +int idpf_send_get_set_rss_lut_msg(struct idpf_adapter *adapter, struct idpf_rss_data *rss_data, - bool get) + u32 vport_id, bool get) { struct virtchnl2_rss_lut *recv_rl __free(kfree) = NULL; struct virtchnl2_rss_lut *rl __free(kfree) = NULL; @@ -2295,7 +2312,7 @@ int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport, if (!rl) return -ENOMEM; - rl->vport_id = cpu_to_le32(vport->vport_id); + rl->vport_id = cpu_to_le32(vport_id); xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC; xn_params.send_buf.iov_base = rl; @@ -2315,7 +2332,7 @@ int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport, xn_params.vc_op = VIRTCHNL2_OP_SET_RSS_LUT; } - reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params); + reply_sz = idpf_vc_xn_exec(adapter, &xn_params); if (reply_sz < 0) return reply_sz; if (!get) @@ -2348,15 +2365,16 @@ int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport, /** * idpf_send_get_set_rss_key_msg - Send virtchnl get or set RSS key message - * @vport: virtual port data structure + * @adapter: adapter pointer used to send virtchnl message * @rss_data: pointer to RSS key and lut info + * @vport_id: vport identifier used while preparing the virtchnl message * @get: flag to set or get RSS key * * Returns 0 on success, negative on failure */ -int idpf_send_get_set_rss_key_msg(struct idpf_vport *vport, +int idpf_send_get_set_rss_key_msg(struct idpf_adapter *adapter, struct idpf_rss_data *rss_data, - bool get) + u32 vport_id, bool get) { struct virtchnl2_rss_key *recv_rk __free(kfree) = NULL; struct virtchnl2_rss_key *rk __free(kfree) = NULL; @@ -2370,7 +2388,7 @@ int idpf_send_get_set_rss_key_msg(struct idpf_vport *vport, if (!rk) return -ENOMEM; - rk->vport_id = cpu_to_le32(vport->vport_id); + rk->vport_id = cpu_to_le32(vport_id); xn_params.send_buf.iov_base = rk; xn_params.send_buf.iov_len = buf_size; xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC; @@ -2390,7 +2408,7 @@ int idpf_send_get_set_rss_key_msg(struct idpf_vport *vport, xn_params.vc_op = VIRTCHNL2_OP_SET_RSS_KEY; } - reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params); + reply_sz = idpf_vc_xn_exec(adapter, &xn_params); if (reply_sz < 0) return reply_sz; if (!get) @@ -2696,24 +2714,27 @@ int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport) /** * idpf_send_ena_dis_loopback_msg - Send virtchnl enable/disable loopback * message - * @vport: virtual port data structure + * @adapter: adapter pointer used to send virtchnl message + * @vport_id: vport identifier used while preparing the virtchnl message + * @loopback_ena: flag to enable or disable loopback * * Returns 0 on success, negative on failure. */ -int idpf_send_ena_dis_loopback_msg(struct idpf_vport *vport) +int idpf_send_ena_dis_loopback_msg(struct idpf_adapter *adapter, u32 vport_id, + bool loopback_ena) { struct idpf_vc_xn_params xn_params = {}; struct virtchnl2_loopback loopback; ssize_t reply_sz; - loopback.vport_id = cpu_to_le32(vport->vport_id); - loopback.enable = idpf_is_feature_ena(vport, NETIF_F_LOOPBACK); + loopback.vport_id = cpu_to_le32(vport_id); + loopback.enable = loopback_ena; xn_params.vc_op = VIRTCHNL2_OP_LOOPBACK; xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC; xn_params.send_buf.iov_base = &loopback; xn_params.send_buf.iov_len = sizeof(loopback); - reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params); + reply_sz = idpf_vc_xn_exec(adapter, &xn_params); return reply_sz < 0 ? 
reply_sz : 0; } @@ -3619,22 +3640,21 @@ static int idpf_mac_filter_async_handler(struct idpf_adapter *adapter, /** * idpf_add_del_mac_filters - Add/del mac filters - * @vport: Virtual port data structure - * @np: Netdev private structure + * @adapter: adapter pointer used to send virtchnl message + * @vport_config: persistent vport structure to get the MAC filter list + * @vport_id: vport identifier used while preparing the virtchnl message * @add: Add or delete flag * @async: Don't wait for return message * * Returns 0 on success, error on failure. **/ -int idpf_add_del_mac_filters(struct idpf_vport *vport, - struct idpf_netdev_priv *np, - bool add, bool async) +int idpf_add_del_mac_filters(struct idpf_adapter *adapter, + struct idpf_vport_config *vport_config, + u32 vport_id, bool add, bool async) { struct virtchnl2_mac_addr_list *ma_list __free(kfree) = NULL; struct virtchnl2_mac_addr *mac_addr __free(kfree) = NULL; - struct idpf_adapter *adapter = np->adapter; struct idpf_vc_xn_params xn_params = {}; - struct idpf_vport_config *vport_config; u32 num_msgs, total_filters = 0; struct idpf_mac_filter *f; ssize_t reply_sz; @@ -3646,7 +3666,6 @@ int idpf_add_del_mac_filters(struct idpf_vport *vport, xn_params.async = async; xn_params.async_handler = idpf_mac_filter_async_handler; - vport_config = adapter->vport_config[np->vport_idx]; spin_lock_bh(&vport_config->mac_filter_list_lock); /* Find the number of newly added filters */ @@ -3715,7 +3734,7 @@ int idpf_add_del_mac_filters(struct idpf_vport *vport, memset(ma_list, 0, buf_size); } - ma_list->vport_id = cpu_to_le32(np->vport_id); + ma_list->vport_id = cpu_to_le32(vport_id); ma_list->num_mac_addr = cpu_to_le16(num_entries); memcpy(ma_list->mac_addr_list, &mac_addr[k], entries_size); diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h index cfeefbc5174f..9df90ba83309 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h @@ -100,7 +100,7 @@ void idpf_deinit_dflt_mbx(struct idpf_adapter *adapter); int idpf_vc_core_init(struct idpf_adapter *adapter); void idpf_vc_core_deinit(struct idpf_adapter *adapter); -int idpf_get_reg_intr_vecs(struct idpf_vport *vport, +int idpf_get_reg_intr_vecs(struct idpf_adapter *adapter, struct idpf_vec_regs *reg_vals); int idpf_queue_reg_init(struct idpf_vport *vport, struct idpf_q_vec_rsrc *rsrc, @@ -123,9 +123,9 @@ int idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q); u32 idpf_get_vport_id(struct idpf_vport *vport); int idpf_send_create_vport_msg(struct idpf_adapter *adapter, struct idpf_vport_max_q *max_q); -int idpf_send_destroy_vport_msg(struct idpf_vport *vport); -int idpf_send_enable_vport_msg(struct idpf_vport *vport); -int idpf_send_disable_vport_msg(struct idpf_vport *vport); +int idpf_send_destroy_vport_msg(struct idpf_adapter *adapter, u32 vport_id); +int idpf_send_enable_vport_msg(struct idpf_adapter *adapter, u32 vport_id); +int idpf_send_disable_vport_msg(struct idpf_adapter *adapter, u32 vport_id); int idpf_vport_adjust_qs(struct idpf_vport *vport, struct idpf_q_vec_rsrc *rsrc); @@ -133,17 +133,21 @@ int idpf_vport_alloc_max_qs(struct idpf_adapter *adapter, struct idpf_vport_max_q *max_q); void idpf_vport_dealloc_max_qs(struct idpf_adapter *adapter, struct idpf_vport_max_q *max_q); -int idpf_send_add_queues_msg(const struct idpf_vport *vport, u16 num_tx_q, - u16 num_complq, u16 num_rx_q, u16 num_rx_bufq); -int idpf_send_delete_queues_msg(struct idpf_vport 
*vport, - struct idpf_queue_id_reg_info *chunks); +int idpf_send_add_queues_msg(struct idpf_adapter *adapter, + struct idpf_vport_config *vport_config, + struct idpf_q_vec_rsrc *rsrc, + u32 vport_id); +int idpf_send_delete_queues_msg(struct idpf_adapter *adapter, + struct idpf_queue_id_reg_info *chunks, + u32 vport_id); int idpf_send_enable_queues_msg(struct idpf_vport *vport, struct idpf_queue_id_reg_info *chunks); int idpf_send_disable_queues_msg(struct idpf_vport *vport, struct idpf_q_vec_rsrc *rsrc, struct idpf_queue_id_reg_info *chunks); -int idpf_send_config_queues_msg(struct idpf_vport *vport, - struct idpf_q_vec_rsrc *rsrc); +int idpf_send_config_queues_msg(struct idpf_adapter *adapter, + struct idpf_q_vec_rsrc *rsrc, + u32 vport_id, bool rsc_ena); int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport, struct idpf_q_vec_rsrc *rsrc); @@ -152,26 +156,29 @@ int idpf_get_vec_ids(struct idpf_adapter *adapter, struct virtchnl2_vector_chunks *chunks); int idpf_send_alloc_vectors_msg(struct idpf_adapter *adapter, u16 num_vectors); int idpf_send_dealloc_vectors_msg(struct idpf_adapter *adapter); -int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, +int idpf_send_map_unmap_queue_vector_msg(struct idpf_adapter *adapter, struct idpf_q_vec_rsrc *rsrc, + u32 vport_id, bool map); -int idpf_add_del_mac_filters(struct idpf_vport *vport, - struct idpf_netdev_priv *np, - bool add, bool async); +int idpf_add_del_mac_filters(struct idpf_adapter *adapter, + struct idpf_vport_config *vport_config, + u32 vport_id, bool add, bool async); int idpf_set_promiscuous(struct idpf_adapter *adapter, struct idpf_vport_user_config_data *config_data, u32 vport_id); int idpf_check_supported_desc_ids(struct idpf_vport *vport); int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport); -int idpf_send_ena_dis_loopback_msg(struct idpf_vport *vport); -int idpf_send_get_stats_msg(struct idpf_vport *vport); +int idpf_send_ena_dis_loopback_msg(struct idpf_adapter *adapter, u32 vport_id, + bool loopback_ena); +int idpf_send_get_stats_msg(struct idpf_netdev_priv *np, + struct idpf_port_stats *port_stats); int idpf_send_set_sriov_vfs_msg(struct idpf_adapter *adapter, u16 num_vfs); -int idpf_send_get_set_rss_key_msg(struct idpf_vport *vport, +int idpf_send_get_set_rss_key_msg(struct idpf_adapter *adapter, struct idpf_rss_data *rss_data, - bool get); -int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport, + u32 vport_id, bool get); +int idpf_send_get_set_rss_lut_msg(struct idpf_adapter *adapter, struct idpf_rss_data *rss_data, - bool get); + u32 vport_id, bool get); #endif /* _IDPF_VIRTCHNL_H_ */
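As a quick orientation for the reworked declarations above, a converted call site might look like the following sketch. This is a hypothetical example only: it assumes the caller still holds a vport and derives the adapter and vport_id from it, as the old implementations did internally.

	/* Hypothetical caller sketch: the helpers above no longer take a
	 * vport, so a caller derives the adapter and vport_id once and
	 * passes any vport-dependent state (here the loopback flag)
	 * explicitly.
	 */
	static int example_set_loopback(struct idpf_vport *vport)
	{
		bool ena = idpf_is_feature_ena(vport, NETIF_F_LOOPBACK);

		return idpf_send_ena_dis_loopback_msg(vport->adapter,
						      vport->vport_id, ena);
	}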
From patchwork Thu May 8 21:50:12 2025
X-Patchwork-Submitter: Pavan Kumar Linga
X-Patchwork-Id: 2083152
X-Patchwork-Delegate: anthony.l.nguyen@intel.com
From: Pavan Kumar Linga
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, milena.olech@intel.com, anton.nadezhdin@intel.com, Pavan Kumar Linga, Madhu Chittim
Date: Thu, 8 May 2025 14:50:12 -0700
Message-ID: <20250508215013.32668-9-pavan.kumar.linga@intel.com>
In-Reply-To: <20250508215013.32668-1-pavan.kumar.linga@intel.com>
References: <20250508215013.32668-1-pavan.kumar.linga@intel.com>
Subject: [Intel-wired-lan] [PATCH iwl-next v4 8/9] idpf: avoid calling get_rx_ptypes for each vport

The RX ptypes received from the device control plane do not depend on vport info, but they might vary based on the queue model. When the driver requests the ptypes, the control plane fills both the ptype_id_10 (used for splitq) and ptype_id_8 (used for singleq) fields of the virtchnl2_ptype response structure. This makes it possible to call get_rx_ptypes once at the adapter level instead of once per vport. Parse and store the received splitq and singleq ptypes in separate lookup tables, and use the respective lookup table based on the queue model info. As part of the changes, pull the ptype protocol parsing code into a separate function. 
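Before the full diff below, here is a condensed sketch of the dual-table fill described above; the names and logic are taken from this patch's hunks, with the surrounding parsing loop elided:

	/* Each virtchnl2_ptype entry carries both IDs, so one pass fills the
	 * splitq (10-bit) and singleq (8-bit) tables; for singleq, only the
	 * first occurrence of an 8-bit ID is kept.
	 */
	u16 pt_10 = le16_to_cpu(ptype->ptype_id_10);
	u16 pt_8 = ptype->ptype_id_8;

	splitq_pt_lkup[pt_10] = rx_pt;
	if (!singleq_pt_lkup[pt_8].outer_ip)
		singleq_pt_lkup[pt_8] = rx_pt;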
Reviewed-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Tested-by: Samuel Salin --- drivers/net/ethernet/intel/idpf/idpf.h | 7 +- drivers/net/ethernet/intel/idpf/idpf_lib.c | 9 - drivers/net/ethernet/intel/idpf/idpf_txrx.c | 4 +- .../net/ethernet/intel/idpf/idpf_virtchnl.c | 310 ++++++++++-------- .../net/ethernet/intel/idpf/idpf_virtchnl.h | 1 - 5 files changed, 174 insertions(+), 157 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h index e53a43d5c867..627010dac3aa 100644 --- a/drivers/net/ethernet/intel/idpf/idpf.h +++ b/drivers/net/ethernet/intel/idpf/idpf.h @@ -340,7 +340,6 @@ struct idpf_q_vec_rsrc { * @default_mac_addr: device will give a default MAC to use * @rx_itr_profile: RX profiles for Dynamic Interrupt Moderation * @tx_itr_profile: TX profiles for Dynamic Interrupt Moderation - * @rx_ptype_lkup: Lookup table for ptypes on RX * @port_stats: per port csum, header split, and other offload stats * @default_vport: Use this vport if one isn't specified * @crc_enable: Enable CRC insertion offload @@ -369,7 +368,6 @@ struct idpf_vport { u16 rx_itr_profile[IDPF_DIM_PROFILE_SLOTS]; u16 tx_itr_profile[IDPF_DIM_PROFILE_SLOTS]; - struct libeth_rx_pt *rx_ptype_lkup; struct idpf_port_stats port_stats; bool default_vport; bool crc_enable; @@ -589,6 +587,8 @@ struct idpf_vc_xn_manager; * @vport_params_reqd: Vport params requested * @vport_params_recvd: Vport params received * @vport_ids: Array of device given vport identifiers + * @singleq_pt_lkup: Lookup table for singleq RX ptypes + * @splitq_pt_lkup: Lookup table for splitq RX ptypes * @vport_config: Vport config parameters * @max_vports: Maximum vports that can be allocated * @num_alloc_vports: Current number of vports allocated @@ -645,6 +645,9 @@ struct idpf_adapter { struct virtchnl2_create_vport **vport_params_recvd; u32 *vport_ids; + struct libeth_rx_pt *singleq_pt_lkup; + struct libeth_rx_pt *splitq_pt_lkup; + struct idpf_vport_config **vport_config; u16 max_vports; u16 num_alloc_vports; diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c index 6f295abdc6c1..7dabf5ddbf16 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_lib.c +++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c @@ -905,9 +905,6 @@ static void idpf_decfg_netdev(struct idpf_vport *vport) struct idpf_adapter *adapter = vport->adapter; u16 idx = vport->idx; - kfree(vport->rx_ptype_lkup); - vport->rx_ptype_lkup = NULL; - if (test_and_clear_bit(IDPF_VPORT_REG_NETDEV, adapter->vport_config[idx]->flags)) { unregister_netdev(vport->netdev); @@ -1518,10 +1515,6 @@ void idpf_init_task(struct work_struct *work) if (idpf_cfg_netdev(vport)) goto cfg_netdev_err; - err = idpf_send_get_rx_ptype_msg(vport); - if (err) - goto handle_err; - /* Once state is put into DOWN, driver is ready for dev_open */ np = netdev_priv(vport->netdev); np->state = __IDPF_VPORT_DOWN; @@ -1567,8 +1560,6 @@ void idpf_init_task(struct work_struct *work) return; -handle_err: - idpf_decfg_netdev(vport); cfg_netdev_err: idpf_vport_rel(vport); adapter->vports[index] = NULL; diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c index f6c263c82e97..26e8270d44f8 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -1458,6 +1458,7 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, struct idpf_q_vec_rsrc *rsrc, u16 num_rxq) { + struct idpf_adapter *adapter = vport->adapter; int k, err = 0; bool hs; 
@@ -1548,6 +1549,7 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, if (!idpf_is_queue_model_split(rsrc->rxq_model)) { q = rx_qgrp->singleq.rxqs[j]; + q->rx_ptype_lkup = adapter->singleq_pt_lkup; goto setup_rxq; } q = &rx_qgrp->splitq.rxq_sets[j]->rxq; @@ -1558,10 +1560,10 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, &rx_qgrp->splitq.bufq_sets[1].refillqs[j]; idpf_queue_assign(HSPLIT_EN, q, hs); + q->rx_ptype_lkup = adapter->splitq_pt_lkup; setup_rxq: q->desc_count = rsrc->rxq_desc_count; - q->rx_ptype_lkup = vport->rx_ptype_lkup; q->netdev = vport->netdev; q->bufq_sets = rx_qgrp->splitq.bufq_sets; q->idx = (i * num_rxq) + j; diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c index ebf60ab7b2df..c2d4caa4408d 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c @@ -2493,36 +2493,143 @@ static void idpf_finalize_ptype_lookup(struct libeth_rx_pt *ptype) libeth_rx_pt_gen_hash_type(ptype); } +/** + * idpf_parse_protocol_ids - parse protocol IDs for a given packet type + * @ptype: packet type to parse + * @rx_pt: store the parsed packet type info into + */ +static void idpf_parse_protocol_ids(struct virtchnl2_ptype *ptype, + struct libeth_rx_pt *rx_pt) +{ + struct idpf_ptype_state pstate = {}; + + for (u32 j = 0; j < ptype->proto_id_count; j++) { + u16 id = le16_to_cpu(ptype->proto_id[j]); + + switch (id) { + case VIRTCHNL2_PROTO_HDR_GRE: + if (pstate.tunnel_state == IDPF_PTYPE_TUNNEL_IP) { + rx_pt->tunnel_type = + LIBETH_RX_PT_TUNNEL_IP_GRENAT; + pstate.tunnel_state |= + IDPF_PTYPE_TUNNEL_IP_GRENAT; + } + break; + case VIRTCHNL2_PROTO_HDR_MAC: + rx_pt->outer_ip = LIBETH_RX_PT_OUTER_L2; + if (pstate.tunnel_state == IDPF_TUN_IP_GRE) { + rx_pt->tunnel_type = + LIBETH_RX_PT_TUNNEL_IP_GRENAT_MAC; + pstate.tunnel_state |= + IDPF_PTYPE_TUNNEL_IP_GRENAT_MAC; + } + break; + case VIRTCHNL2_PROTO_HDR_IPV4: + idpf_fill_ptype_lookup(rx_pt, &pstate, true, false); + break; + case VIRTCHNL2_PROTO_HDR_IPV6: + idpf_fill_ptype_lookup(rx_pt, &pstate, false, false); + break; + case VIRTCHNL2_PROTO_HDR_IPV4_FRAG: + idpf_fill_ptype_lookup(rx_pt, &pstate, true, true); + break; + case VIRTCHNL2_PROTO_HDR_IPV6_FRAG: + idpf_fill_ptype_lookup(rx_pt, &pstate, false, true); + break; + case VIRTCHNL2_PROTO_HDR_UDP: + rx_pt->inner_prot = LIBETH_RX_PT_INNER_UDP; + break; + case VIRTCHNL2_PROTO_HDR_TCP: + rx_pt->inner_prot = LIBETH_RX_PT_INNER_TCP; + break; + case VIRTCHNL2_PROTO_HDR_SCTP: + rx_pt->inner_prot = LIBETH_RX_PT_INNER_SCTP; + break; + case VIRTCHNL2_PROTO_HDR_ICMP: + rx_pt->inner_prot = LIBETH_RX_PT_INNER_ICMP; + break; + case VIRTCHNL2_PROTO_HDR_PAY: + rx_pt->payload_layer = LIBETH_RX_PT_PAYLOAD_L2; + break; + case VIRTCHNL2_PROTO_HDR_ICMPV6: + case VIRTCHNL2_PROTO_HDR_IPV6_EH: + case VIRTCHNL2_PROTO_HDR_PRE_MAC: + case VIRTCHNL2_PROTO_HDR_POST_MAC: + case VIRTCHNL2_PROTO_HDR_ETHERTYPE: + case VIRTCHNL2_PROTO_HDR_SVLAN: + case VIRTCHNL2_PROTO_HDR_CVLAN: + case VIRTCHNL2_PROTO_HDR_MPLS: + case VIRTCHNL2_PROTO_HDR_MMPLS: + case VIRTCHNL2_PROTO_HDR_PTP: + case VIRTCHNL2_PROTO_HDR_CTRL: + case VIRTCHNL2_PROTO_HDR_LLDP: + case VIRTCHNL2_PROTO_HDR_ARP: + case VIRTCHNL2_PROTO_HDR_ECP: + case VIRTCHNL2_PROTO_HDR_EAPOL: + case VIRTCHNL2_PROTO_HDR_PPPOD: + case VIRTCHNL2_PROTO_HDR_PPPOE: + case VIRTCHNL2_PROTO_HDR_IGMP: + case VIRTCHNL2_PROTO_HDR_AH: + case VIRTCHNL2_PROTO_HDR_ESP: + case VIRTCHNL2_PROTO_HDR_IKE: + case VIRTCHNL2_PROTO_HDR_NATT_KEEP: + case 
VIRTCHNL2_PROTO_HDR_L2TPV2: + case VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL: + case VIRTCHNL2_PROTO_HDR_L2TPV3: + case VIRTCHNL2_PROTO_HDR_GTP: + case VIRTCHNL2_PROTO_HDR_GTP_EH: + case VIRTCHNL2_PROTO_HDR_GTPCV2: + case VIRTCHNL2_PROTO_HDR_GTPC_TEID: + case VIRTCHNL2_PROTO_HDR_GTPU: + case VIRTCHNL2_PROTO_HDR_GTPU_UL: + case VIRTCHNL2_PROTO_HDR_GTPU_DL: + case VIRTCHNL2_PROTO_HDR_ECPRI: + case VIRTCHNL2_PROTO_HDR_VRRP: + case VIRTCHNL2_PROTO_HDR_OSPF: + case VIRTCHNL2_PROTO_HDR_TUN: + case VIRTCHNL2_PROTO_HDR_NVGRE: + case VIRTCHNL2_PROTO_HDR_VXLAN: + case VIRTCHNL2_PROTO_HDR_VXLAN_GPE: + case VIRTCHNL2_PROTO_HDR_GENEVE: + case VIRTCHNL2_PROTO_HDR_NSH: + case VIRTCHNL2_PROTO_HDR_QUIC: + case VIRTCHNL2_PROTO_HDR_PFCP: + case VIRTCHNL2_PROTO_HDR_PFCP_NODE: + case VIRTCHNL2_PROTO_HDR_PFCP_SESSION: + case VIRTCHNL2_PROTO_HDR_RTP: + case VIRTCHNL2_PROTO_HDR_NO_PROTO: + break; + default: + break; + } + } +} + /** * idpf_send_get_rx_ptype_msg - Send virtchnl for ptype info - * @vport: virtual port data structure + * @adapter: driver specific private structure * * Returns 0 on success, negative on failure. */ -int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport) +static int idpf_send_get_rx_ptype_msg(struct idpf_adapter *adapter) { struct virtchnl2_get_ptype_info *get_ptype_info __free(kfree) = NULL; struct virtchnl2_get_ptype_info *ptype_info __free(kfree) = NULL; - struct libeth_rx_pt *ptype_lkup __free(kfree) = NULL; - int max_ptype, ptypes_recvd = 0, ptype_offset; - struct idpf_adapter *adapter = vport->adapter; + struct libeth_rx_pt *singleq_pt_lkup __free(kfree) = NULL; + struct libeth_rx_pt *splitq_pt_lkup __free(kfree) = NULL; struct idpf_vc_xn_params xn_params = {}; + int ptypes_recvd = 0, ptype_offset; + u32 max_ptype = IDPF_RX_MAX_PTYPE; u16 next_ptype_id = 0; ssize_t reply_sz; - bool is_splitq; - int i, j, k; - - if (vport->rx_ptype_lkup) - return 0; - is_splitq = idpf_is_queue_model_split(vport->dflt_qv_rsrc.rxq_model); - if (is_splitq) - max_ptype = IDPF_RX_MAX_PTYPE; - else - max_ptype = IDPF_RX_MAX_BASE_PTYPE; + singleq_pt_lkup = kcalloc(IDPF_RX_MAX_BASE_PTYPE, + sizeof(*singleq_pt_lkup), GFP_KERNEL); + if (!singleq_pt_lkup) + return -ENOMEM; - ptype_lkup = kcalloc(max_ptype, sizeof(*ptype_lkup), GFP_KERNEL); - if (!ptype_lkup) + splitq_pt_lkup = kcalloc(max_ptype, sizeof(*splitq_pt_lkup), GFP_KERNEL); + if (!splitq_pt_lkup) return -ENOMEM; get_ptype_info = kzalloc(sizeof(*get_ptype_info), GFP_KERNEL); @@ -2563,154 +2670,59 @@ int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport) ptype_offset = IDPF_RX_PTYPE_HDR_SZ; - for (i = 0; i < le16_to_cpu(ptype_info->num_ptypes); i++) { - struct idpf_ptype_state pstate = { }; + for (u16 i = 0; i < le16_to_cpu(ptype_info->num_ptypes); i++) { + struct libeth_rx_pt rx_pt = {}; struct virtchnl2_ptype *ptype; - u16 id; + u16 pt_10, pt_8; ptype = (struct virtchnl2_ptype *) ((u8 *)ptype_info + ptype_offset); + pt_10 = le16_to_cpu(ptype->ptype_id_10); + pt_8 = ptype->ptype_id_8; + ptype_offset += IDPF_GET_PTYPE_SIZE(ptype); if (ptype_offset > IDPF_CTLQ_MAX_BUF_LEN) return -EINVAL; /* 0xFFFF indicates end of ptypes */ - if (le16_to_cpu(ptype->ptype_id_10) == - IDPF_INVALID_PTYPE_ID) + if (pt_10 == IDPF_INVALID_PTYPE_ID) goto out; - if (is_splitq) - k = le16_to_cpu(ptype->ptype_id_10); - else - k = ptype->ptype_id_8; - - for (j = 0; j < ptype->proto_id_count; j++) { - id = le16_to_cpu(ptype->proto_id[j]); - switch (id) { - case VIRTCHNL2_PROTO_HDR_GRE: - if (pstate.tunnel_state == - IDPF_PTYPE_TUNNEL_IP) { - ptype_lkup[k].tunnel_type = - 
LIBETH_RX_PT_TUNNEL_IP_GRENAT; - pstate.tunnel_state |= - IDPF_PTYPE_TUNNEL_IP_GRENAT; - } - break; - case VIRTCHNL2_PROTO_HDR_MAC: - ptype_lkup[k].outer_ip = - LIBETH_RX_PT_OUTER_L2; - if (pstate.tunnel_state == - IDPF_TUN_IP_GRE) { - ptype_lkup[k].tunnel_type = - LIBETH_RX_PT_TUNNEL_IP_GRENAT_MAC; - pstate.tunnel_state |= - IDPF_PTYPE_TUNNEL_IP_GRENAT_MAC; - } - break; - case VIRTCHNL2_PROTO_HDR_IPV4: - idpf_fill_ptype_lookup(&ptype_lkup[k], - &pstate, true, - false); - break; - case VIRTCHNL2_PROTO_HDR_IPV6: - idpf_fill_ptype_lookup(&ptype_lkup[k], - &pstate, false, - false); - break; - case VIRTCHNL2_PROTO_HDR_IPV4_FRAG: - idpf_fill_ptype_lookup(&ptype_lkup[k], - &pstate, true, - true); - break; - case VIRTCHNL2_PROTO_HDR_IPV6_FRAG: - idpf_fill_ptype_lookup(&ptype_lkup[k], - &pstate, false, - true); - break; - case VIRTCHNL2_PROTO_HDR_UDP: - ptype_lkup[k].inner_prot = - LIBETH_RX_PT_INNER_UDP; - break; - case VIRTCHNL2_PROTO_HDR_TCP: - ptype_lkup[k].inner_prot = - LIBETH_RX_PT_INNER_TCP; - break; - case VIRTCHNL2_PROTO_HDR_SCTP: - ptype_lkup[k].inner_prot = - LIBETH_RX_PT_INNER_SCTP; - break; - case VIRTCHNL2_PROTO_HDR_ICMP: - ptype_lkup[k].inner_prot = - LIBETH_RX_PT_INNER_ICMP; - break; - case VIRTCHNL2_PROTO_HDR_PAY: - ptype_lkup[k].payload_layer = - LIBETH_RX_PT_PAYLOAD_L2; - break; - case VIRTCHNL2_PROTO_HDR_ICMPV6: - case VIRTCHNL2_PROTO_HDR_IPV6_EH: - case VIRTCHNL2_PROTO_HDR_PRE_MAC: - case VIRTCHNL2_PROTO_HDR_POST_MAC: - case VIRTCHNL2_PROTO_HDR_ETHERTYPE: - case VIRTCHNL2_PROTO_HDR_SVLAN: - case VIRTCHNL2_PROTO_HDR_CVLAN: - case VIRTCHNL2_PROTO_HDR_MPLS: - case VIRTCHNL2_PROTO_HDR_MMPLS: - case VIRTCHNL2_PROTO_HDR_PTP: - case VIRTCHNL2_PROTO_HDR_CTRL: - case VIRTCHNL2_PROTO_HDR_LLDP: - case VIRTCHNL2_PROTO_HDR_ARP: - case VIRTCHNL2_PROTO_HDR_ECP: - case VIRTCHNL2_PROTO_HDR_EAPOL: - case VIRTCHNL2_PROTO_HDR_PPPOD: - case VIRTCHNL2_PROTO_HDR_PPPOE: - case VIRTCHNL2_PROTO_HDR_IGMP: - case VIRTCHNL2_PROTO_HDR_AH: - case VIRTCHNL2_PROTO_HDR_ESP: - case VIRTCHNL2_PROTO_HDR_IKE: - case VIRTCHNL2_PROTO_HDR_NATT_KEEP: - case VIRTCHNL2_PROTO_HDR_L2TPV2: - case VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL: - case VIRTCHNL2_PROTO_HDR_L2TPV3: - case VIRTCHNL2_PROTO_HDR_GTP: - case VIRTCHNL2_PROTO_HDR_GTP_EH: - case VIRTCHNL2_PROTO_HDR_GTPCV2: - case VIRTCHNL2_PROTO_HDR_GTPC_TEID: - case VIRTCHNL2_PROTO_HDR_GTPU: - case VIRTCHNL2_PROTO_HDR_GTPU_UL: - case VIRTCHNL2_PROTO_HDR_GTPU_DL: - case VIRTCHNL2_PROTO_HDR_ECPRI: - case VIRTCHNL2_PROTO_HDR_VRRP: - case VIRTCHNL2_PROTO_HDR_OSPF: - case VIRTCHNL2_PROTO_HDR_TUN: - case VIRTCHNL2_PROTO_HDR_NVGRE: - case VIRTCHNL2_PROTO_HDR_VXLAN: - case VIRTCHNL2_PROTO_HDR_VXLAN_GPE: - case VIRTCHNL2_PROTO_HDR_GENEVE: - case VIRTCHNL2_PROTO_HDR_NSH: - case VIRTCHNL2_PROTO_HDR_QUIC: - case VIRTCHNL2_PROTO_HDR_PFCP: - case VIRTCHNL2_PROTO_HDR_PFCP_NODE: - case VIRTCHNL2_PROTO_HDR_PFCP_SESSION: - case VIRTCHNL2_PROTO_HDR_RTP: - case VIRTCHNL2_PROTO_HDR_NO_PROTO: - break; - default: - break; - } - } + idpf_parse_protocol_ids(ptype, &rx_pt); + idpf_finalize_ptype_lookup(&rx_pt); - idpf_finalize_ptype_lookup(&ptype_lkup[k]); + /* For a given protocol ID stack, the ptype value might + * vary between ptype_id_10 and ptype_id_8. So store + * them separately for splitq and singleq. Also skip + * the repeated ptypes in case of singleq. 
+ */ + splitq_pt_lkup[pt_10] = rx_pt; + if (!singleq_pt_lkup[pt_8].outer_ip) + singleq_pt_lkup[pt_8] = rx_pt; } } out: - vport->rx_ptype_lkup = no_free_ptr(ptype_lkup); + adapter->splitq_pt_lkup = no_free_ptr(splitq_pt_lkup); + adapter->singleq_pt_lkup = no_free_ptr(singleq_pt_lkup); return 0; } +/** + * idpf_rel_rx_pt_lkup - release RX ptype lookup table + * @adapter: adapter pointer to get the lookup table + */ +static void idpf_rel_rx_pt_lkup(struct idpf_adapter *adapter) +{ + kfree(adapter->splitq_pt_lkup); + adapter->splitq_pt_lkup = NULL; + + kfree(adapter->singleq_pt_lkup); + adapter->singleq_pt_lkup = NULL; +} + /** * idpf_send_ena_dis_loopback_msg - Send virtchnl enable/disable loopback * message @@ -2984,6 +2996,13 @@ int idpf_vc_core_init(struct idpf_adapter *adapter) goto err_intr_req; } + err = idpf_send_get_rx_ptype_msg(adapter); + if (err) { + dev_err(&adapter->pdev->dev, "failed to get RX ptypes: %d\n", + err); + goto intr_rel; + } + err = idpf_ptp_init(adapter); if (err) pci_err(adapter->pdev, "PTP init failed, err=%pe\n", ERR_PTR(err)); @@ -3000,6 +3019,8 @@ int idpf_vc_core_init(struct idpf_adapter *adapter) return 0; +intr_rel: + idpf_intr_rel(adapter); err_intr_req: cancel_delayed_work_sync(&adapter->serv_task); cancel_delayed_work_sync(&adapter->mbx_task); @@ -3052,6 +3073,7 @@ void idpf_vc_core_deinit(struct idpf_adapter *adapter) idpf_ptp_release(adapter); idpf_deinit_task(adapter); + idpf_rel_rx_pt_lkup(adapter); idpf_intr_rel(adapter); if (remove_in_prog) diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h index 9df90ba83309..f29536eed707 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h @@ -168,7 +168,6 @@ int idpf_set_promiscuous(struct idpf_adapter *adapter, struct idpf_vport_user_config_data *config_data, u32 vport_id); int idpf_check_supported_desc_ids(struct idpf_vport *vport); -int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport); int idpf_send_ena_dis_loopback_msg(struct idpf_adapter *adapter, u32 vport_id, bool loopback_ena); int idpf_send_get_stats_msg(struct idpf_netdev_priv *np,
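At queue setup, picking between the two adapter-level tables then reduces to a queue model check; the following is condensed from the idpf_rxq_group_alloc() hunk earlier in this patch, with the surrounding setup code elided:

	/* Condensed from idpf_rxq_group_alloc(): singleq rxqs use the 8-bit
	 * table, splitq rxqs the 10-bit one.
	 */
	if (!idpf_is_queue_model_split(rsrc->rxq_model))
		q->rx_ptype_lkup = adapter->singleq_pt_lkup;
	else
		q->rx_ptype_lkup = adapter->splitq_pt_lkup;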
From patchwork Thu May 8 21:50:13 2025
X-Patchwork-Submitter: Pavan Kumar Linga
X-Patchwork-Id: 2083150
X-Patchwork-Delegate: anthony.l.nguyen@intel.com
From: Pavan Kumar Linga
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, milena.olech@intel.com, anton.nadezhdin@intel.com, Pavan Kumar Linga, Madhu Chittim
Date: Thu, 8 May 2025 14:50:13 -0700
Message-ID: <20250508215013.32668-10-pavan.kumar.linga@intel.com>
In-Reply-To: <20250508215013.32668-1-pavan.kumar.linga@intel.com>
References: <20250508215013.32668-1-pavan.kumar.linga@intel.com>
Subject: [Intel-wired-lan] [PATCH iwl-next v4 9/9] idpf: generalize mailbox API

Add a control queue parameter to all mailbox APIs so that they can also be used with a non-default mailbox.

Signed-off-by: Anton Nadezhdin Reviewed-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Tested-by: Samuel Salin --- drivers/net/ethernet/intel/idpf/idpf_lib.c | 2 +- drivers/net/ethernet/intel/idpf/idpf_vf_dev.c | 3 +- .../net/ethernet/intel/idpf/idpf_virtchnl.c | 33 ++++++++++--------- .../net/ethernet/intel/idpf/idpf_virtchnl.h | 6 ++-- 4 files changed, 24 insertions(+), 20 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c index 7dabf5ddbf16..492b03d8f718 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_lib.c +++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c @@ -1202,7 +1202,7 @@ void idpf_mbx_task(struct work_struct *work) queue_delayed_work(adapter->mbx_wq, &adapter->mbx_task, msecs_to_jiffies(300)); - idpf_recv_mb_msg(adapter); + idpf_recv_mb_msg(adapter, adapter->hw.arq); } /** diff --git a/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c b/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c index 0bb07bcb974b..ac091280e828 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c +++ b/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c @@ -146,7 +146,8 @@ static void idpf_vf_trigger_reset(struct idpf_adapter *adapter, /* Do not send VIRTCHNL2_OP_RESET_VF message on driver unload */ if (trig_cause == IDPF_HR_FUNC_RESET && !test_bit(IDPF_REMOVE_IN_PROG, adapter->flags)) - idpf_send_mb_msg(adapter, VIRTCHNL2_OP_RESET_VF, 0, NULL, 0); + idpf_send_mb_msg(adapter, adapter->hw.asq, + VIRTCHNL2_OP_RESET_VF, 0, NULL, 0); } /** diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c index c2d4caa4408d..b7bec559f5ee 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c @@ -116,13 +116,15 @@ static void idpf_recv_event_msg(struct idpf_adapter *adapter, /** * idpf_mb_clean - Reclaim the send mailbox queue entries - * @adapter: Driver specific private structure + * @adapter: driver specific private structure + * @asq: send control queue info * * Reclaim the send mailbox queue entries to be used to send further messages * * Returns 0 on success, negative on failure */ -static int idpf_mb_clean(struct idpf_adapter *adapter) +static int idpf_mb_clean(struct 
idpf_adapter *adapter, + struct idpf_ctlq_info *asq) { u16 i, num_q_msg = IDPF_DFLT_MBX_Q_LEN; struct idpf_ctlq_msg **q_msg; @@ -133,7 +135,7 @@ static int idpf_mb_clean(struct idpf_adapter *adapter) if (!q_msg) return -ENOMEM; - err = idpf_ctlq_clean_sq(adapter->hw.asq, &num_q_msg, q_msg); + err = idpf_ctlq_clean_sq(asq, &num_q_msg, q_msg); if (err) goto err_kfree; @@ -205,7 +207,8 @@ static void idpf_prepare_ptp_mb_msg(struct idpf_adapter *adapter, u32 op, /** * idpf_send_mb_msg - Send message over mailbox - * @adapter: Driver specific private structure + * @adapter: driver specific private structure + * @asq: control queue to send message to * @op: virtchnl opcode * @msg_size: size of the payload * @msg: pointer to buffer holding the payload @@ -215,8 +218,8 @@ static void idpf_prepare_ptp_mb_msg(struct idpf_adapter *adapter, u32 op, * * Returns 0 on success, negative on failure */ -int idpf_send_mb_msg(struct idpf_adapter *adapter, u32 op, - u16 msg_size, u8 *msg, u16 cookie) +int idpf_send_mb_msg(struct idpf_adapter *adapter, struct idpf_ctlq_info *asq, + u32 op, u16 msg_size, u8 *msg, u16 cookie) { struct idpf_ctlq_msg *ctlq_msg; struct idpf_dma_mem *dma_mem; @@ -230,7 +233,7 @@ int idpf_send_mb_msg(struct idpf_adapter *adapter, u32 op, if (idpf_is_reset_detected(adapter)) return 0; - err = idpf_mb_clean(adapter); + err = idpf_mb_clean(adapter, asq); if (err) return err; @@ -266,7 +269,7 @@ int idpf_send_mb_msg(struct idpf_adapter *adapter, u32 op, ctlq_msg->ctx.indirect.payload = dma_mem; ctlq_msg->ctx.sw_cookie.data = cookie; - err = idpf_ctlq_send(&adapter->hw, adapter->hw.asq, 1, ctlq_msg); + err = idpf_ctlq_send(&adapter->hw, asq, 1, ctlq_msg); if (err) goto send_error; @@ -462,7 +465,7 @@ ssize_t idpf_vc_xn_exec(struct idpf_adapter *adapter, cookie = FIELD_PREP(IDPF_VC_XN_SALT_M, xn->salt) | FIELD_PREP(IDPF_VC_XN_IDX_M, xn->idx); - retval = idpf_send_mb_msg(adapter, params->vc_op, + retval = idpf_send_mb_msg(adapter, adapter->hw.asq, params->vc_op, send_buf->iov_len, send_buf->iov_base, cookie); if (retval) { @@ -661,12 +664,13 @@ idpf_vc_xn_forward_reply(struct idpf_adapter *adapter, /** * idpf_recv_mb_msg - Receive message over mailbox - * @adapter: Driver specific private structure + * @adapter: driver specific private structure + * @arq: control queue to receive message from * * Will receive control queue message and posts the receive buffer. Returns 0 * on success and negative on failure. */ -int idpf_recv_mb_msg(struct idpf_adapter *adapter) +int idpf_recv_mb_msg(struct idpf_adapter *adapter, struct idpf_ctlq_info *arq) { struct idpf_ctlq_msg ctlq_msg; struct idpf_dma_mem *dma_mem; @@ -678,7 +682,7 @@ int idpf_recv_mb_msg(struct idpf_adapter *adapter) * actually received on num_recv. 
*/ num_recv = 1; - err = idpf_ctlq_recv(adapter->hw.arq, &num_recv, &ctlq_msg); + err = idpf_ctlq_recv(arq, &num_recv, &ctlq_msg); if (err || !num_recv) break; @@ -694,8 +698,7 @@ int idpf_recv_mb_msg(struct idpf_adapter *adapter) else err = idpf_vc_xn_forward_reply(adapter, &ctlq_msg); - post_err = idpf_ctlq_post_rx_buffs(&adapter->hw, - adapter->hw.arq, + post_err = idpf_ctlq_post_rx_buffs(&adapter->hw, arq, &num_recv, &dma_mem); /* If post failed clear the only buffer we supplied */ @@ -2825,7 +2828,7 @@ int idpf_init_dflt_mbx(struct idpf_adapter *adapter) void idpf_deinit_dflt_mbx(struct idpf_adapter *adapter) { if (adapter->hw.arq && adapter->hw.asq) { - idpf_mb_clean(adapter); + idpf_mb_clean(adapter, adapter->hw.asq); idpf_ctlq_deinit(&adapter->hw); } adapter->hw.arq = NULL; diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h index f29536eed707..93b540e49b82 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h @@ -115,9 +115,9 @@ bool idpf_sideband_action_ena(struct idpf_vport *vport, struct ethtool_rx_flow_spec *fsp); unsigned int idpf_fsteer_max_rules(struct idpf_vport *vport); -int idpf_recv_mb_msg(struct idpf_adapter *adapter); -int idpf_send_mb_msg(struct idpf_adapter *adapter, u32 op, - u16 msg_size, u8 *msg, u16 cookie); +int idpf_recv_mb_msg(struct idpf_adapter *adapter, struct idpf_ctlq_info *arq); +int idpf_send_mb_msg(struct idpf_adapter *adapter, struct idpf_ctlq_info *asq, + u32 op, u16 msg_size, u8 *msg, u16 cookie); int idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q); u32 idpf_get_vport_id(struct idpf_vport *vport);
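To close out the series, a minimal usage sketch of the generalized mailbox API. The calls mirror the hunks above; callers that want the default mailbox simply pass the default send and receive queues, while any other idpf_ctlq_info could be supplied instead:

	/* Default-mailbox usage after this patch: the control queue is now an
	 * explicit argument, so the same helpers can drive other mailboxes.
	 */
	err = idpf_send_mb_msg(adapter, adapter->hw.asq,
			       VIRTCHNL2_OP_RESET_VF, 0, NULL, 0);

	/* The receive side takes the receive queue the same way. */
	err = idpf_recv_mb_msg(adapter, adapter->hw.arq);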