From patchwork Wed Jul 22 22:10:31 2020
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 1334237
X-Patchwork-Delegate: davem@davemloft.net
From: Alexander Lobakin
To: "David S. Miller", Jakub Kicinski
CC: Alexander Lobakin, Igor Russkikh, Michal Kalderon, Ariel Elior,
    Denis Bolotin, Doug Ledford, Jason Gunthorpe, Alexei Starovoitov,
    Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
    Martin KaFai Lau, Song Liu, Yonghong Song, Andrii Nakryiko, KP Singh
Subject: [PATCH v2 net-next 01/15] qed: reformat "qed_chain.h" a bit
Date: Thu, 23 Jul 2020 01:10:31 +0300
Message-ID: <20200722221045.5436-2-alobakin@marvell.com>
In-Reply-To: <20200722221045.5436-1-alobakin@marvell.com>
References: <20200722221045.5436-1-alobakin@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

Reformat the struct and macro definitions a bit prior to making
functional changes.

Signed-off-by: Alexander Lobakin
Signed-off-by: Igor Russkikh
Signed-off-by: Michal Kalderon
---
 include/linux/qed/qed_chain.h | 126 ++++++++++++++++++----------------
 1 file changed, 66 insertions(+), 60 deletions(-)

diff --git a/include/linux/qed/qed_chain.h b/include/linux/qed/qed_chain.h
index 7071dc92b4e2..087073517c09 100644
--- a/include/linux/qed/qed_chain.h
+++ b/include/linux/qed/qed_chain.h
@@ -26,9 +26,9 @@ enum qed_chain_mode {
 };

 enum qed_chain_use_mode {
-	QED_CHAIN_USE_TO_PRODUCE,	/* Chain starts empty */
-	QED_CHAIN_USE_TO_CONSUME,	/* Chain starts full */
-	QED_CHAIN_USE_TO_CONSUME_PRODUCE,	/* Chain starts empty */
+	QED_CHAIN_USE_TO_PRODUCE,		/* Chain starts empty */
+	QED_CHAIN_USE_TO_CONSUME,		/* Chain starts full */
+	QED_CHAIN_USE_TO_CONSUME_PRODUCE,	/* Chain starts empty */
 };

 enum qed_chain_cnt_type {
@@ -40,84 +40,86 @@ enum qed_chain_cnt_type {
 };

 struct qed_chain_next {
-	struct regpair	next_phys;
-	void		*next_virt;
+	struct regpair				next_phys;
+	void					*next_virt;
 };

 struct qed_chain_pbl_u16 {
-	u16	prod_page_idx;
-	u16	cons_page_idx;
+	u16					prod_page_idx;
+	u16					cons_page_idx;
 };

 struct qed_chain_pbl_u32 {
-	u32	prod_page_idx;
-	u32	cons_page_idx;
+	u32					prod_page_idx;
+	u32					cons_page_idx;
 };

 struct qed_chain_ext_pbl {
-	dma_addr_t	p_pbl_phys;
-	void		*p_pbl_virt;
+	dma_addr_t				p_pbl_phys;
+	void					*p_pbl_virt;
 };

 struct qed_chain_u16 {
 	/* Cyclic index of next element to produce/consme */
-	u16	prod_idx;
-	u16	cons_idx;
+	u16					prod_idx;
+	u16					cons_idx;
 };

 struct qed_chain_u32 {
 	/* Cyclic index of next element to produce/consme */
-	u32	prod_idx;
-	u32	cons_idx;
+	u32					prod_idx;
+	u32					cons_idx;
 };

 struct addr_tbl_entry {
-	void		*virt_addr;
-	dma_addr_t	dma_map;
+	void					*virt_addr;
+	dma_addr_t				dma_map;
 };

 struct qed_chain {
-	/* fastpath portion of the chain - required for commands such
+	/* Fastpath portion of the chain - required for commands such
 	 * as produce / consume.
 	 */
+
 	/* Point to next element to produce/consume */
-	void		*p_prod_elem;
-	void		*p_cons_elem;
+	void					*p_prod_elem;
+	void					*p_cons_elem;

 	/* Fastpath portions of the PBL [if exists] */
+
 	struct {
 		/* Table for keeping the virtual and physical addresses of the
 		 * chain pages, respectively to the physical addresses
 		 * in the pbl table.
 		 */
-		struct addr_tbl_entry	*pp_addr_tbl;
+		struct addr_tbl_entry		*pp_addr_tbl;

 		union {
-			struct qed_chain_pbl_u16 u16;
-			struct qed_chain_pbl_u32 u32;
-		} c;
-	} pbl;
+			struct qed_chain_pbl_u16 u16;
+			struct qed_chain_pbl_u32 u32;
+		} c;
+	} pbl;

 	union {
-		struct qed_chain_u16	chain16;
-		struct qed_chain_u32	chain32;
-	} u;
+		struct qed_chain_u16		chain16;
+		struct qed_chain_u32		chain32;
+	} u;

 	/* Capacity counts only usable elements */
-	u32	capacity;
-	u32	page_cnt;
+	u32					capacity;
+	u32					page_cnt;

-	enum qed_chain_mode	mode;
+	enum qed_chain_mode			mode;

 	/* Elements information for fast calculations */
-	u16	elem_per_page;
-	u16	elem_per_page_mask;
-	u16	elem_size;
-	u16	next_page_mask;
-	u16	usable_per_page;
-	u8	elem_unusable;
+	u16					elem_per_page;
+	u16					elem_per_page_mask;
+	u16					elem_size;
+	u16					next_page_mask;
+	u16					usable_per_page;
+	u8					elem_unusable;

-	u8	cnt_type;
+	u8					cnt_type;

 	/* Slowpath of the chain - required for initialization and destruction,
 	 * but isn't involved in regular functionality.
@@ -125,43 +127,47 @@ struct qed_chain {

 	/* Base address of a pre-allocated buffer for pbl */
 	struct {
-		dma_addr_t	p_phys_table;
-		void		*p_virt_table;
-	} pbl_sp;
+		dma_addr_t			p_phys_table;
+		void				*p_virt_table;
+	} pbl_sp;

 	/* Address of first page of the chain - the address is required
 	 * for fastpath operation [consume/produce] but only for the SINGLE
 	 * flavour which isn't considered fastpath [== SPQ].
 	 */
-	void		*p_virt_addr;
-	dma_addr_t	p_phys_addr;
+	void					*p_virt_addr;
+	dma_addr_t				p_phys_addr;

 	/* Total number of elements [for entire chain] */
-	u32	size;
+	u32					size;

-	u8	intended_use;
+	u8					intended_use;

-	bool	b_external_pbl;
+	bool					b_external_pbl;
 };

-#define QED_CHAIN_PBL_ENTRY_SIZE	(8)
-#define QED_CHAIN_PAGE_SIZE		(0x1000)
-#define ELEMS_PER_PAGE(elem_size)	(QED_CHAIN_PAGE_SIZE / (elem_size))
+#define QED_CHAIN_PBL_ENTRY_SIZE	8
+#define QED_CHAIN_PAGE_SIZE		0x1000
+
+#define ELEMS_PER_PAGE(elem_size)					     \
+	(QED_CHAIN_PAGE_SIZE / (elem_size))

-#define UNUSABLE_ELEMS_PER_PAGE(elem_size, mode) \
-	(((mode) == QED_CHAIN_MODE_NEXT_PTR) ? \
-	 (u8)(1 + ((sizeof(struct qed_chain_next) - 1) / \
-		   (elem_size))) : 0)
+#define UNUSABLE_ELEMS_PER_PAGE(elem_size, mode)			     \
+	(((mode) == QED_CHAIN_MODE_NEXT_PTR) ?				     \
+	 (u8)(1 + ((sizeof(struct qed_chain_next) - 1) / (elem_size))) :     \
+	 0)

-#define USABLE_ELEMS_PER_PAGE(elem_size, mode) \
-	((u32)(ELEMS_PER_PAGE(elem_size) - \
-	       UNUSABLE_ELEMS_PER_PAGE(elem_size, mode)))
+#define USABLE_ELEMS_PER_PAGE(elem_size, mode)				     \
+	((u32)(ELEMS_PER_PAGE(elem_size) -				     \
+	       UNUSABLE_ELEMS_PER_PAGE((elem_size), (mode))))

-#define QED_CHAIN_PAGE_CNT(elem_cnt, elem_size, mode) \
-	DIV_ROUND_UP(elem_cnt, USABLE_ELEMS_PER_PAGE(elem_size, mode))
+#define QED_CHAIN_PAGE_CNT(elem_cnt, elem_size, mode)			     \
+	DIV_ROUND_UP((elem_cnt), USABLE_ELEMS_PER_PAGE((elem_size), (mode)))

-#define is_chain_u16(p)	((p)->cnt_type == QED_CHAIN_CNT_TYPE_U16)
-#define is_chain_u32(p)	((p)->cnt_type == QED_CHAIN_CNT_TYPE_U32)
+#define is_chain_u16(p)							     \
+	((p)->cnt_type == QED_CHAIN_CNT_TYPE_U16)
+#define is_chain_u32(p)							     \
+	((p)->cnt_type == QED_CHAIN_CNT_TYPE_U32)

 /* Accessors */

 static inline u16 qed_chain_get_prod_idx(struct qed_chain *p_chain)
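The element-count macros above are pure integer math. Below is a minimal
userspace sketch of that arithmetic (not part of the patch), assuming the
4 KiB QED_CHAIN_PAGE_SIZE from the header and a 16-byte struct
qed_chain_next (regpair plus pointer); the concrete element size and count
are illustrative:

#include <stdio.h>

#define QED_CHAIN_PAGE_SIZE	0x1000	/* same 4 KiB page as the header */

int main(void)
{
	unsigned int elem_size = 64;	/* hypothetical element size */
	unsigned int next_size = 16;	/* assumed sizeof(struct qed_chain_next) */
	unsigned int per_page = QED_CHAIN_PAGE_SIZE / elem_size;
	/* next-ptr mode reserves whole elements to hold the next pointer */
	unsigned int unusable = 1 + (next_size - 1) / elem_size;
	unsigned int usable = per_page - unusable;
	unsigned int elems = 1000;
	/* QED_CHAIN_PAGE_CNT(): round up to usable elements per page */
	unsigned int pages = (elems + usable - 1) / usable;

	printf("per_page=%u unusable=%u usable=%u pages=%u\n",
	       per_page, unusable, usable, pages);
	return 0;
}

With these numbers a next-ptr chain of 1000 64-byte elements needs 16
pages: 64 elements fit per page, one of which is sacrificed for the
next-pointer element.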
From patchwork Wed Jul 22 22:10:32 2020
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 1334235
X-Patchwork-Delegate: davem@davemloft.net
From: Alexander Lobakin
To: "David S. Miller", Jakub Kicinski
CC: Alexander Lobakin, Igor Russkikh, Michal Kalderon, Ariel Elior,
    Denis Bolotin, Doug Ledford, Jason Gunthorpe, Alexei Starovoitov,
    Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
    Martin KaFai Lau, Song Liu, Yonghong Song, Andrii Nakryiko, KP Singh
Subject: [PATCH v2 net-next 02/15] qed: reformat Makefile
Date: Thu, 23 Jul 2020 01:10:32 +0300
Message-ID: <20200722221045.5436-3-alobakin@marvell.com>
In-Reply-To: <20200722221045.5436-1-alobakin@marvell.com>
References: <20200722221045.5436-1-alobakin@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

List one entry per line and sort the entries alphabetically to simplify
the addition of new ones.

Signed-off-by: Alexander Lobakin
Signed-off-by: Igor Russkikh
Signed-off-by: Michal Kalderon
---
 drivers/net/ethernet/qlogic/qed/Makefile | 36 +++++++++++++++++++-----
 1 file changed, 29 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/qlogic/qed/Makefile b/drivers/net/ethernet/qlogic/qed/Makefile
index 4176bbf2a22b..3c75e4fa9b02 100644
--- a/drivers/net/ethernet/qlogic/qed/Makefile
+++ b/drivers/net/ethernet/qlogic/qed/Makefile
@@ -3,12 +3,34 @@

 obj-$(CONFIG_QED) := qed.o

-qed-y := qed_cxt.o qed_dev.o qed_hw.o qed_init_fw_funcs.o qed_init_ops.o \
-	 qed_int.o qed_main.o qed_mcp.o qed_sp_commands.o qed_spq.o qed_l2.o \
-	 qed_selftest.o qed_dcbx.o qed_debug.o qed_ptp.o qed_mng_tlv.o
-qed-$(CONFIG_QED_SRIOV) += qed_sriov.o qed_vf.o
-qed-$(CONFIG_QED_LL2) += qed_ll2.o
-qed-$(CONFIG_QED_RDMA) += qed_roce.o qed_rdma.o qed_iwarp.o
-qed-$(CONFIG_QED_ISCSI) += qed_iscsi.o
+qed-y :=			\
+	qed_cxt.o		\
+	qed_dcbx.o		\
+	qed_debug.o		\
+	qed_dev.o		\
+	qed_hw.o		\
+	qed_init_fw_funcs.o	\
+	qed_init_ops.o		\
+	qed_int.o		\
+	qed_l2.o		\
+	qed_main.o		\
+	qed_mcp.o		\
+	qed_mng_tlv.o		\
+	qed_ptp.o		\
+	qed_selftest.o		\
+	qed_sp_commands.o	\
+	qed_spq.o
+
 qed-$(CONFIG_QED_FCOE) += qed_fcoe.o
+qed-$(CONFIG_QED_ISCSI) += qed_iscsi.o
+qed-$(CONFIG_QED_LL2) += qed_ll2.o
 qed-$(CONFIG_QED_OOO) += qed_ooo.o
+
+qed-$(CONFIG_QED_RDMA) +=	\
+	qed_iwarp.o		\
+	qed_rdma.o		\
+	qed_roce.o
+
+qed-$(CONFIG_QED_SRIOV) +=	\
+	qed_sriov.o		\
+	qed_vf.o
From patchwork Wed Jul 22 22:10:33 2020
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 1334208
X-Patchwork-Delegate: davem@davemloft.net
From: Alexander Lobakin
To: "David S. Miller", Jakub Kicinski
CC: Alexander Lobakin, Igor Russkikh, Michal Kalderon, Ariel Elior,
    Denis Bolotin, Doug Ledford, Jason Gunthorpe, Alexei Starovoitov,
    Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
    Martin KaFai Lau, Song Liu, Yonghong Song, Andrii Nakryiko, KP Singh,
    kernel test robot
Subject: [PATCH v2 net-next 03/15] qed: move chain methods to a separate file
Date: Thu, 23 Jul 2020 01:10:33 +0300
Message-ID: <20200722221045.5436-4-alobakin@marvell.com>
In-Reply-To: <20200722221045.5436-1-alobakin@marvell.com>
References: <20200722221045.5436-1-alobakin@marvell.com>
X-Mailing-List: bpf@vger.kernel.org

Move the chain allocation/freeing functions to a separate file so they
are not mixed with the hardware-related code.
Reported-by: kernel test robot
Signed-off-by: Alexander Lobakin
Signed-off-by: Igor Russkikh
Signed-off-by: Michal Kalderon
---
 drivers/net/ethernet/qlogic/qed/Makefile    |   1 +
 drivers/net/ethernet/qlogic/qed/qed_chain.c | 302 ++++++++++++++++++++
 drivers/net/ethernet/qlogic/qed/qed_dev.c   | 273 ------------------
 3 files changed, 303 insertions(+), 273 deletions(-)
 create mode 100644 drivers/net/ethernet/qlogic/qed/qed_chain.c

diff --git a/drivers/net/ethernet/qlogic/qed/Makefile b/drivers/net/ethernet/qlogic/qed/Makefile
index 3c75e4fa9b02..f947b105cf14 100644
--- a/drivers/net/ethernet/qlogic/qed/Makefile
+++ b/drivers/net/ethernet/qlogic/qed/Makefile
@@ -4,6 +4,7 @@
 obj-$(CONFIG_QED) := qed.o

 qed-y :=			\
+	qed_chain.o		\
 	qed_cxt.o		\
 	qed_dcbx.o		\
 	qed_debug.o		\
diff --git a/drivers/net/ethernet/qlogic/qed/qed_chain.c b/drivers/net/ethernet/qlogic/qed/qed_chain.c
new file mode 100644
index 000000000000..bab02ff32514
--- /dev/null
+++ b/drivers/net/ethernet/qlogic/qed/qed_chain.c
@@ -0,0 +1,302 @@
+// SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause)
+/* Copyright (c) 2020 Marvell International Ltd. */
+
+#include
+#include
+#include
+
+#include "qed_dev_api.h"
+
+static void qed_chain_free_next_ptr(struct qed_dev *cdev,
+				    struct qed_chain *chain)
+{
+	struct device *dev = &cdev->pdev->dev;
+	struct qed_chain_next *next;
+	dma_addr_t phys, phys_next;
+	void *virt, *virt_next;
+	u32 size, i;
+
+	size = chain->elem_size * chain->usable_per_page;
+	virt = chain->p_virt_addr;
+	phys = chain->p_phys_addr;
+
+	for (i = 0; i < chain->page_cnt; i++) {
+		if (!virt)
+			break;
+
+		next = virt + size;
+		virt_next = next->next_virt;
+		phys_next = HILO_DMA_REGPAIR(next->next_phys);
+
+		dma_free_coherent(dev, QED_CHAIN_PAGE_SIZE, virt, phys);
+
+		virt = virt_next;
+		phys = phys_next;
+	}
+}
+
+static void qed_chain_free_single(struct qed_dev *cdev,
+				  struct qed_chain *chain)
+{
+	if (!chain->p_virt_addr)
+		return;
+
+	dma_free_coherent(&cdev->pdev->dev, QED_CHAIN_PAGE_SIZE,
+			  chain->p_virt_addr, chain->p_phys_addr);
+}
+
+static void qed_chain_free_pbl(struct qed_dev *cdev, struct qed_chain *chain)
+{
+	struct device *dev = &cdev->pdev->dev;
+	struct addr_tbl_entry *entry;
+	u32 pbl_size, i;
+
+	if (!chain->pbl.pp_addr_tbl)
+		return;
+
+	for (i = 0; i < chain->page_cnt; i++) {
+		entry = chain->pbl.pp_addr_tbl + i;
+		if (!entry->virt_addr)
+			break;
+
+		dma_free_coherent(dev, QED_CHAIN_PAGE_SIZE, entry->virt_addr,
+				  entry->dma_map);
+	}
+
+	pbl_size = chain->page_cnt * QED_CHAIN_PBL_ENTRY_SIZE;
+
+	if (!chain->b_external_pbl)
+		dma_free_coherent(dev, pbl_size, chain->pbl_sp.p_virt_table,
+				  chain->pbl_sp.p_phys_table);
+
+	vfree(chain->pbl.pp_addr_tbl);
+	chain->pbl.pp_addr_tbl = NULL;
+}
+
+/**
+ * qed_chain_free() - Free chain DMA memory.
+ *
+ * @cdev: Main device structure.
+ * @chain: Chain to free.
+ */
+void qed_chain_free(struct qed_dev *cdev, struct qed_chain *chain)
+{
+	switch (chain->mode) {
+	case QED_CHAIN_MODE_NEXT_PTR:
+		qed_chain_free_next_ptr(cdev, chain);
+		break;
+	case QED_CHAIN_MODE_SINGLE:
+		qed_chain_free_single(cdev, chain);
+		break;
+	case QED_CHAIN_MODE_PBL:
+		qed_chain_free_pbl(cdev, chain);
+		break;
+	default:
+		break;
+	}
+}
+
+static int
+qed_chain_alloc_sanity_check(struct qed_dev *cdev,
+			     enum qed_chain_cnt_type cnt_type,
+			     size_t elem_size, u32 page_cnt)
+{
+	u64 chain_size = ELEMS_PER_PAGE(elem_size) * page_cnt;
+
+	/* The actual chain size can be larger than the maximal possible value
+	 * after rounding up the requested elements number to pages, and after
+	 * taking into account the unusuable elements (next-ptr elements).
+	 * The size of a "u16" chain can be (U16_MAX + 1) since the chain
+	 * size/capacity fields are of u32 type.
+	 */
+	switch (cnt_type) {
+	case QED_CHAIN_CNT_TYPE_U16:
+		if (chain_size > U16_MAX + 1)
+			break;
+
+		return 0;
+	case QED_CHAIN_CNT_TYPE_U32:
+		if (chain_size > U32_MAX)
+			break;
+
+		return 0;
+	default:
+		return -EINVAL;
+	}
+
+	DP_NOTICE(cdev,
+		  "The actual chain size (0x%llx) is larger than the maximal possible value\n",
+		  chain_size);
+
+	return -EINVAL;
+}
+
+static int qed_chain_alloc_next_ptr(struct qed_dev *cdev,
+				    struct qed_chain *chain)
+{
+	struct device *dev = &cdev->pdev->dev;
+	void *virt, *virt_prev = NULL;
+	dma_addr_t phys;
+	u32 i;
+
+	for (i = 0; i < chain->page_cnt; i++) {
+		virt = dma_alloc_coherent(dev, QED_CHAIN_PAGE_SIZE, &phys,
+					  GFP_KERNEL);
+		if (!virt)
+			return -ENOMEM;
+
+		if (i == 0) {
+			qed_chain_init_mem(chain, virt, phys);
+			qed_chain_reset(chain);
+		} else {
+			qed_chain_init_next_ptr_elem(chain, virt_prev, virt,
+						     phys);
+		}
+
+		virt_prev = virt;
+	}
+
+	/* Last page's next element should point to the beginning of the
+	 * chain.
+	 */
+	qed_chain_init_next_ptr_elem(chain, virt_prev, chain->p_virt_addr,
+				     chain->p_phys_addr);
+
+	return 0;
+}
+
+static int qed_chain_alloc_single(struct qed_dev *cdev,
+				  struct qed_chain *chain)
+{
+	dma_addr_t phys;
+	void *virt;
+
+	virt = dma_alloc_coherent(&cdev->pdev->dev, QED_CHAIN_PAGE_SIZE,
+				  &phys, GFP_KERNEL);
+	if (!virt)
+		return -ENOMEM;
+
+	qed_chain_init_mem(chain, virt, phys);
+	qed_chain_reset(chain);
+
+	return 0;
+}
+
+static int qed_chain_alloc_pbl(struct qed_dev *cdev, struct qed_chain *chain,
+			       struct qed_chain_ext_pbl *ext_pbl)
+{
+	struct device *dev = &cdev->pdev->dev;
+	struct addr_tbl_entry *addr_tbl;
+	dma_addr_t phys, pbl_phys;
+	void *pbl_virt;
+	u32 page_cnt, i;
+	size_t size;
+	void *virt;
+
+	page_cnt = chain->page_cnt;
+
+	size = array_size(page_cnt, sizeof(*addr_tbl));
+	if (unlikely(size == SIZE_MAX))
+		return -EOVERFLOW;
+
+	addr_tbl = vzalloc(size);
+	if (!addr_tbl)
+		return -ENOMEM;
+
+	chain->pbl.pp_addr_tbl = addr_tbl;
+
+	if (ext_pbl) {
+		size = 0;
+		pbl_virt = ext_pbl->p_pbl_virt;
+		pbl_phys = ext_pbl->p_pbl_phys;
+
+		chain->b_external_pbl = true;
+	} else {
+		size = array_size(page_cnt, QED_CHAIN_PBL_ENTRY_SIZE);
+		if (unlikely(size == SIZE_MAX))
+			return -EOVERFLOW;
+
+		pbl_virt = dma_alloc_coherent(dev, size, &pbl_phys,
+					      GFP_KERNEL);
+	}
+
+	if (!pbl_virt)
+		return -ENOMEM;
+
+	chain->pbl_sp.p_virt_table = pbl_virt;
+	chain->pbl_sp.p_phys_table = pbl_phys;
+
+	for (i = 0; i < page_cnt; i++) {
+		virt = dma_alloc_coherent(dev, QED_CHAIN_PAGE_SIZE, &phys,
+					  GFP_KERNEL);
+		if (!virt)
+			return -ENOMEM;
+
+		if (i == 0) {
+			qed_chain_init_mem(chain, virt, phys);
+			qed_chain_reset(chain);
+		}
+
+		/* Fill the PBL table with the physical address of the page */
+		*(dma_addr_t *)pbl_virt = phys;
+		pbl_virt += QED_CHAIN_PBL_ENTRY_SIZE;
+
+		/* Keep the virtual address of the page */
+		addr_tbl[i].virt_addr = virt;
+		addr_tbl[i].dma_map = phys;
+	}
+
+	return 0;
+}
+
+int qed_chain_alloc(struct qed_dev *cdev,
+		    enum qed_chain_use_mode intended_use,
+		    enum qed_chain_mode mode,
+		    enum qed_chain_cnt_type cnt_type,
+		    u32 num_elems,
+		    size_t elem_size,
+		    struct qed_chain *chain,
+		    struct qed_chain_ext_pbl *ext_pbl)
+{
+	u32 page_cnt;
+	int rc;
+
+	if (mode == QED_CHAIN_MODE_SINGLE)
+		page_cnt = 1;
+	else
+		page_cnt = QED_CHAIN_PAGE_CNT(num_elems, elem_size, mode);
+
+	rc = qed_chain_alloc_sanity_check(cdev, cnt_type, elem_size, page_cnt);
+	if (rc) {
+		DP_NOTICE(cdev,
+			  "Cannot allocate a chain with the given arguments:\n");
+		DP_NOTICE(cdev,
+			  "[use_mode %d, mode %d, cnt_type %d, num_elems %d, elem_size %zu]\n",
+			  intended_use, mode, cnt_type, num_elems, elem_size);
+		return rc;
+	}
+
+	qed_chain_init_params(chain, page_cnt, elem_size, intended_use, mode,
+			      cnt_type);
+
+	switch (mode) {
+	case QED_CHAIN_MODE_NEXT_PTR:
+		rc = qed_chain_alloc_next_ptr(cdev, chain);
+		break;
+	case QED_CHAIN_MODE_SINGLE:
+		rc = qed_chain_alloc_single(cdev, chain);
+		break;
+	case QED_CHAIN_MODE_PBL:
+		rc = qed_chain_alloc_pbl(cdev, chain, ext_pbl);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	if (!rc)
+		return 0;
+
+	qed_chain_free(cdev, chain);
+
+	return rc;
+}
diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c
index 6516a1f921da..d9c7a1a6be94 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_dev.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c
@@ -4716,279 +4716,6 @@ void qed_hw_remove(struct qed_dev *cdev)
 	qed_mcp_nvm_info_free(p_hwfn);
 }

-static void qed_chain_free_next_ptr(struct qed_dev *cdev,
-				    struct qed_chain *p_chain)
-{
-	void *p_virt = p_chain->p_virt_addr, *p_virt_next = NULL;
-	dma_addr_t p_phys = p_chain->p_phys_addr, p_phys_next = 0;
-	struct qed_chain_next *p_next;
-	u32 size, i;
-
-	if (!p_virt)
-		return;
-
-	size = p_chain->elem_size * p_chain->usable_per_page;
-
-	for (i = 0; i < p_chain->page_cnt; i++) {
-		if (!p_virt)
-			break;
-
-		p_next = (struct qed_chain_next *)((u8 *)p_virt + size);
-		p_virt_next = p_next->next_virt;
-		p_phys_next = HILO_DMA_REGPAIR(p_next->next_phys);
-
-		dma_free_coherent(&cdev->pdev->dev,
-				  QED_CHAIN_PAGE_SIZE, p_virt, p_phys);
-
-		p_virt = p_virt_next;
-		p_phys = p_phys_next;
-	}
-}
-
-static void qed_chain_free_single(struct qed_dev *cdev,
-				  struct qed_chain *p_chain)
-{
-	if (!p_chain->p_virt_addr)
-		return;
-
-	dma_free_coherent(&cdev->pdev->dev,
-			  QED_CHAIN_PAGE_SIZE,
-			  p_chain->p_virt_addr, p_chain->p_phys_addr);
-}
-
-static void qed_chain_free_pbl(struct qed_dev *cdev, struct qed_chain *p_chain)
-{
-	struct addr_tbl_entry *pp_addr_tbl = p_chain->pbl.pp_addr_tbl;
-	u32 page_cnt = p_chain->page_cnt, i, pbl_size;
-
-	if (!pp_addr_tbl)
-		return;
-
-	for (i = 0; i < page_cnt; i++) {
-		if (!pp_addr_tbl[i].virt_addr || !pp_addr_tbl[i].dma_map)
-			break;
-
-		dma_free_coherent(&cdev->pdev->dev,
-				  QED_CHAIN_PAGE_SIZE,
-				  pp_addr_tbl[i].virt_addr,
-				  pp_addr_tbl[i].dma_map);
-	}
-
-	pbl_size = page_cnt * QED_CHAIN_PBL_ENTRY_SIZE;
-
-	if (!p_chain->b_external_pbl)
-		dma_free_coherent(&cdev->pdev->dev,
-				  pbl_size,
-				  p_chain->pbl_sp.p_virt_table,
-				  p_chain->pbl_sp.p_phys_table);
-
-	vfree(p_chain->pbl.pp_addr_tbl);
-	p_chain->pbl.pp_addr_tbl = NULL;
-}
-
-void qed_chain_free(struct qed_dev *cdev, struct qed_chain *p_chain)
-{
-	switch (p_chain->mode) {
-	case QED_CHAIN_MODE_NEXT_PTR:
-		qed_chain_free_next_ptr(cdev, p_chain);
-		break;
-	case QED_CHAIN_MODE_SINGLE:
-		qed_chain_free_single(cdev, p_chain);
-		break;
-	case QED_CHAIN_MODE_PBL:
-		qed_chain_free_pbl(cdev, p_chain);
-		break;
-	}
-}
-
-static int
-qed_chain_alloc_sanity_check(struct qed_dev *cdev,
-			     enum qed_chain_cnt_type cnt_type,
-			     size_t elem_size, u32 page_cnt)
-{
-	u64 chain_size = ELEMS_PER_PAGE(elem_size) * page_cnt;
-
-	/* The actual chain size can be larger than the maximal possible value
-	 * after rounding up the requested elements number to pages, and after
-	 * taking into acount the unusuable elements (next-ptr elements).
-	 * The size of a "u16" chain can be (U16_MAX + 1) since the chain
-	 * size/capacity fields are of a u32 type.
-	 */
-	if ((cnt_type == QED_CHAIN_CNT_TYPE_U16 &&
-	     chain_size > ((u32)U16_MAX + 1)) ||
-	    (cnt_type == QED_CHAIN_CNT_TYPE_U32 && chain_size > U32_MAX)) {
-		DP_NOTICE(cdev,
-			  "The actual chain size (0x%llx) is larger than the maximal possible value\n",
-			  chain_size);
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static int
-qed_chain_alloc_next_ptr(struct qed_dev *cdev, struct qed_chain *p_chain)
-{
-	void *p_virt = NULL, *p_virt_prev = NULL;
-	dma_addr_t p_phys = 0;
-	u32 i;
-
-	for (i = 0; i < p_chain->page_cnt; i++) {
-		p_virt = dma_alloc_coherent(&cdev->pdev->dev,
-					    QED_CHAIN_PAGE_SIZE,
-					    &p_phys, GFP_KERNEL);
-		if (!p_virt)
-			return -ENOMEM;
-
-		if (i == 0) {
-			qed_chain_init_mem(p_chain, p_virt, p_phys);
-			qed_chain_reset(p_chain);
-		} else {
-			qed_chain_init_next_ptr_elem(p_chain, p_virt_prev,
-						     p_virt, p_phys);
-		}
-
-		p_virt_prev = p_virt;
-	}
-	/* Last page's next element should point to the beginning of the
-	 * chain.
-	 */
-	qed_chain_init_next_ptr_elem(p_chain, p_virt_prev,
-				     p_chain->p_virt_addr,
-				     p_chain->p_phys_addr);
-
-	return 0;
-}
-
-static int
-qed_chain_alloc_single(struct qed_dev *cdev, struct qed_chain *p_chain)
-{
-	dma_addr_t p_phys = 0;
-	void *p_virt = NULL;
-
-	p_virt = dma_alloc_coherent(&cdev->pdev->dev,
-				    QED_CHAIN_PAGE_SIZE, &p_phys, GFP_KERNEL);
-	if (!p_virt)
-		return -ENOMEM;
-
-	qed_chain_init_mem(p_chain, p_virt, p_phys);
-	qed_chain_reset(p_chain);
-
-	return 0;
-}
-
-static int
-qed_chain_alloc_pbl(struct qed_dev *cdev,
-		    struct qed_chain *p_chain,
-		    struct qed_chain_ext_pbl *ext_pbl)
-{
-	u32 page_cnt = p_chain->page_cnt, size, i;
-	dma_addr_t p_phys = 0, p_pbl_phys = 0;
-	struct addr_tbl_entry *pp_addr_tbl;
-	u8 *p_pbl_virt = NULL;
-	void *p_virt = NULL;
-
-	size = page_cnt * sizeof(*pp_addr_tbl);
-	pp_addr_tbl = vzalloc(size);
-	if (!pp_addr_tbl)
-		return -ENOMEM;
-
-	/* The allocation of the PBL table is done with its full size, since it
-	 * is expected to be successive.
-	 * qed_chain_init_pbl_mem() is called even in a case of an allocation
-	 * failure, since tbl was previously allocated, and it
-	 * should be saved to allow its freeing during the error flow.
-	 */
-	size = page_cnt * QED_CHAIN_PBL_ENTRY_SIZE;
-
-	if (!ext_pbl) {
-		p_pbl_virt = dma_alloc_coherent(&cdev->pdev->dev,
-						size, &p_pbl_phys, GFP_KERNEL);
-	} else {
-		p_pbl_virt = ext_pbl->p_pbl_virt;
-		p_pbl_phys = ext_pbl->p_pbl_phys;
-		p_chain->b_external_pbl = true;
-	}
-
-	qed_chain_init_pbl_mem(p_chain, p_pbl_virt, p_pbl_phys, pp_addr_tbl);
-	if (!p_pbl_virt)
-		return -ENOMEM;
-
-	for (i = 0; i < page_cnt; i++) {
-		p_virt = dma_alloc_coherent(&cdev->pdev->dev,
-					    QED_CHAIN_PAGE_SIZE,
-					    &p_phys, GFP_KERNEL);
-		if (!p_virt)
-			return -ENOMEM;
-
-		if (i == 0) {
-			qed_chain_init_mem(p_chain, p_virt, p_phys);
-			qed_chain_reset(p_chain);
-		}
-
-		/* Fill the PBL table with the physical address of the page */
-		*(dma_addr_t *)p_pbl_virt = p_phys;
-		/* Keep the virtual address of the page */
-		p_chain->pbl.pp_addr_tbl[i].virt_addr = p_virt;
-		p_chain->pbl.pp_addr_tbl[i].dma_map = p_phys;
-
-		p_pbl_virt += QED_CHAIN_PBL_ENTRY_SIZE;
-	}
-
-	return 0;
-}
-
-int qed_chain_alloc(struct qed_dev *cdev,
-		    enum qed_chain_use_mode intended_use,
-		    enum qed_chain_mode mode,
-		    enum qed_chain_cnt_type cnt_type,
-		    u32 num_elems,
-		    size_t elem_size,
-		    struct qed_chain *p_chain,
-		    struct qed_chain_ext_pbl *ext_pbl)
-{
-	u32 page_cnt;
-	int rc = 0;
-
-	if (mode == QED_CHAIN_MODE_SINGLE)
-		page_cnt = 1;
-	else
-		page_cnt = QED_CHAIN_PAGE_CNT(num_elems, elem_size, mode);
-
-	rc = qed_chain_alloc_sanity_check(cdev, cnt_type, elem_size, page_cnt);
-	if (rc) {
-		DP_NOTICE(cdev,
-			  "Cannot allocate a chain with the given arguments:\n");
-		DP_NOTICE(cdev,
-			  "[use_mode %d, mode %d, cnt_type %d, num_elems %d, elem_size %zu]\n",
-			  intended_use, mode, cnt_type, num_elems, elem_size);
-		return rc;
-	}
-
-	qed_chain_init_params(p_chain, page_cnt, (u8) elem_size, intended_use,
-			      mode, cnt_type);
-
-	switch (mode) {
-	case QED_CHAIN_MODE_NEXT_PTR:
-		rc = qed_chain_alloc_next_ptr(cdev, p_chain);
-		break;
-	case QED_CHAIN_MODE_SINGLE:
-		rc = qed_chain_alloc_single(cdev, p_chain);
-		break;
-	case QED_CHAIN_MODE_PBL:
-		rc = qed_chain_alloc_pbl(cdev, p_chain, ext_pbl);
-		break;
-	}
-	if (rc)
-		goto nomem;
-
-	return 0;
-
-nomem:
-	qed_chain_free(cdev, p_chain);
-	return rc;
-}
-
 int qed_fw_l2_queue(struct qed_hwfn *p_hwfn, u16 src_id, u16 *dst_id)
 {
 	if (src_id >= RESC_NUM(p_hwfn, QED_L2_QUEUE)) {
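A hedged usage sketch, not code from the series: a hypothetical caller of
the API this file now owns. The qed_chain_alloc()/qed_chain_free()
signatures are taken from the diff above; the element count (256), the
element size (64 bytes) and the function name are made up:

/* Allocate a 16-bit-counter PBL chain, use it, and free it.
 * "cdev" is an already-probed struct qed_dev.
 */
static int my_setup_chain(struct qed_dev *cdev, struct qed_chain *chain)
{
	int rc;

	rc = qed_chain_alloc(cdev, QED_CHAIN_USE_TO_CONSUME_PRODUCE,
			     QED_CHAIN_MODE_PBL, QED_CHAIN_CNT_TYPE_U16,
			     256, 64, chain, NULL /* no external PBL */);
	if (rc)
		return rc;

	/* ... produce/consume elements ... */

	qed_chain_free(cdev, chain);

	return 0;
}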
From patchwork Wed Jul 22 22:10:34 2020
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 1334210
X-Patchwork-Delegate: davem@davemloft.net
From: Alexander Lobakin
To: "David S. Miller", Jakub Kicinski
CC: Alexander Lobakin, Igor Russkikh, Michal Kalderon, Ariel Elior,
    Denis Bolotin, Doug Ledford, Jason Gunthorpe, Alexei Starovoitov,
    Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
    Martin KaFai Lau, Song Liu, Yonghong Song, Andrii Nakryiko, KP Singh
Subject: [PATCH v2 net-next 04/15] qed: prevent possible double-frees of the chains
Date: Thu, 23 Jul 2020 01:10:34 +0300
Message-ID: <20200722221045.5436-5-alobakin@marvell.com>
In-Reply-To: <20200722221045.5436-1-alobakin@marvell.com>
References: <20200722221045.5436-1-alobakin@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

Zero-initialize the chain in qed_chain_free(), so that it cannot be
freed twice and provoke undefined behaviour.

Signed-off-by: Alexander Lobakin
Signed-off-by: Igor Russkikh
Signed-off-by: Michal Kalderon
---
 drivers/net/ethernet/qlogic/qed/qed_chain.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/qlogic/qed/qed_chain.c b/drivers/net/ethernet/qlogic/qed/qed_chain.c
index bab02ff32514..917b783433f7 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_chain.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_chain.c
@@ -92,8 +92,10 @@ void qed_chain_free(struct qed_dev *cdev, struct qed_chain *chain)
 		qed_chain_free_pbl(cdev, chain);
 		break;
 	default:
-		break;
+		return;
 	}
+
+	qed_chain_init_mem(chain, NULL, 0);
 }

 static int
From patchwork Wed Jul 22 22:10:35 2020
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 1334211
X-Patchwork-Delegate: davem@davemloft.net
From: Alexander Lobakin
To: "David S. Miller", Jakub Kicinski
CC: Alexander Lobakin, Igor Russkikh, Michal Kalderon, Ariel Elior,
    Denis Bolotin, Doug Ledford, Jason Gunthorpe, Alexei Starovoitov,
    Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
    Martin KaFai Lau, Song Liu, Yonghong Song, Andrii Nakryiko, KP Singh
Subject: [PATCH v2 net-next 05/15] qed: sanitize PBL chains allocation
Date: Thu, 23 Jul 2020 01:10:35 +0300
Message-ID: <20200722221045.5436-6-alobakin@marvell.com>
In-Reply-To: <20200722221045.5436-1-alobakin@marvell.com>
References: <20200722221045.5436-1-alobakin@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

PBL chain elements are actually DMA addresses stored as __le64, but
currently their size is hardcoded to 8 bytes, and DMA addresses are
assigned via a cast to the variable-sized dma_addr_t without any
byte-order conversion. Change the type of the pbl_virt array to match
the actual element type, add a new field to store the size of the
allocated DMA memory, and sanitize the element assignment.

Misc: give more logical names to the members of the qed_chain::pbl_sp
embedded struct.
Signed-off-by: Alexander Lobakin
Signed-off-by: Igor Russkikh
Signed-off-by: Michal Kalderon
---
 drivers/net/ethernet/qlogic/qed/qed_chain.c   | 21 +++++++++----------
 .../net/ethernet/qlogic/qed/qed_sp_commands.c |  4 ++--
 include/linux/qed/qed_chain.h                 | 16 +++++++-------
 3 files changed, 20 insertions(+), 21 deletions(-)

diff --git a/drivers/net/ethernet/qlogic/qed/qed_chain.c b/drivers/net/ethernet/qlogic/qed/qed_chain.c
index 917b783433f7..a9ff15b9d8c0 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_chain.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_chain.c
@@ -49,7 +49,7 @@ static void qed_chain_free_pbl(struct qed_dev *cdev, struct qed_chain *chain)
 {
 	struct device *dev = &cdev->pdev->dev;
 	struct addr_tbl_entry *entry;
-	u32 pbl_size, i;
+	u32 i;

 	if (!chain->pbl.pp_addr_tbl)
 		return;
@@ -63,11 +63,10 @@ static void qed_chain_free_pbl(struct qed_dev *cdev, struct qed_chain *chain)
 			  entry->dma_map);
 	}

-	pbl_size = chain->page_cnt * QED_CHAIN_PBL_ENTRY_SIZE;
-
 	if (!chain->b_external_pbl)
-		dma_free_coherent(dev, pbl_size, chain->pbl_sp.p_virt_table,
-				  chain->pbl_sp.p_phys_table);
+		dma_free_coherent(dev, chain->pbl_sp.table_size,
+				  chain->pbl_sp.table_virt,
+				  chain->pbl_sp.table_phys);

 	vfree(chain->pbl.pp_addr_tbl);
 	chain->pbl.pp_addr_tbl = NULL;
@@ -190,7 +189,7 @@ static int qed_chain_alloc_pbl(struct qed_dev *cdev, struct qed_chain *chain,
 	struct device *dev = &cdev->pdev->dev;
 	struct addr_tbl_entry *addr_tbl;
 	dma_addr_t phys, pbl_phys;
-	void *pbl_virt;
+	__le64 *pbl_virt;
 	u32 page_cnt, i;
 	size_t size;
 	void *virt;
@@ -214,7 +213,7 @@ static int qed_chain_alloc_pbl(struct qed_dev *cdev, struct qed_chain *chain,

 		chain->b_external_pbl = true;
 	} else {
-		size = array_size(page_cnt, QED_CHAIN_PBL_ENTRY_SIZE);
+		size = array_size(page_cnt, sizeof(*pbl_virt));
 		if (unlikely(size == SIZE_MAX))
 			return -EOVERFLOW;

@@ -225,8 +224,9 @@ static int qed_chain_alloc_pbl(struct qed_dev *cdev, struct qed_chain *chain,
 	if (!pbl_virt)
 		return -ENOMEM;

-	chain->pbl_sp.p_virt_table = pbl_virt;
-	chain->pbl_sp.p_phys_table = pbl_phys;
+	chain->pbl_sp.table_virt = pbl_virt;
+	chain->pbl_sp.table_phys = pbl_phys;
+	chain->pbl_sp.table_size = size;

 	for (i = 0; i < page_cnt; i++) {
 		virt = dma_alloc_coherent(dev, QED_CHAIN_PAGE_SIZE, &phys,
@@ -240,8 +240,7 @@ static int qed_chain_alloc_pbl(struct qed_dev *cdev, struct qed_chain *chain,
 		}

 		/* Fill the PBL table with the physical address of the page */
-		*(dma_addr_t *)pbl_virt = phys;
-		pbl_virt += QED_CHAIN_PBL_ENTRY_SIZE;
+		pbl_virt[i] = cpu_to_le64(phys);

 		/* Keep the virtual address of the page */
 		addr_tbl[i].virt_addr = virt;
diff --git a/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c b/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c
index 8142f5669b26..aa71adcf31ee 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c
@@ -366,11 +366,11 @@ int qed_sp_pf_start(struct qed_hwfn *p_hwfn,

 	/* Place EQ address in RAMROD */
 	DMA_REGPAIR_LE(p_ramrod->event_ring_pbl_addr,
-		       p_hwfn->p_eq->chain.pbl_sp.p_phys_table);
+		       qed_chain_get_pbl_phys(&p_hwfn->p_eq->chain));
 	page_cnt = (u8)qed_chain_get_page_cnt(&p_hwfn->p_eq->chain);
 	p_ramrod->event_ring_num_pages = page_cnt;
 	DMA_REGPAIR_LE(p_ramrod->consolid_q_pbl_addr,
-		       p_hwfn->p_consq->chain.pbl_sp.p_phys_table);
+		       qed_chain_get_pbl_phys(&p_hwfn->p_consq->chain));

 	qed_tunn_set_pf_start_params(p_hwfn, p_tunn, &p_ramrod->tunnel_config);
diff --git a/include/linux/qed/qed_chain.h b/include/linux/qed/qed_chain.h
index 087073517c09..265e0b671a5c 100644
--- a/include/linux/qed/qed_chain.h
+++ b/include/linux/qed/qed_chain.h
@@ -127,8 +127,9 @@ struct qed_chain {

 	/* Base address of a pre-allocated buffer for pbl */
 	struct {
-		dma_addr_t			p_phys_table;
-		void				*p_virt_table;
+		__le64				*table_virt;
+		dma_addr_t			table_phys;
+		size_t				table_size;
 	} pbl_sp;

 	/* Address of first page of the chain - the address is required
@@ -146,7 +147,6 @@ struct qed_chain {
 	bool					b_external_pbl;
 };

-#define QED_CHAIN_PBL_ENTRY_SIZE	8
 #define QED_CHAIN_PAGE_SIZE		0x1000

 #define ELEMS_PER_PAGE(elem_size)					     \
@@ -236,7 +236,7 @@ static inline u32 qed_chain_get_page_cnt(struct qed_chain *p_chain)

 static inline dma_addr_t qed_chain_get_pbl_phys(struct qed_chain *p_chain)
 {
-	return p_chain->pbl_sp.p_phys_table;
+	return p_chain->pbl_sp.table_phys;
 }

 /**
@@ -527,8 +527,8 @@ static inline void qed_chain_init_params(struct qed_chain *p_chain,
 	p_chain->capacity = p_chain->usable_per_page * page_cnt;
 	p_chain->size = p_chain->elem_per_page * page_cnt;

-	p_chain->pbl_sp.p_phys_table = 0;
-	p_chain->pbl_sp.p_virt_table = NULL;
+	p_chain->pbl_sp.table_phys = 0;
+	p_chain->pbl_sp.table_virt = NULL;
 	p_chain->pbl.pp_addr_tbl = NULL;
 }

@@ -569,8 +569,8 @@ static inline void qed_chain_init_pbl_mem(struct qed_chain *p_chain,
 					  dma_addr_t p_phys_pbl,
 					  struct addr_tbl_entry *pp_addr_tbl)
 {
-	p_chain->pbl_sp.p_phys_table = p_phys_pbl;
-	p_chain->pbl_sp.p_virt_table = p_virt_pbl;
+	p_chain->pbl_sp.table_phys = p_phys_pbl;
+	p_chain->pbl_sp.table_virt = p_virt_pbl;
 	p_chain->pbl.pp_addr_tbl = pp_addr_tbl;
 }
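A hedged userspace analogue of why the explicit cpu_to_le64() above
matters: with a 32-bit dma_addr_t, the old cast-based store writes only
four CPU-endian bytes of an eight-byte little-endian table entry. Here
glibc's htole64() plays the role of cpu_to_le64(); the address value is
made up:

#include <endian.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	uint32_t phys32 = 0x12345678u;	/* a 32-bit "dma_addr_t" */
	unsigned char entry[8] = { 0 };	/* one 64-bit LE PBL table slot */
	uint64_t le;

	/* Old, broken pattern: only 4 CPU-endian bytes are written. */
	memcpy(entry, &phys32, sizeof(phys32));

	/* Fixed pattern: widen, convert, store all 8 bytes little-endian. */
	le = htole64((uint64_t)phys32);
	memcpy(entry, &le, sizeof(le));

	printf("first byte: %#x (LSB of the address, as the device expects)\n",
	       entry[0]);
	return 0;
}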
From patchwork Wed Jul 22 22:10:36 2020
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 1334212
X-Patchwork-Delegate: davem@davemloft.net
From: Alexander Lobakin
To: "David S. Miller", Jakub Kicinski
CC: Alexander Lobakin, Igor Russkikh, Michal Kalderon, Ariel Elior,
    Denis Bolotin, Doug Ledford, Jason Gunthorpe, Alexei Starovoitov,
    Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
    Martin KaFai Lau, Song Liu, Yonghong Song, Andrii Nakryiko, KP Singh
Subject: [PATCH v2 net-next 06/15] qed: move chain initialization inlines next to allocation functions
Date: Thu, 23 Jul 2020 01:10:36 +0300
Message-ID: <20200722221045.5436-7-alobakin@marvell.com>
In-Reply-To: <20200722221045.5436-1-alobakin@marvell.com>
References: <20200722221045.5436-1-alobakin@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

The qed_chain_init*() helpers are used in a single file/place on the
"cold" path only, so they can be uninlined and moved next to their
call sites.
Signed-off-by: Alexander Lobakin
Signed-off-by: Igor Russkikh
Signed-off-by: Michal Kalderon
---
 drivers/net/ethernet/qlogic/qed/qed_chain.c |  47 ++++++++
 include/linux/qed/qed_chain.h               | 112 --------------------
 2 files changed, 47 insertions(+), 112 deletions(-)

diff --git a/drivers/net/ethernet/qlogic/qed/qed_chain.c b/drivers/net/ethernet/qlogic/qed/qed_chain.c
index a9ff15b9d8c0..b60ec3e4654c 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_chain.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_chain.c
@@ -7,6 +7,53 @@

 #include "qed_dev_api.h"

+static void qed_chain_init_params(struct qed_chain *chain,
+				  u32 page_cnt, u8 elem_size,
+				  enum qed_chain_use_mode intended_use,
+				  enum qed_chain_mode mode,
+				  enum qed_chain_cnt_type cnt_type)
+{
+	memset(chain, 0, sizeof(*chain));
+
+	chain->elem_size = elem_size;
+	chain->intended_use = intended_use;
+	chain->mode = mode;
+	chain->cnt_type = cnt_type;
+
+	chain->elem_per_page = ELEMS_PER_PAGE(elem_size);
+	chain->usable_per_page = USABLE_ELEMS_PER_PAGE(elem_size, mode);
+	chain->elem_unusable = UNUSABLE_ELEMS_PER_PAGE(elem_size, mode);
+
+	chain->elem_per_page_mask = chain->elem_per_page - 1;
+	chain->next_page_mask = chain->usable_per_page &
+				chain->elem_per_page_mask;
+
+	chain->page_cnt = page_cnt;
+	chain->capacity = chain->usable_per_page * page_cnt;
+	chain->size = chain->elem_per_page * page_cnt;
+}
+
+static void qed_chain_init_next_ptr_elem(const struct qed_chain *chain,
+					 void *virt_curr, void *virt_next,
+					 dma_addr_t phys_next)
+{
+	struct qed_chain_next *next;
+	u32 size;
+
+	size = chain->elem_size * chain->usable_per_page;
+	next = virt_curr + size;
+
+	DMA_REGPAIR_LE(next->next_phys, phys_next);
+	next->next_virt = virt_next;
+}
+
+static void qed_chain_init_mem(struct qed_chain *chain, void *virt_addr,
+			       dma_addr_t phys_addr)
+{
+	chain->p_virt_addr = virt_addr;
+	chain->p_phys_addr = phys_addr;
+}
+
 static void qed_chain_free_next_ptr(struct qed_dev *cdev,
 				    struct qed_chain *chain)
 {
diff --git a/include/linux/qed/qed_chain.h b/include/linux/qed/qed_chain.h
index 265e0b671a5c..a0d83095dc73 100644
--- a/include/linux/qed/qed_chain.h
+++ b/include/linux/qed/qed_chain.h
@@ -490,118 +490,6 @@ static inline void qed_chain_reset(struct qed_chain *p_chain)
 	}
 }

-/**
- * @brief qed_chain_init - Initalizes a basic chain struct
- *
- * @param p_chain
- * @param p_virt_addr
- * @param p_phys_addr	physical address of allocated buffer's beginning
- * @param page_cnt	number of pages in the allocated buffer
- * @param elem_size	size of each element in the chain
- * @param intended_use
- * @param mode
- */
-static inline void qed_chain_init_params(struct qed_chain *p_chain,
-					 u32 page_cnt,
-					 u8 elem_size,
-					 enum qed_chain_use_mode intended_use,
-					 enum qed_chain_mode mode,
-					 enum qed_chain_cnt_type cnt_type)
-{
-	/* chain fixed parameters */
-	p_chain->p_virt_addr = NULL;
-	p_chain->p_phys_addr = 0;
-	p_chain->elem_size = elem_size;
-	p_chain->intended_use = (u8)intended_use;
-	p_chain->mode = mode;
-	p_chain->cnt_type = (u8)cnt_type;
-
-	p_chain->elem_per_page = ELEMS_PER_PAGE(elem_size);
-	p_chain->usable_per_page = USABLE_ELEMS_PER_PAGE(elem_size, mode);
-	p_chain->elem_per_page_mask = p_chain->elem_per_page - 1;
-	p_chain->elem_unusable = UNUSABLE_ELEMS_PER_PAGE(elem_size, mode);
-	p_chain->next_page_mask = (p_chain->usable_per_page &
-				   p_chain->elem_per_page_mask);
-
-	p_chain->page_cnt = page_cnt;
-	p_chain->capacity = p_chain->usable_per_page * page_cnt;
-	p_chain->size = p_chain->elem_per_page * page_cnt;
-
-	p_chain->pbl_sp.table_phys = 0;
-	p_chain->pbl_sp.table_virt = NULL;
-	p_chain->pbl.pp_addr_tbl = NULL;
-}
-
-/**
- * @brief qed_chain_init_mem -
- *
- * Initalizes a basic chain struct with its chain buffers
- *
- * @param p_chain
- * @param p_virt_addr	virtual address of allocated buffer's beginning
- * @param p_phys_addr	physical address of allocated buffer's beginning
- *
- */
-static inline void qed_chain_init_mem(struct qed_chain *p_chain,
-				      void *p_virt_addr, dma_addr_t p_phys_addr)
-{
-	p_chain->p_virt_addr = p_virt_addr;
-	p_chain->p_phys_addr = p_phys_addr;
-}
-
-/**
- * @brief qed_chain_init_pbl_mem -
- *
- * Initalizes a basic chain struct with its pbl buffers
- *
- * @param p_chain
- * @param p_virt_pbl	pointer to a pre allocated side table which will hold
- *			virtual page addresses.
- * @param p_phys_pbl	pointer to a pre-allocated side table which will hold
- *			physical page addresses.
- * @param pp_virt_addr_tbl
- *			pointer to a pre-allocated side table which will hold
- *			the virtual addresses of the chain pages.
- *
- */
-static inline void qed_chain_init_pbl_mem(struct qed_chain *p_chain,
-					  void *p_virt_pbl,
-					  dma_addr_t p_phys_pbl,
-					  struct addr_tbl_entry *pp_addr_tbl)
-{
-	p_chain->pbl_sp.table_phys = p_phys_pbl;
-	p_chain->pbl_sp.table_virt = p_virt_pbl;
-	p_chain->pbl.pp_addr_tbl = pp_addr_tbl;
-}
-
-/**
- * @brief qed_chain_init_next_ptr_elem -
- *
- * Initalizes a next pointer element
- *
- * @param p_chain
- * @param p_virt_curr	virtual address of a chain page of which the next
- *			pointer element is initialized
- * @param p_virt_next	virtual address of the next chain page
- * @param p_phys_next	physical address of the next chain page
- *
- */
-static inline void
-qed_chain_init_next_ptr_elem(struct qed_chain *p_chain,
-			     void *p_virt_curr,
-			     void *p_virt_next, dma_addr_t p_phys_next)
-{
-	struct qed_chain_next *p_next;
-	u32 size;
-
-	size = p_chain->elem_size * p_chain->usable_per_page;
-	p_next = (struct qed_chain_next *)((u8 *)p_virt_curr + size);
-
-	DMA_REGPAIR_LE(p_next->next_phys, p_phys_next);
-
-	p_next->next_virt = p_virt_next;
-}
-
 /**
  * @brief qed_chain_get_last_elem -
  *
(8.16.0.42/8.16.0.42) with SMTP id 06MM6g1a019807; Wed, 22 Jul 2020 15:11:56 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=SVerhC+9GGCE5gDkusm4mu13mYnVROxSnyAfziKMekE=; b=e9jeHI8Eov7Zs2Gwygqzmai4Gh9mDnU6ymVAuEXra7CtntjwyJ0qtGt+AHwemuISADPu VbhMMdF6lxzVF9oArRTa7SB3BkXYVlOEkvvbvPK6qkRTzS15bGsX5DIT46eeVNO8r654 VNXGpxZTQQY1t1Y2JcZvs6dHpihS07qcXiYH1GgmISatRBM6eBGA4QrpCJ8lngOErMYS xb4BFpA2u+l3JVSzerqF3WWE7YbirjJuebbgdtA74bepGJwZ4kgtMHPAb9OLJiKW2s3Q l1TlIbigo/DjusyMq92XDaA4e8+T1Rs13JaW3strRiSNZzn+JMvX6OZhQ7yzFv4W40K5 4w== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0b-0016f401.pphosted.com with ESMTP id 32c0kkt0mj-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Wed, 22 Jul 2020 15:11:56 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 22 Jul 2020 15:11:54 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 22 Jul 2020 15:11:53 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Wed, 22 Jul 2020 15:11:53 -0700 Received: from NN-LT0049.marvell.com (NN-LT0049.marvell.com [10.193.54.6]) by maili.marvell.com (Postfix) with ESMTP id 053243F7048; Wed, 22 Jul 2020 15:11:46 -0700 (PDT) From: Alexander Lobakin To: "David S. Miller" , Jakub Kicinski CC: Alexander Lobakin , Igor Russkikh , Michal Kalderon , "Ariel Elior" , Denis Bolotin , "Doug Ledford" , Jason Gunthorpe , "Alexei Starovoitov" , Daniel Borkmann , "Jesper Dangaard Brouer" , John Fastabend , Martin KaFai Lau , Song Liu , "Yonghong Song" , Andrii Nakryiko , KP Singh , , , , , Subject: [PATCH v2 net-next 07/15] qed: simplify initialization of the chains with an external PBL Date: Thu, 23 Jul 2020 01:10:37 +0300 Message-ID: <20200722221045.5436-8-alobakin@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200722221045.5436-1-alobakin@marvell.com> References: <20200722221045.5436-1-alobakin@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.235, 18.0.687 definitions=2020-07-22_16:2020-07-22,2020-07-22 signatures=0 Sender: bpf-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org Fill the PBL table parameters for chains with external PBL data earlier, in qed_chain_init_params(), rather than during the allocation itself. This simplifies the allocation code and allows extending struct ext_pbl for other chain types.
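For illustration, a minimal standalone sketch of the resulting flow (simplified stand-in types: struct ext_pbl, struct chain and both helpers below are illustrative, not the driver's real DMA-aware code). The external table is captured once when the chain parameters are initialized, so the allocation path only has to test a single flag instead of re-deriving everything from an extra argument:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct ext_pbl {			/* caller-provided table, optional */
	void *virt;
	unsigned long phys;
};

struct chain {
	void *pbl_virt;
	unsigned long pbl_phys;
	bool external_pbl;
};

/* capture the external PBL up front, mirroring what the reworked
 * qed_chain_init_params() above now does
 */
static void chain_init(struct chain *c, const struct ext_pbl *ext)
{
	c->pbl_virt = NULL;
	c->pbl_phys = 0;
	c->external_pbl = false;

	if (ext && ext->virt) {
		c->pbl_virt = ext->virt;
		c->pbl_phys = ext->phys;
		c->external_pbl = true;
	}
}

/* the allocator no longer needs an ext_pbl argument at all */
static int chain_alloc_pbl(struct chain *c)
{
	if (c->external_pbl)		/* table was already set at init time */
		return 0;

	/* stand-in for the real DMA table allocation */
	c->pbl_virt = calloc(64, sizeof(unsigned long));
	return c->pbl_virt ? 0 : -1;
}

int main(void)
{
	struct chain c;

	chain_init(&c, NULL);
	printf("internal PBL alloc: %d\n", chain_alloc_pbl(&c));
	free(c.pbl_virt);
	return 0;
}

Capturing the table at init time is also what enables the follow-up patch in this series, which folds struct qed_chain_ext_pbl into a single init-params struct.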
Signed-off-by: Alexander Lobakin Signed-off-by: Igor Russkikh Signed-off-by: Michal Kalderon --- drivers/net/ethernet/qlogic/qed/qed_chain.c | 37 +++++++++++---------- 1 file changed, 19 insertions(+), 18 deletions(-) diff --git a/drivers/net/ethernet/qlogic/qed/qed_chain.c b/drivers/net/ethernet/qlogic/qed/qed_chain.c index b60ec3e4654c..6effee3b50f4 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_chain.c +++ b/drivers/net/ethernet/qlogic/qed/qed_chain.c @@ -11,7 +11,8 @@ static void qed_chain_init_params(struct qed_chain *chain, u32 page_cnt, u8 elem_size, enum qed_chain_use_mode intended_use, enum qed_chain_mode mode, - enum qed_chain_cnt_type cnt_type) + enum qed_chain_cnt_type cnt_type, + const struct qed_chain_ext_pbl *ext_pbl) { memset(chain, 0, sizeof(*chain)); @@ -31,6 +32,13 @@ static void qed_chain_init_params(struct qed_chain *chain, chain->page_cnt = page_cnt; chain->capacity = chain->usable_per_page * page_cnt; chain->size = chain->elem_per_page * page_cnt; + + if (ext_pbl && ext_pbl->p_pbl_virt) { + chain->pbl_sp.table_virt = ext_pbl->p_pbl_virt; + chain->pbl_sp.table_phys = ext_pbl->p_pbl_phys; + + chain->b_external_pbl = true; + } } static void qed_chain_init_next_ptr_elem(const struct qed_chain *chain, @@ -230,8 +238,7 @@ static int qed_chain_alloc_single(struct qed_dev *cdev, return 0; } -static int qed_chain_alloc_pbl(struct qed_dev *cdev, struct qed_chain *chain, - struct qed_chain_ext_pbl *ext_pbl) +static int qed_chain_alloc_pbl(struct qed_dev *cdev, struct qed_chain *chain) { struct device *dev = &cdev->pdev->dev; struct addr_tbl_entry *addr_tbl; @@ -253,21 +260,14 @@ static int qed_chain_alloc_pbl(struct qed_dev *cdev, struct qed_chain *chain, chain->pbl.pp_addr_tbl = addr_tbl; - if (ext_pbl) { - size = 0; - pbl_virt = ext_pbl->p_pbl_virt; - pbl_phys = ext_pbl->p_pbl_phys; + if (chain->b_external_pbl) + goto alloc_pages; - chain->b_external_pbl = true; - } else { - size = array_size(page_cnt, sizeof(*pbl_virt)); - if (unlikely(size == SIZE_MAX)) - return -EOVERFLOW; - - pbl_virt = dma_alloc_coherent(dev, size, &pbl_phys, - GFP_KERNEL); - } + size = array_size(page_cnt, sizeof(*pbl_virt)); + if (unlikely(size == SIZE_MAX)) + return -EOVERFLOW; + pbl_virt = dma_alloc_coherent(dev, size, &pbl_phys, GFP_KERNEL); if (!pbl_virt) return -ENOMEM; @@ -275,6 +275,7 @@ static int qed_chain_alloc_pbl(struct qed_dev *cdev, struct qed_chain *chain, chain->pbl_sp.table_phys = pbl_phys; chain->pbl_sp.table_size = size; +alloc_pages: for (i = 0; i < page_cnt; i++) { virt = dma_alloc_coherent(dev, QED_CHAIN_PAGE_SIZE, &phys, GFP_KERNEL); @@ -325,7 +326,7 @@ int qed_chain_alloc(struct qed_dev *cdev, } qed_chain_init_params(chain, page_cnt, elem_size, intended_use, mode, - cnt_type); + cnt_type, ext_pbl); switch (mode) { case QED_CHAIN_MODE_NEXT_PTR: @@ -335,7 +336,7 @@ int qed_chain_alloc(struct qed_dev *cdev, rc = qed_chain_alloc_single(cdev, chain); break; case QED_CHAIN_MODE_PBL: - rc = qed_chain_alloc_pbl(cdev, chain, ext_pbl); + rc = qed_chain_alloc_pbl(cdev, chain); break; default: return -EINVAL; From patchwork Wed Jul 22 22:10:38 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 1334232 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; 
helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=marvell.com Authentication-Results: ozlabs.org; dkim=fail reason="key not found in DNS" header.d=marvell.com header.i=@marvell.com header.a=rsa-sha256 header.s=pfpt0818 header.b=TKGDLSJb; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4BBqVH0xvLz9sTm for ; Thu, 23 Jul 2020 08:13:27 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2387541AbgGVWNX (ORCPT ); Wed, 22 Jul 2020 18:13:23 -0400 Received: from mx0b-0016f401.pphosted.com ([67.231.156.173]:10958 "EHLO mx0b-0016f401.pphosted.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1733139AbgGVWMZ (ORCPT ); Wed, 22 Jul 2020 18:12:25 -0400 Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 06MM6fej019797; Wed, 22 Jul 2020 15:12:02 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=a0EnwJpf0ran+8HT6cV44IEXjlCvfbNa2spUiSU7yFo=; b=TKGDLSJbOCjuQya1xz6Th0kGqj8EijAGlD/CenPlcp4vgqwifyl8Qn+k0GsxXlPiUCVf MSaRIMvUdm+G3ZehMsjKAH6asjUaUG6OQQOkXFwxGqIeT45zX72WqAfHPiMXFlz8kgAp Ejs0o4qmgGq+Anz+Ch/xvkOr6dTgMoVWYZLtLYRSk+AXhwl7zWz1t3+BhC/INTAQRkbJ iK1GJ3CzyMl0zoDrPhUwDnavdd5IK5Qa5whBho5FZMkvBOpdXC0bQyrBPY7JQZTShvm/ 6srnkXccv2FGKrG+dRO66A0rgyE2gtGUIvEtOXZvS4C9qlY3NrHuwEmxnJ0n9lu1x6cb oQ== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0b-0016f401.pphosted.com with ESMTP id 32c0kkt0n2-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Wed, 22 Jul 2020 15:12:02 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 22 Jul 2020 15:12:00 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Wed, 22 Jul 2020 15:12:00 -0700 Received: from NN-LT0049.marvell.com (NN-LT0049.marvell.com [10.193.54.6]) by maili.marvell.com (Postfix) with ESMTP id D54323F703F; Wed, 22 Jul 2020 15:11:53 -0700 (PDT) From: Alexander Lobakin To: "David S. 
Miller" , Jakub Kicinski CC: Alexander Lobakin , Igor Russkikh , Michal Kalderon , "Ariel Elior" , Denis Bolotin , "Doug Ledford" , Jason Gunthorpe , "Alexei Starovoitov" , Daniel Borkmann , "Jesper Dangaard Brouer" , John Fastabend , Martin KaFai Lau , Song Liu , "Yonghong Song" , Andrii Nakryiko , KP Singh , , , , , Subject: [PATCH v2 net-next 08/15] qed: simplify chain allocation with init params struct Date: Thu, 23 Jul 2020 01:10:38 +0300 Message-ID: <20200722221045.5436-9-alobakin@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200722221045.5436-1-alobakin@marvell.com> References: <20200722221045.5436-1-alobakin@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.235,18.0.687 definitions=2020-07-22_16:2020-07-22,2020-07-22 signatures=0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org To simplify qed_chain_alloc() prototype and call sites, introduce struct qed_chain_init_params to specify chain params, and pass a pointer to filled struct to the actual qed_chain_alloc() instead of a long list of separate arguments. Signed-off-by: Alexander Lobakin Signed-off-by: Igor Russkikh Signed-off-by: Michal Kalderon --- drivers/infiniband/hw/qedr/main.c | 20 ++-- drivers/infiniband/hw/qedr/verbs.c | 95 +++++++++---------- drivers/net/ethernet/qlogic/qed/qed_chain.c | 80 +++++++++------- drivers/net/ethernet/qlogic/qed/qed_dev_api.h | 32 +------ drivers/net/ethernet/qlogic/qed/qed_iscsi.c | 39 ++++---- drivers/net/ethernet/qlogic/qed/qed_ll2.c | 44 +++++---- drivers/net/ethernet/qlogic/qed/qed_spq.c | 90 +++++++++++------- drivers/net/ethernet/qlogic/qede/qede_main.c | 45 ++++----- include/linux/qed/qed_chain.h | 21 ++-- include/linux/qed/qed_if.h | 9 +- 10 files changed, 242 insertions(+), 233 deletions(-) diff --git a/drivers/infiniband/hw/qedr/main.c b/drivers/infiniband/hw/qedr/main.c index ccaedfd53e49..b1de8d608e4d 100644 --- a/drivers/infiniband/hw/qedr/main.c +++ b/drivers/infiniband/hw/qedr/main.c @@ -346,9 +346,14 @@ static void qedr_free_resources(struct qedr_dev *dev) static int qedr_alloc_resources(struct qedr_dev *dev) { + struct qed_chain_init_params params = { + .mode = QED_CHAIN_MODE_PBL, + .intended_use = QED_CHAIN_USE_TO_CONSUME, + .cnt_type = QED_CHAIN_CNT_TYPE_U16, + .elem_size = sizeof(struct regpair *), + }; struct qedr_cnq *cnq; __le16 *cons_pi; - u16 n_entries; int i, rc; dev->sgid_tbl = kcalloc(QEDR_MAX_SGID, sizeof(union ib_gid), @@ -382,7 +387,9 @@ static int qedr_alloc_resources(struct qedr_dev *dev) dev->sb_start = dev->ops->rdma_get_start_sb(dev->cdev); /* Allocate CNQ PBLs */ - n_entries = min_t(u32, QED_RDMA_MAX_CNQ_SIZE, QEDR_ROCE_MAX_CNQ_SIZE); + params.num_elems = min_t(u32, QED_RDMA_MAX_CNQ_SIZE, + QEDR_ROCE_MAX_CNQ_SIZE); + for (i = 0; i < dev->num_cnq; i++) { cnq = &dev->cnq_array[i]; @@ -391,13 +398,8 @@ static int qedr_alloc_resources(struct qedr_dev *dev) if (rc) goto err3; - rc = dev->ops->common->chain_alloc(dev->cdev, - QED_CHAIN_USE_TO_CONSUME, - QED_CHAIN_MODE_PBL, - QED_CHAIN_CNT_TYPE_U16, - n_entries, - sizeof(struct regpair *), - &cnq->pbl, NULL); + rc = dev->ops->common->chain_alloc(dev->cdev, &cnq->pbl, + ¶ms); if (rc) goto err4; diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c index 9b9e80266367..6737895a0d68 100644 --- a/drivers/infiniband/hw/qedr/verbs.c +++ b/drivers/infiniband/hw/qedr/verbs.c @@ -891,6 +891,12 @@ int qedr_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr, udata, 
struct qedr_ucontext, ibucontext); struct qed_rdma_destroy_cq_out_params destroy_oparams; struct qed_rdma_destroy_cq_in_params destroy_iparams; + struct qed_chain_init_params chain_params = { + .mode = QED_CHAIN_MODE_PBL, + .intended_use = QED_CHAIN_USE_TO_CONSUME, + .cnt_type = QED_CHAIN_CNT_TYPE_U32, + .elem_size = sizeof(union rdma_cqe), + }; struct qedr_dev *dev = get_qedr_dev(ibdev); struct qed_rdma_create_cq_in_params params; struct qedr_create_cq_ureq ureq = {}; @@ -917,6 +923,7 @@ int qedr_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr, chain_entries = qedr_align_cq_entries(entries); chain_entries = min_t(int, chain_entries, QEDR_MAX_CQES); + chain_params.num_elems = chain_entries; /* calc db offset. user will add DPI base, kernel will add db addr */ db_offset = DB_ADDR_SHIFT(DQ_PWM_OFFSET_UCM_RDMA_CQ_CONS_32BIT); @@ -951,13 +958,8 @@ int qedr_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr, } else { cq->cq_type = QEDR_CQ_TYPE_KERNEL; - rc = dev->ops->common->chain_alloc(dev->cdev, - QED_CHAIN_USE_TO_CONSUME, - QED_CHAIN_MODE_PBL, - QED_CHAIN_CNT_TYPE_U32, - chain_entries, - sizeof(union rdma_cqe), - &cq->pbl, NULL); + rc = dev->ops->common->chain_alloc(dev->cdev, &cq->pbl, + &chain_params); if (rc) goto err0; @@ -1446,6 +1448,12 @@ static int qedr_alloc_srq_kernel_params(struct qedr_srq *srq, struct ib_srq_init_attr *init_attr) { struct qedr_srq_hwq_info *hw_srq = &srq->hw_srq; + struct qed_chain_init_params params = { + .mode = QED_CHAIN_MODE_PBL, + .intended_use = QED_CHAIN_USE_TO_CONSUME_PRODUCE, + .cnt_type = QED_CHAIN_CNT_TYPE_U32, + .elem_size = QEDR_SRQ_WQE_ELEM_SIZE, + }; dma_addr_t phy_prod_pair_addr; u32 num_elems; void *va; @@ -1464,13 +1472,9 @@ static int qedr_alloc_srq_kernel_params(struct qedr_srq *srq, hw_srq->virt_prod_pair_addr = va; num_elems = init_attr->attr.max_wr * RDMA_MAX_SRQ_WQE_SIZE; - rc = dev->ops->common->chain_alloc(dev->cdev, - QED_CHAIN_USE_TO_CONSUME_PRODUCE, - QED_CHAIN_MODE_PBL, - QED_CHAIN_CNT_TYPE_U32, - num_elems, - QEDR_SRQ_WQE_ELEM_SIZE, - &hw_srq->pbl, NULL); + params.num_elems = num_elems; + + rc = dev->ops->common->chain_alloc(dev->cdev, &hw_srq->pbl, &params); if (rc) goto err0; @@ -1901,29 +1905,28 @@ qedr_roce_create_kernel_qp(struct qedr_dev *dev, u32 n_sq_elems, u32 n_rq_elems) { struct qed_rdma_create_qp_out_params out_params; + struct qed_chain_init_params params = { + .mode = QED_CHAIN_MODE_PBL, + .cnt_type = QED_CHAIN_CNT_TYPE_U32, + }; int rc; - rc = dev->ops->common->chain_alloc(dev->cdev, - QED_CHAIN_USE_TO_PRODUCE, - QED_CHAIN_MODE_PBL, - QED_CHAIN_CNT_TYPE_U32, - n_sq_elems, - QEDR_SQE_ELEMENT_SIZE, - &qp->sq.pbl, NULL); + params.intended_use = QED_CHAIN_USE_TO_PRODUCE; + params.num_elems = n_sq_elems; + params.elem_size = QEDR_SQE_ELEMENT_SIZE; + rc = dev->ops->common->chain_alloc(dev->cdev, &qp->sq.pbl, &params); if (rc) return rc; in_params->sq_num_pages = qed_chain_get_page_cnt(&qp->sq.pbl); in_params->sq_pbl_ptr = qed_chain_get_pbl_phys(&qp->sq.pbl); - rc = dev->ops->common->chain_alloc(dev->cdev, - QED_CHAIN_USE_TO_CONSUME_PRODUCE, - QED_CHAIN_MODE_PBL, - QED_CHAIN_CNT_TYPE_U32, - n_rq_elems, - QEDR_RQE_ELEMENT_SIZE, - &qp->rq.pbl, NULL); + params.intended_use = QED_CHAIN_USE_TO_CONSUME_PRODUCE; + params.num_elems = n_rq_elems; + params.elem_size = QEDR_RQE_ELEMENT_SIZE; + + rc = dev->ops->common->chain_alloc(dev->cdev, &qp->rq.pbl, &params); if (rc) return rc; @@ -1949,7 +1952,10 @@ qedr_iwarp_create_kernel_qp(struct qedr_dev *dev, u32 n_sq_elems, u32 n_rq_elems) { struct
qed_rdma_create_qp_out_params out_params; - struct qed_chain_ext_pbl ext_pbl; + struct qed_chain_init_params params = { + .mode = QED_CHAIN_MODE_PBL, + .cnt_type = QED_CHAIN_CNT_TYPE_U32, + }; int rc; in_params->sq_num_pages = QED_CHAIN_PAGE_CNT(n_sq_elems, @@ -1966,31 +1972,24 @@ qedr_iwarp_create_kernel_qp(struct qedr_dev *dev, return -EINVAL; /* Now we allocate the chain */ - ext_pbl.p_pbl_virt = out_params.sq_pbl_virt; - ext_pbl.p_pbl_phys = out_params.sq_pbl_phys; - rc = dev->ops->common->chain_alloc(dev->cdev, - QED_CHAIN_USE_TO_PRODUCE, - QED_CHAIN_MODE_PBL, - QED_CHAIN_CNT_TYPE_U32, - n_sq_elems, - QEDR_SQE_ELEMENT_SIZE, - &qp->sq.pbl, &ext_pbl); + params.intended_use = QED_CHAIN_USE_TO_PRODUCE; + params.num_elems = n_sq_elems; + params.elem_size = QEDR_SQE_ELEMENT_SIZE; + params.ext_pbl_virt = out_params.sq_pbl_virt; + params.ext_pbl_phys = out_params.sq_pbl_phys; + rc = dev->ops->common->chain_alloc(dev->cdev, &qp->sq.pbl, &params); if (rc) goto err; - ext_pbl.p_pbl_virt = out_params.rq_pbl_virt; - ext_pbl.p_pbl_phys = out_params.rq_pbl_phys; - - rc = dev->ops->common->chain_alloc(dev->cdev, - QED_CHAIN_USE_TO_CONSUME_PRODUCE, - QED_CHAIN_MODE_PBL, - QED_CHAIN_CNT_TYPE_U32, - n_rq_elems, - QEDR_RQE_ELEMENT_SIZE, - &qp->rq.pbl, &ext_pbl); + params.intended_use = QED_CHAIN_USE_TO_CONSUME_PRODUCE; + params.num_elems = n_rq_elems; + params.elem_size = QEDR_RQE_ELEMENT_SIZE; + params.ext_pbl_virt = out_params.rq_pbl_virt; + params.ext_pbl_phys = out_params.rq_pbl_phys; + rc = dev->ops->common->chain_alloc(dev->cdev, &qp->rq.pbl, &params); if (rc) goto err; diff --git a/drivers/net/ethernet/qlogic/qed/qed_chain.c b/drivers/net/ethernet/qlogic/qed/qed_chain.c index 6effee3b50f4..a68ee4b3dbbc 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_chain.c +++ b/drivers/net/ethernet/qlogic/qed/qed_chain.c @@ -7,23 +7,22 @@ #include "qed_dev_api.h" -static void qed_chain_init_params(struct qed_chain *chain, - u32 page_cnt, u8 elem_size, - enum qed_chain_use_mode intended_use, - enum qed_chain_mode mode, - enum qed_chain_cnt_type cnt_type, - const struct qed_chain_ext_pbl *ext_pbl) +static void qed_chain_init(struct qed_chain *chain, + const struct qed_chain_init_params *params, + u32 page_cnt) { memset(chain, 0, sizeof(*chain)); - chain->elem_size = elem_size; - chain->intended_use = intended_use; - chain->mode = mode; - chain->cnt_type = cnt_type; + chain->elem_size = params->elem_size; + chain->intended_use = params->intended_use; + chain->mode = params->mode; + chain->cnt_type = params->cnt_type; - chain->elem_per_page = ELEMS_PER_PAGE(elem_size); - chain->usable_per_page = USABLE_ELEMS_PER_PAGE(elem_size, mode); - chain->elem_unusable = UNUSABLE_ELEMS_PER_PAGE(elem_size, mode); + chain->elem_per_page = ELEMS_PER_PAGE(params->elem_size); + chain->usable_per_page = USABLE_ELEMS_PER_PAGE(params->elem_size, + params->mode); + chain->elem_unusable = UNUSABLE_ELEMS_PER_PAGE(params->elem_size, + params->mode); chain->elem_per_page_mask = chain->elem_per_page - 1; chain->next_page_mask = chain->usable_per_page & @@ -33,9 +32,9 @@ static void qed_chain_init_params(struct qed_chain *chain, chain->capacity = chain->usable_per_page * page_cnt; chain->size = chain->elem_per_page * page_cnt; - if (ext_pbl && ext_pbl->p_pbl_virt) { - chain->pbl_sp.table_virt = ext_pbl->p_pbl_virt; - chain->pbl_sp.table_phys = ext_pbl->p_pbl_phys; + if (params->ext_pbl_virt) { + chain->pbl_sp.table_virt = params->ext_pbl_virt; + chain->pbl_sp.table_phys = params->ext_pbl_phys; chain->b_external_pbl = true; } @@ -154,10 +153,16 @@
void qed_chain_free(struct qed_dev *cdev, struct qed_chain *chain) static int qed_chain_alloc_sanity_check(struct qed_dev *cdev, - enum qed_chain_cnt_type cnt_type, - size_t elem_size, u32 page_cnt) + const struct qed_chain_init_params *params, + u32 page_cnt) { - u64 chain_size = ELEMS_PER_PAGE(elem_size) * page_cnt; + u64 chain_size; + + chain_size = ELEMS_PER_PAGE(params->elem_size); + chain_size *= page_cnt; + + if (!chain_size) + return -EINVAL; /* The actual chain size can be larger than the maximal possible value * after rounding up the requested elements number to pages, and after @@ -165,7 +170,7 @@ qed_chain_alloc_sanity_check(struct qed_dev *cdev, * The size of a "u16" chain can be (U16_MAX + 1) since the chain * size/capacity fields are of u32 type. */ - switch (cnt_type) { + switch (params->cnt_type) { case QED_CHAIN_CNT_TYPE_U16: if (chain_size > U16_MAX + 1) break; @@ -298,37 +303,42 @@ static int qed_chain_alloc_pbl(struct qed_dev *cdev, struct qed_chain *chain) return 0; } -int qed_chain_alloc(struct qed_dev *cdev, - enum qed_chain_use_mode intended_use, - enum qed_chain_mode mode, - enum qed_chain_cnt_type cnt_type, - u32 num_elems, - size_t elem_size, - struct qed_chain *chain, - struct qed_chain_ext_pbl *ext_pbl) +/** + * qed_chain_alloc() - Allocate and initialize a chain. + * + * @cdev: Main device structure. + * @chain: Chain to be processed. + * @params: Chain initialization parameters. + * + * Return: 0 on success, negative errno otherwise. + */ +int qed_chain_alloc(struct qed_dev *cdev, struct qed_chain *chain, + struct qed_chain_init_params *params) { u32 page_cnt; int rc; - if (mode == QED_CHAIN_MODE_SINGLE) + if (params->mode == QED_CHAIN_MODE_SINGLE) page_cnt = 1; else - page_cnt = QED_CHAIN_PAGE_CNT(num_elems, elem_size, mode); + page_cnt = QED_CHAIN_PAGE_CNT(params->num_elems, + params->elem_size, + params->mode); - rc = qed_chain_alloc_sanity_check(cdev, cnt_type, elem_size, page_cnt); + rc = qed_chain_alloc_sanity_check(cdev, params, page_cnt); if (rc) { DP_NOTICE(cdev, "Cannot allocate a chain with the given arguments:\n"); DP_NOTICE(cdev, "[use_mode %d, mode %d, cnt_type %d, num_elems %d, elem_size %zu]\n", - intended_use, mode, cnt_type, num_elems, elem_size); + params->intended_use, params->mode, params->cnt_type, + params->num_elems, params->elem_size); return rc; } - qed_chain_init_params(chain, page_cnt, elem_size, intended_use, mode, - cnt_type, ext_pbl); + qed_chain_init(chain, params, page_cnt); - switch (mode) { + switch (params->mode) { case QED_CHAIN_MODE_NEXT_PTR: rc = qed_chain_alloc_next_ptr(cdev, chain); break; diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev_api.h b/drivers/net/ethernet/qlogic/qed/qed_dev_api.h index 395d4932c262..d3c1f3879be8 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_dev_api.h +++ b/drivers/net/ethernet/qlogic/qed/qed_dev_api.h @@ -254,35 +254,9 @@ int qed_dmae_host2host(struct qed_hwfn *p_hwfn, dma_addr_t dest_addr, u32 size_in_dwords, struct qed_dmae_params *p_params); -/** - * @brief qed_chain_alloc - Allocate and initialize a chain - * - * @param p_hwfn - * @param intended_use - * @param mode - * @param num_elems - * @param elem_size - * @param p_chain - * @param ext_pbl - a possible external PBL - * - * @return int - */ -int -qed_chain_alloc(struct qed_dev *cdev, - enum qed_chain_use_mode intended_use, - enum qed_chain_mode mode, - enum qed_chain_cnt_type cnt_type, - u32 num_elems, - size_t elem_size, - struct qed_chain *p_chain, struct qed_chain_ext_pbl *ext_pbl); - -/** - * @brief qed_chain_free - Free 
chain DMA memory - * - * @param p_hwfn - * @param p_chain - */ -void qed_chain_free(struct qed_dev *cdev, struct qed_chain *p_chain); +int qed_chain_alloc(struct qed_dev *cdev, struct qed_chain *chain, + struct qed_chain_init_params *params); +void qed_chain_free(struct qed_dev *cdev, struct qed_chain *chain); /** * @@brief qed_fw_l2_queue - Get absolute L2 queue ID diff --git a/drivers/net/ethernet/qlogic/qed/qed_iscsi.c b/drivers/net/ethernet/qlogic/qed/qed_iscsi.c index 25d2c882d7ac..4eae4ee3538f 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_iscsi.c +++ b/drivers/net/ethernet/qlogic/qed/qed_iscsi.c @@ -684,9 +684,13 @@ static int qed_iscsi_setup_connection(struct qed_iscsi_conn *p_conn) static int qed_iscsi_allocate_connection(struct qed_hwfn *p_hwfn, struct qed_iscsi_conn **p_out_conn) { - u16 uhq_num_elements = 0, xhq_num_elements = 0, r2tq_num_elements = 0; struct scsi_terminate_extra_params *p_q_cnts = NULL; struct qed_iscsi_pf_params *p_params = NULL; + struct qed_chain_init_params params = { + .mode = QED_CHAIN_MODE_PBL, + .intended_use = QED_CHAIN_USE_TO_CONSUME_PRODUCE, + .cnt_type = QED_CHAIN_CNT_TYPE_U16, + }; struct tcp_upload_params *p_tcp = NULL; struct qed_iscsi_conn *p_conn = NULL; int rc = 0; @@ -727,34 +731,25 @@ static int qed_iscsi_allocate_connection(struct qed_hwfn *p_hwfn, goto nomem_upload_param; p_conn->tcp_upload_params_virt_addr = p_tcp; - r2tq_num_elements = p_params->num_r2tq_pages_in_ring * - QED_CHAIN_PAGE_SIZE / 0x80; - rc = qed_chain_alloc(p_hwfn->cdev, - QED_CHAIN_USE_TO_CONSUME_PRODUCE, - QED_CHAIN_MODE_PBL, - QED_CHAIN_CNT_TYPE_U16, - r2tq_num_elements, 0x80, &p_conn->r2tq, NULL); + params.num_elems = p_params->num_r2tq_pages_in_ring * + QED_CHAIN_PAGE_SIZE / sizeof(struct iscsi_wqe); + params.elem_size = sizeof(struct iscsi_wqe); + + rc = qed_chain_alloc(p_hwfn->cdev, &p_conn->r2tq, &params); if (rc) goto nomem_r2tq; - uhq_num_elements = p_params->num_uhq_pages_in_ring * + params.num_elems = p_params->num_uhq_pages_in_ring * QED_CHAIN_PAGE_SIZE / sizeof(struct iscsi_uhqe); - rc = qed_chain_alloc(p_hwfn->cdev, - QED_CHAIN_USE_TO_CONSUME_PRODUCE, - QED_CHAIN_MODE_PBL, - QED_CHAIN_CNT_TYPE_U16, - uhq_num_elements, - sizeof(struct iscsi_uhqe), &p_conn->uhq, NULL); + params.elem_size = sizeof(struct iscsi_uhqe); + + rc = qed_chain_alloc(p_hwfn->cdev, &p_conn->uhq, &params); if (rc) goto nomem_uhq; - xhq_num_elements = uhq_num_elements; - rc = qed_chain_alloc(p_hwfn->cdev, - QED_CHAIN_USE_TO_CONSUME_PRODUCE, - QED_CHAIN_MODE_PBL, - QED_CHAIN_CNT_TYPE_U16, - xhq_num_elements, - sizeof(struct iscsi_xhqe), &p_conn->xhq, NULL); + params.elem_size = sizeof(struct iscsi_xhqe); + + rc = qed_chain_alloc(p_hwfn->cdev, &p_conn->xhq, &params); if (rc) goto nomem; diff --git a/drivers/net/ethernet/qlogic/qed/qed_ll2.c b/drivers/net/ethernet/qlogic/qed/qed_ll2.c index 6f4aec339cd4..0452b728c527 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_ll2.c +++ b/drivers/net/ethernet/qlogic/qed/qed_ll2.c @@ -1125,6 +1125,12 @@ static int qed_ll2_acquire_connection_rx(struct qed_hwfn *p_hwfn, struct qed_ll2_info *p_ll2_info) { + struct qed_chain_init_params params = { + .intended_use = QED_CHAIN_USE_TO_CONSUME_PRODUCE, + .cnt_type = QED_CHAIN_CNT_TYPE_U16, + .num_elems = p_ll2_info->input.rx_num_desc, + }; + struct qed_dev *cdev = p_hwfn->cdev; struct qed_ll2_rx_packet *p_descq; u32 capacity; int rc = 0; @@ -1132,13 +1138,10 @@ qed_ll2_acquire_connection_rx(struct qed_hwfn *p_hwfn, if (!p_ll2_info->input.rx_num_desc) goto out; - rc = qed_chain_alloc(p_hwfn->cdev, -
QED_CHAIN_USE_TO_CONSUME_PRODUCE, - QED_CHAIN_MODE_NEXT_PTR, - QED_CHAIN_CNT_TYPE_U16, - p_ll2_info->input.rx_num_desc, - sizeof(struct core_rx_bd), - &p_ll2_info->rx_queue.rxq_chain, NULL); + params.mode = QED_CHAIN_MODE_NEXT_PTR; + params.elem_size = sizeof(struct core_rx_bd); + + rc = qed_chain_alloc(cdev, &p_ll2_info->rx_queue.rxq_chain, &params); if (rc) { DP_NOTICE(p_hwfn, "Failed to allocate ll2 rxq chain\n"); goto out; } @@ -1154,13 +1157,10 @@ qed_ll2_acquire_connection_rx(struct qed_hwfn *p_hwfn, } p_ll2_info->rx_queue.descq_array = p_descq; - rc = qed_chain_alloc(p_hwfn->cdev, - QED_CHAIN_USE_TO_CONSUME_PRODUCE, - QED_CHAIN_MODE_PBL, - QED_CHAIN_CNT_TYPE_U16, - p_ll2_info->input.rx_num_desc, - sizeof(struct core_rx_fast_path_cqe), - &p_ll2_info->rx_queue.rcq_chain, NULL); + params.mode = QED_CHAIN_MODE_PBL; + params.elem_size = sizeof(struct core_rx_fast_path_cqe); + + rc = qed_chain_alloc(cdev, &p_ll2_info->rx_queue.rcq_chain, &params); if (rc) { DP_NOTICE(p_hwfn, "Failed to allocate ll2 rcq chain\n"); goto out; } @@ -1177,6 +1177,13 @@ qed_ll2_acquire_connection_rx(struct qed_hwfn *p_hwfn, static int qed_ll2_acquire_connection_tx(struct qed_hwfn *p_hwfn, struct qed_ll2_info *p_ll2_info) { + struct qed_chain_init_params params = { + .mode = QED_CHAIN_MODE_PBL, + .intended_use = QED_CHAIN_USE_TO_CONSUME_PRODUCE, + .cnt_type = QED_CHAIN_CNT_TYPE_U16, + .num_elems = p_ll2_info->input.tx_num_desc, + .elem_size = sizeof(struct core_tx_bd), + }; struct qed_ll2_tx_packet *p_descq; u32 desc_size; u32 capacity; @@ -1185,13 +1192,8 @@ static int qed_ll2_acquire_connection_tx(struct qed_hwfn *p_hwfn, if (!p_ll2_info->input.tx_num_desc) goto out; - rc = qed_chain_alloc(p_hwfn->cdev, - QED_CHAIN_USE_TO_CONSUME_PRODUCE, - QED_CHAIN_MODE_PBL, - QED_CHAIN_CNT_TYPE_U16, - p_ll2_info->input.tx_num_desc, - sizeof(struct core_tx_bd), - &p_ll2_info->tx_queue.txq_chain, NULL); + rc = qed_chain_alloc(p_hwfn->cdev, &p_ll2_info->tx_queue.txq_chain, + &params); if (rc) goto out; diff --git a/drivers/net/ethernet/qlogic/qed/qed_spq.c b/drivers/net/ethernet/qlogic/qed/qed_spq.c index 92ab029789e5..0bc1a0aeb56e 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_spq.c +++ b/drivers/net/ethernet/qlogic/qed/qed_spq.c @@ -382,22 +382,26 @@ int qed_eq_completion(struct qed_hwfn *p_hwfn, void *cookie) int qed_eq_alloc(struct qed_hwfn *p_hwfn, u16 num_elem) { + struct qed_chain_init_params params = { + .mode = QED_CHAIN_MODE_PBL, + .intended_use = QED_CHAIN_USE_TO_PRODUCE, + .cnt_type = QED_CHAIN_CNT_TYPE_U16, + .num_elems = num_elem, + .elem_size = sizeof(union event_ring_element), + }; struct qed_eq *p_eq; + int ret; /* Allocate EQ struct */ p_eq = kzalloc(sizeof(*p_eq), GFP_KERNEL); if (!p_eq) return -ENOMEM; - /* Allocate and initialize EQ chain*/ - if (qed_chain_alloc(p_hwfn->cdev, - QED_CHAIN_USE_TO_PRODUCE, - QED_CHAIN_MODE_PBL, - QED_CHAIN_CNT_TYPE_U16, - num_elem, - sizeof(union event_ring_element), - &p_eq->chain, NULL)) + ret = qed_chain_alloc(p_hwfn->cdev, &p_eq->chain, &params); + if (ret) { + DP_NOTICE(p_hwfn, "Failed to allocate EQ chain\n"); goto eq_allocate_fail; + } /* register EQ completion on the SP SB */ qed_int_register_cb(p_hwfn, qed_eq_completion, @@ -408,7 +412,8 @@ int qed_eq_alloc(struct qed_hwfn *p_hwfn, u16 num_elem) eq_allocate_fail: kfree(p_eq); - return -ENOMEM; + + return ret; } void qed_eq_setup(struct qed_hwfn *p_hwfn) @@ -529,33 +534,40 @@ void qed_spq_setup(struct qed_hwfn *p_hwfn) int qed_spq_alloc(struct qed_hwfn *p_hwfn) { + struct qed_chain_init_params params = { + .mode =
QED_CHAIN_MODE_SINGLE, + .intended_use = QED_CHAIN_USE_TO_PRODUCE, + .cnt_type = QED_CHAIN_CNT_TYPE_U16, + .elem_size = sizeof(struct slow_path_element), + }; + struct qed_dev *cdev = p_hwfn->cdev; struct qed_spq_entry *p_virt = NULL; struct qed_spq *p_spq = NULL; dma_addr_t p_phys = 0; u32 capacity; + int ret; /* SPQ struct */ p_spq = kzalloc(sizeof(struct qed_spq), GFP_KERNEL); if (!p_spq) return -ENOMEM; - /* SPQ ring */ - if (qed_chain_alloc(p_hwfn->cdev, - QED_CHAIN_USE_TO_PRODUCE, - QED_CHAIN_MODE_SINGLE, - QED_CHAIN_CNT_TYPE_U16, - 0, /* N/A when the mode is SINGLE */ - sizeof(struct slow_path_element), - &p_spq->chain, NULL)) - goto spq_allocate_fail; + /* SPQ ring */ + ret = qed_chain_alloc(cdev, &p_spq->chain, &params); + if (ret) { + DP_NOTICE(p_hwfn, "Failed to allocate SPQ chain\n"); + goto spq_chain_alloc_fail; + } /* allocate and fill the SPQ elements (incl. ramrod data list) */ capacity = qed_chain_get_capacity(&p_spq->chain); - p_virt = dma_alloc_coherent(&p_hwfn->cdev->pdev->dev, + ret = -ENOMEM; + + p_virt = dma_alloc_coherent(&cdev->pdev->dev, capacity * sizeof(struct qed_spq_entry), &p_phys, GFP_KERNEL); if (!p_virt) - goto spq_allocate_fail; + goto spq_alloc_fail; p_spq->p_virt = p_virt; p_spq->p_phys = p_phys; @@ -563,10 +575,12 @@ int qed_spq_alloc(struct qed_hwfn *p_hwfn) return 0; -spq_allocate_fail: - qed_chain_free(p_hwfn->cdev, &p_spq->chain); +spq_alloc_fail: + qed_chain_free(cdev, &p_spq->chain); +spq_chain_alloc_fail: kfree(p_spq); - return -ENOMEM; + + return ret; } void qed_spq_free(struct qed_hwfn *p_hwfn) @@ -967,30 +981,40 @@ int qed_spq_completion(struct qed_hwfn *p_hwfn, return 0; } +#define QED_SPQ_CONSQ_ELEM_SIZE 0x80 + int qed_consq_alloc(struct qed_hwfn *p_hwfn) { + struct qed_chain_init_params params = { + .mode = QED_CHAIN_MODE_PBL, + .intended_use = QED_CHAIN_USE_TO_PRODUCE, + .cnt_type = QED_CHAIN_CNT_TYPE_U16, + .num_elems = QED_CHAIN_PAGE_SIZE / QED_SPQ_CONSQ_ELEM_SIZE, + .elem_size = QED_SPQ_CONSQ_ELEM_SIZE, + }; struct qed_consq *p_consq; + int ret; /* Allocate ConsQ struct */ p_consq = kzalloc(sizeof(*p_consq), GFP_KERNEL); if (!p_consq) return -ENOMEM; - /* Allocate and initialize EQ chain*/ - if (qed_chain_alloc(p_hwfn->cdev, - QED_CHAIN_USE_TO_PRODUCE, - QED_CHAIN_MODE_PBL, - QED_CHAIN_CNT_TYPE_U16, - QED_CHAIN_PAGE_SIZE / 0x80, - 0x80, &p_consq->chain, NULL)) - goto consq_allocate_fail; + /* Allocate and initialize ConsQ chain */ + ret = qed_chain_alloc(p_hwfn->cdev, &p_consq->chain, &params); + if (ret) { + DP_NOTICE(p_hwfn, "Failed to allocate ConsQ chain"); + goto consq_alloc_fail; + } p_hwfn->p_consq = p_consq; + return 0; -consq_allocate_fail: +consq_alloc_fail: kfree(p_consq); - return -ENOMEM; + + return ret; } void qed_consq_setup(struct qed_hwfn *p_hwfn) diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c index 6f2171dc0dea..b5a95f165520 100644 --- a/drivers/net/ethernet/qlogic/qede/qede_main.c +++ b/drivers/net/ethernet/qlogic/qede/qede_main.c @@ -1442,6 +1442,11 @@ static void qede_set_tpa_param(struct qede_rx_queue *rxq) /* This function allocates all memory needed per Rx queue */ static int qede_alloc_mem_rxq(struct qede_dev *edev, struct qede_rx_queue *rxq) { + struct qed_chain_init_params params = { + .cnt_type = QED_CHAIN_CNT_TYPE_U16, + .num_elems = RX_RING_SIZE, + }; + struct qed_dev *cdev = edev->cdev; int i, rc, size; rxq->num_rx_buffers = edev->q_num_rx_buffers; @@ -1477,24 +1482,20 @@ static int qede_alloc_mem_rxq(struct qede_dev *edev, struct qede_rx_queue *rxq) } /*
Allocate FW Rx ring */ - rc = edev->ops->common->chain_alloc(edev->cdev, - QED_CHAIN_USE_TO_CONSUME_PRODUCE, - QED_CHAIN_MODE_NEXT_PTR, - QED_CHAIN_CNT_TYPE_U16, - RX_RING_SIZE, - sizeof(struct eth_rx_bd), - &rxq->rx_bd_ring, NULL); + params.mode = QED_CHAIN_MODE_NEXT_PTR; + params.intended_use = QED_CHAIN_USE_TO_CONSUME_PRODUCE; + params.elem_size = sizeof(struct eth_rx_bd); + + rc = edev->ops->common->chain_alloc(cdev, &rxq->rx_bd_ring, &params); if (rc) goto err; /* Allocate FW completion ring */ - rc = edev->ops->common->chain_alloc(edev->cdev, - QED_CHAIN_USE_TO_CONSUME, - QED_CHAIN_MODE_PBL, - QED_CHAIN_CNT_TYPE_U16, - RX_RING_SIZE, - sizeof(union eth_rx_cqe), - &rxq->rx_comp_ring, NULL); + params.mode = QED_CHAIN_MODE_PBL; + params.intended_use = QED_CHAIN_USE_TO_CONSUME; + params.elem_size = sizeof(union eth_rx_cqe); + + rc = edev->ops->common->chain_alloc(cdev, &rxq->rx_comp_ring, &params); if (rc) goto err; @@ -1531,7 +1532,13 @@ static void qede_free_mem_txq(struct qede_dev *edev, struct qede_tx_queue *txq) /* This function allocates all memory needed per Tx queue */ static int qede_alloc_mem_txq(struct qede_dev *edev, struct qede_tx_queue *txq) { - union eth_tx_bd_types *p_virt; + struct qed_chain_init_params params = { + .mode = QED_CHAIN_MODE_PBL, + .intended_use = QED_CHAIN_USE_TO_CONSUME_PRODUCE, + .cnt_type = QED_CHAIN_CNT_TYPE_U16, + .num_elems = edev->q_num_tx_buffers, + .elem_size = sizeof(union eth_tx_bd_types), + }; int size, rc; txq->num_tx_buffers = edev->q_num_tx_buffers; @@ -1549,13 +1556,7 @@ static int qede_alloc_mem_txq(struct qede_dev *edev, struct qede_tx_queue *txq) goto err; } - rc = edev->ops->common->chain_alloc(edev->cdev, - QED_CHAIN_USE_TO_CONSUME_PRODUCE, - QED_CHAIN_MODE_PBL, - QED_CHAIN_CNT_TYPE_U16, - txq->num_tx_buffers, - sizeof(*p_virt), - &txq->tx_pbl, NULL); + rc = edev->ops->common->chain_alloc(edev->cdev, &txq->tx_pbl, &params); if (rc) goto err; diff --git a/include/linux/qed/qed_chain.h b/include/linux/qed/qed_chain.h index a0d83095dc73..f5cfee0934e5 100644 --- a/include/linux/qed/qed_chain.h +++ b/include/linux/qed/qed_chain.h @@ -54,11 +54,6 @@ struct qed_chain_pbl_u32 { u32 cons_page_idx; }; -struct qed_chain_ext_pbl { - dma_addr_t p_pbl_phys; - void *p_pbl_virt; -}; - struct qed_chain_u16 { /* Cyclic index of next element to produce/consme */ u16 prod_idx; @@ -119,7 +114,7 @@ struct qed_chain { u16 usable_per_page; u8 elem_unusable; - u8 cnt_type; + enum qed_chain_cnt_type cnt_type; /* Slowpath of the chain - required for initialization and destruction, * but isn't involved in regular functionality.
@@ -142,11 +137,23 @@ struct qed_chain { /* Total number of elements [for entire chain] */ u32 size; - u8 intended_use; + enum qed_chain_use_mode intended_use; bool b_external_pbl; }; +struct qed_chain_init_params { + enum qed_chain_mode mode; + enum qed_chain_use_mode intended_use; + enum qed_chain_cnt_type cnt_type; + + u32 num_elems; + size_t elem_size; + + void *ext_pbl_virt; + dma_addr_t ext_pbl_phys; +}; + #define QED_CHAIN_PAGE_SIZE 0x1000 #define ELEMS_PER_PAGE(elem_size) \ diff --git a/include/linux/qed/qed_if.h b/include/linux/qed/qed_if.h index a5c6854343e6..cd6a5c7e56eb 100644 --- a/include/linux/qed/qed_if.h +++ b/include/linux/qed/qed_if.h @@ -948,13 +948,8 @@ struct qed_common_ops { u8 dp_level); int (*chain_alloc)(struct qed_dev *cdev, - enum qed_chain_use_mode intended_use, - enum qed_chain_mode mode, - enum qed_chain_cnt_type cnt_type, - u32 num_elems, - size_t elem_size, - struct qed_chain *p_chain, - struct qed_chain_ext_pbl *ext_pbl); + struct qed_chain *chain, + struct qed_chain_init_params *params); void (*chain_free)(struct qed_dev *cdev, struct qed_chain *p_chain); From patchwork Wed Jul 22 22:10:39 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 1334222 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=marvell.com Authentication-Results: ozlabs.org; dkim=fail reason="key not found in DNS" header.d=marvell.com header.i=@marvell.com header.a=rsa-sha256 header.s=pfpt0818 header.b=l5mrJz9E; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4BBqTX2GQ2z9sTT for ; Thu, 23 Jul 2020 08:12:48 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2387473AbgGVWMq (ORCPT ); Wed, 22 Jul 2020 18:12:46 -0400 Received: from mx0b-0016f401.pphosted.com ([67.231.156.173]:3718 "EHLO mx0b-0016f401.pphosted.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2387413AbgGVWMa (ORCPT ); Wed, 22 Jul 2020 18:12:30 -0400 Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 06MM6feo019797; Wed, 22 Jul 2020 15:12:13 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=jRLxhNIpSzeJRLCvsdZldS9LND3ihNgSRMJsSADNQ9Q=; b=l5mrJz9EIT6UGz53lgdl4vkXtVvJwXmDdZkQqMMQ7mM37OZbg6iKXy/sgLPqcqZj8DqQ +fFoavhlQH1A9rHX26CBr0f5lc+3JVcRTXiBXSZrZBpk3zmC2tlESm3aXfyBnR1BLqVc 1hnNpnGTOVu5SPZUj0A4VcGs0Gd5UKSC/s58D7wda7llfLqLGgPUYdYetZ6J15A1AG8Y fo8jRsAk08vqL6H6ti92GjhDGh6HbWjOz6HiPh8f5CwGgI0G07vComuVOEuNLBSgpfx3 39fbWXdmCUWgpvg5XS7FRps8DV5wPtLguwTU9laYSWGERdGsQqr/orc3//CK60gso9q1 pA== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0b-0016f401.pphosted.com with ESMTP id 32c0kkt0nd-3 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Wed, 22 Jul 2020 15:12:13 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by SC-EXCH04.marvell.com (10.93.176.84) 
with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 22 Jul 2020 15:12:07 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Wed, 22 Jul 2020 15:12:07 -0700 Received: from NN-LT0049.marvell.com (NN-LT0049.marvell.com [10.193.54.6]) by maili.marvell.com (Postfix) with ESMTP id A90A13F7041; Wed, 22 Jul 2020 15:12:00 -0700 (PDT) From: Alexander Lobakin To: "David S. Miller" , Jakub Kicinski CC: Alexander Lobakin , Igor Russkikh , Michal Kalderon , "Ariel Elior" , Denis Bolotin , "Doug Ledford" , Jason Gunthorpe , "Alexei Starovoitov" , Daniel Borkmann , "Jesper Dangaard Brouer" , John Fastabend , Martin KaFai Lau , Song Liu , "Yonghong Song" , Andrii Nakryiko , KP Singh , , , , , Subject: [PATCH v2 net-next 09/15] qed: add support for different page sizes for chains Date: Thu, 23 Jul 2020 01:10:39 +0300 Message-ID: <20200722221045.5436-10-alobakin@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200722221045.5436-1-alobakin@marvell.com> References: <20200722221045.5436-1-alobakin@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.235,18.0.687 definitions=2020-07-22_16:2020-07-22,2020-07-22 signatures=0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Extend current infrastructure to store chain page size in a struct and use it in all functions instead of fixed QED_CHAIN_PAGE_SIZE. Its value remains the default one, but can be overridden in qed_chain_init_params before chain allocation. Signed-off-by: Alexander Lobakin Signed-off-by: Igor Russkikh Signed-off-by: Michal Kalderon --- drivers/infiniband/hw/qedr/verbs.c | 2 ++ drivers/net/ethernet/qlogic/qed/qed_chain.c | 28 +++++++++++++-------- include/linux/qed/qed_chain.h | 21 ++++++++++------ 3 files changed, 33 insertions(+), 18 deletions(-) diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c index 6737895a0d68..49b8a43e3fa2 100644 --- a/drivers/infiniband/hw/qedr/verbs.c +++ b/drivers/infiniband/hw/qedr/verbs.c @@ -1960,9 +1960,11 @@ qedr_iwarp_create_kernel_qp(struct qedr_dev *dev, in_params->sq_num_pages = QED_CHAIN_PAGE_CNT(n_sq_elems, QEDR_SQE_ELEMENT_SIZE, + QED_CHAIN_PAGE_SIZE, QED_CHAIN_MODE_PBL); in_params->rq_num_pages = QED_CHAIN_PAGE_CNT(n_rq_elems, QEDR_RQE_ELEMENT_SIZE, + QED_CHAIN_PAGE_SIZE, QED_CHAIN_MODE_PBL); qp->qed_qp = dev->ops->rdma_create_qp(dev->rdma_ctx, diff --git a/drivers/net/ethernet/qlogic/qed/qed_chain.c b/drivers/net/ethernet/qlogic/qed/qed_chain.c index a68ee4b3dbbc..f8efd36d66e0 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_chain.c +++ b/drivers/net/ethernet/qlogic/qed/qed_chain.c @@ -18,8 +18,10 @@ static void qed_chain_init(struct qed_chain *chain, chain->mode = params->mode; chain->cnt_type = params->cnt_type; - chain->elem_per_page = ELEMS_PER_PAGE(params->elem_size); + chain->elem_per_page = ELEMS_PER_PAGE(params->elem_size, + params->page_size); chain->usable_per_page = USABLE_ELEMS_PER_PAGE(params->elem_size, + params->page_size, params->mode); chain->elem_unusable = UNUSABLE_ELEMS_PER_PAGE(params->elem_size, params->mode); @@ -28,6 +30,7 @@ static void qed_chain_init(struct qed_chain *chain, chain->next_page_mask = chain->usable_per_page & chain->elem_per_page_mask; + chain->page_size = params->page_size; chain->page_cnt = page_cnt; chain->capacity = chain->usable_per_page * page_cnt; chain->size = chain->elem_per_page * page_cnt; @@ -82,7 +85,7 @@ 
static void qed_chain_free_next_ptr(struct qed_dev *cdev, virt_next = next->next_virt; phys_next = HILO_DMA_REGPAIR(next->next_phys); - dma_free_coherent(dev, QED_CHAIN_PAGE_SIZE, virt, phys); + dma_free_coherent(dev, chain->page_size, virt, phys); virt = virt_next; phys = phys_next; @@ -95,7 +98,7 @@ static void qed_chain_free_single(struct qed_dev *cdev, if (!chain->p_virt_addr) return; - dma_free_coherent(&cdev->pdev->dev, QED_CHAIN_PAGE_SIZE, + dma_free_coherent(&cdev->pdev->dev, chain->page_size, chain->p_virt_addr, chain->p_phys_addr); } @@ -113,7 +116,7 @@ static void qed_chain_free_pbl(struct qed_dev *cdev, struct qed_chain *chain) if (!entry->virt_addr) break; - dma_free_coherent(dev, QED_CHAIN_PAGE_SIZE, entry->virt_addr, + dma_free_coherent(dev, chain->page_size, entry->virt_addr, entry->dma_map); } @@ -158,7 +161,7 @@ qed_chain_alloc_sanity_check(struct qed_dev *cdev, { u64 chain_size; - chain_size = ELEMS_PER_PAGE(params->elem_size); + chain_size = ELEMS_PER_PAGE(params->elem_size, params->page_size); chain_size *= page_cnt; if (!chain_size) @@ -201,7 +204,7 @@ static int qed_chain_alloc_next_ptr(struct qed_dev *cdev, u32 i; for (i = 0; i < chain->page_cnt; i++) { - virt = dma_alloc_coherent(dev, QED_CHAIN_PAGE_SIZE, &phys, + virt = dma_alloc_coherent(dev, chain->page_size, &phys, GFP_KERNEL); if (!virt) return -ENOMEM; @@ -232,7 +235,7 @@ static int qed_chain_alloc_single(struct qed_dev *cdev, dma_addr_t phys; void *virt; - virt = dma_alloc_coherent(&cdev->pdev->dev, QED_CHAIN_PAGE_SIZE, + virt = dma_alloc_coherent(&cdev->pdev->dev, chain->page_size, &phys, GFP_KERNEL); if (!virt) return -ENOMEM; @@ -282,7 +285,7 @@ static int qed_chain_alloc_pbl(struct qed_dev *cdev, struct qed_chain *chain) alloc_pages: for (i = 0; i < page_cnt; i++) { - virt = dma_alloc_coherent(dev, QED_CHAIN_PAGE_SIZE, &phys, + virt = dma_alloc_coherent(dev, chain->page_size, &phys, GFP_KERNEL); if (!virt) return -ENOMEM; @@ -318,11 +321,15 @@ int qed_chain_alloc(struct qed_dev *cdev, struct qed_chain *chain, u32 page_cnt; int rc; + if (!params->page_size) + params->page_size = QED_CHAIN_PAGE_SIZE; + if (params->mode == QED_CHAIN_MODE_SINGLE) page_cnt = 1; else page_cnt = QED_CHAIN_PAGE_CNT(params->num_elems, params->elem_size, + params->page_size, params->mode); rc = qed_chain_alloc_sanity_check(cdev, params, page_cnt); @@ -330,9 +337,10 @@ int qed_chain_alloc(struct qed_dev *cdev, struct qed_chain *chain, DP_NOTICE(cdev, "Cannot allocate a chain with the given arguments:\n"); DP_NOTICE(cdev, - "[use_mode %d, mode %d, cnt_type %d, num_elems %d, elem_size %zu]\n", + "[use_mode %d, mode %d, cnt_type %d, num_elems %d, elem_size %zu, page_size %u]\n", params->intended_use, params->mode, params->cnt_type, - params->num_elems, params->elem_size); + params->num_elems, params->elem_size, + params->page_size); return rc; } diff --git a/include/linux/qed/qed_chain.h b/include/linux/qed/qed_chain.h index f5cfee0934e5..8a96c361cc19 100644 --- a/include/linux/qed/qed_chain.h +++ b/include/linux/qed/qed_chain.h @@ -11,6 +11,7 @@ #include #include #include +#include <linux/sizes.h> #include #include @@ -119,6 +121,8 @@ struct qed_chain { * but isn't involved in regular functionality.
*/ + u32 page_size; + /* Base address of a pre-allocated buffer for pbl */ struct { __le64 *table_virt; @@ -147,6 +150,7 @@ struct qed_chain_init_params { enum qed_chain_use_mode intended_use; enum qed_chain_cnt_type cnt_type; + u32 page_size; u32 num_elems; size_t elem_size; @@ -154,22 +158,23 @@ struct qed_chain_init_params { dma_addr_t ext_pbl_phys; }; -#define QED_CHAIN_PAGE_SIZE 0x1000 +#define QED_CHAIN_PAGE_SIZE SZ_4K -#define ELEMS_PER_PAGE(elem_size) \ - (QED_CHAIN_PAGE_SIZE / (elem_size)) +#define ELEMS_PER_PAGE(elem_size, page_size) \ + ((page_size) / (elem_size)) #define UNUSABLE_ELEMS_PER_PAGE(elem_size, mode) \ (((mode) == QED_CHAIN_MODE_NEXT_PTR) ? \ (u8)(1 + ((sizeof(struct qed_chain_next) - 1) / (elem_size))) : \ 0) -#define USABLE_ELEMS_PER_PAGE(elem_size, mode) \ - ((u32)(ELEMS_PER_PAGE(elem_size) - \ +#define USABLE_ELEMS_PER_PAGE(elem_size, page_size, mode) \ + ((u32)(ELEMS_PER_PAGE((elem_size), (page_size)) - \ UNUSABLE_ELEMS_PER_PAGE((elem_size), (mode)))) -#define QED_CHAIN_PAGE_CNT(elem_cnt, elem_size, mode) \ - DIV_ROUND_UP((elem_cnt), USABLE_ELEMS_PER_PAGE((elem_size), (mode))) +#define QED_CHAIN_PAGE_CNT(elem_cnt, elem_size, page_size, mode) \ + DIV_ROUND_UP((elem_cnt), \ + USABLE_ELEMS_PER_PAGE((elem_size), (page_size), (mode))) #define is_chain_u16(p) \ ((p)->cnt_type == QED_CHAIN_CNT_TYPE_U16) @@ -604,7 +609,7 @@ static inline void qed_chain_pbl_zero_mem(struct qed_chain *p_chain) for (i = 0; i < page_cnt; i++) memset(p_chain->pbl.pp_addr_tbl[i].virt_addr, 0, - QED_CHAIN_PAGE_SIZE); + p_chain->page_size); } #endif From patchwork Wed Jul 22 22:10:40 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 1334218 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=marvell.com Authentication-Results: ozlabs.org; dkim=fail reason="key not found in DNS" header.d=marvell.com header.i=@marvell.com header.a=rsa-sha256 header.s=pfpt0818 header.b=v3wgaAQ8; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4BBqTH0ylZz9sSn for ; Thu, 23 Jul 2020 08:12:35 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2387434AbgGVWMc (ORCPT ); Wed, 22 Jul 2020 18:12:32 -0400 Received: from mx0a-0016f401.pphosted.com ([67.231.148.174]:51578 "EHLO mx0b-0016f401.pphosted.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S2387421AbgGVWMb (ORCPT ); Wed, 22 Jul 2020 18:12:31 -0400 Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 06MM6NwB027312; Wed, 22 Jul 2020 15:12:15 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=YdH18anOnOfotz7dCcu2Qk951gXqE+vLtMEFYOTUaAo=; b=v3wgaAQ84SK5P5+JcGAiY1vnuPQIEzL0YI51XGogO7TPgW69XqNhLdfwLUIebd01pGpN dVBC88Sq4UX+tz0OkkMwP1/8vnLZyPnaJjinr40rAIFYpSU92CkQTV494pdaSQ/jD1QZ 
bAenH+UPntjNqQnAeEwbYIva53io+NU46bgw2P3HGEdu3B1uPPADlDnS4SoVDqNU0YDE 6ribthwqsCp/lRNU03tD7atV07jTYRlkx7LzdwBLTmDZdZuj6OQrbza3WVsQtQ3dejns H/XvuMXM8WTTmuygJFyLwUm3PBX/aqB61HzwnIR9dNq4Jq5LF8R5Cj6TUkpWtWh8KHOf Ww== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0a-0016f401.pphosted.com with ESMTP id 32bxentx93-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Wed, 22 Jul 2020 15:12:15 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 22 Jul 2020 15:12:13 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Wed, 22 Jul 2020 15:12:14 -0700 Received: from NN-LT0049.marvell.com (NN-LT0049.marvell.com [10.193.54.6]) by maili.marvell.com (Postfix) with ESMTP id 6C4513F703F; Wed, 22 Jul 2020 15:12:07 -0700 (PDT) From: Alexander Lobakin To: "David S. Miller" , Jakub Kicinski CC: Alexander Lobakin , Igor Russkikh , Michal Kalderon , "Ariel Elior" , Denis Bolotin , "Doug Ledford" , Jason Gunthorpe , "Alexei Starovoitov" , Daniel Borkmann , "Jesper Dangaard Brouer" , John Fastabend , Martin KaFai Lau , Song Liu , "Yonghong Song" , Andrii Nakryiko , KP Singh , , , , , Subject: [PATCH v2 net-next 10/15] qed: optimize common chain accessors Date: Thu, 23 Jul 2020 01:10:40 +0300 Message-ID: <20200722221045.5436-11-alobakin@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200722221045.5436-1-alobakin@marvell.com> References: <20200722221045.5436-1-alobakin@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.235,18.0.687 definitions=2020-07-22_16:2020-07-22,2020-07-22 signatures=0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Constify chain pointers and refactor qed_chain_get_elem_left{,u32}() a bit. 
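To make the arithmetic behind these helpers concrete, here is a standalone sketch of the u16 flavour (a reduced struct and userspace types; the NEXT_PTR correction for per-page unusable elements is omitted, and all values are illustrative). The cyclic producer index may wrap around U16_MAX before the consumer does, so the helper widens both indices to u32, un-wraps the producer, then subtracts:

#include <stdint.h>
#include <stdio.h>

struct chain {
	uint16_t prod_idx;	/* cyclic producer index */
	uint16_t cons_idx;	/* cyclic consumer index */
	uint32_t capacity;	/* usable elements in the whole chain */
};

static uint16_t chain_elem_left(const struct chain *c)
{
	uint32_t prod = c->prod_idx;
	uint32_t cons = c->cons_idx;
	uint16_t used;

	/* producer wrapped past U16_MAX while the consumer has not */
	if (prod < cons)
		prod += (uint32_t)UINT16_MAX + 1;

	used = (uint16_t)(prod - cons);

	return (uint16_t)(c->capacity - used);
}

int main(void)
{
	/* wrapped producer: (2 + 65536) - 65530 = 8 elements in flight */
	struct chain c = { .prod_idx = 2, .cons_idx = 65530, .capacity = 1024 };

	printf("left: %u\n", chain_elem_left(&c));	/* prints 1016 */
	return 0;
}

With prod_idx = 2 after a wrap and cons_idx = 65530, used = (2 + 65536) - 65530 = 8, so 1016 of the 1024 usable elements are still free, which matches what qed_chain_get_elem_left() computes for a PBL-mode chain of that capacity.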
Signed-off-by: Alexander Lobakin Signed-off-by: Igor Russkikh Signed-off-by: Michal Kalderon --- include/linux/qed/qed_chain.h | 60 +++++++++++++++++++---------------- 1 file changed, 33 insertions(+), 27 deletions(-) diff --git a/include/linux/qed/qed_chain.h b/include/linux/qed/qed_chain.h index 8a96c361cc19..434479e2ab65 100644 --- a/include/linux/qed/qed_chain.h +++ b/include/linux/qed/qed_chain.h @@ -182,73 +182,79 @@ struct qed_chain_init_params { ((p)->cnt_type == QED_CHAIN_CNT_TYPE_U32) /* Accessors */ -static inline u16 qed_chain_get_prod_idx(struct qed_chain *p_chain) + +static inline u16 qed_chain_get_prod_idx(const struct qed_chain *chain) +{ + return chain->u.chain16.prod_idx; +} + +static inline u16 qed_chain_get_cons_idx(const struct qed_chain *chain) { - return p_chain->u.chain16.prod_idx; + return chain->u.chain16.cons_idx; } -static inline u16 qed_chain_get_cons_idx(struct qed_chain *p_chain) +static inline u32 qed_chain_get_prod_idx_u32(const struct qed_chain *chain) { - return p_chain->u.chain16.cons_idx; + return chain->u.chain32.prod_idx; } -static inline u32 qed_chain_get_cons_idx_u32(struct qed_chain *p_chain) +static inline u32 qed_chain_get_cons_idx_u32(const struct qed_chain *chain) { - return p_chain->u.chain32.cons_idx; + return chain->u.chain32.cons_idx; } -static inline u16 qed_chain_get_elem_left(struct qed_chain *p_chain) +static inline u16 qed_chain_get_elem_left(const struct qed_chain *chain) { - u16 elem_per_page = p_chain->elem_per_page; - u32 prod = p_chain->u.chain16.prod_idx; - u32 cons = p_chain->u.chain16.cons_idx; + u32 prod = qed_chain_get_prod_idx(chain); + u32 cons = qed_chain_get_cons_idx(chain); + u16 elem_per_page = chain->elem_per_page; u16 used; if (prod < cons) prod += (u32)U16_MAX + 1; used = (u16)(prod - cons); - if (p_chain->mode == QED_CHAIN_MODE_NEXT_PTR) - used -= prod / elem_per_page - cons / elem_per_page; + if (chain->mode == QED_CHAIN_MODE_NEXT_PTR) + used -= (u16)(prod / elem_per_page - cons / elem_per_page); - return (u16)(p_chain->capacity - used); + return (u16)(chain->capacity - used); } -static inline u32 qed_chain_get_elem_left_u32(struct qed_chain *p_chain) +static inline u32 qed_chain_get_elem_left_u32(const struct qed_chain *chain) { - u16 elem_per_page = p_chain->elem_per_page; - u64 prod = p_chain->u.chain32.prod_idx; - u64 cons = p_chain->u.chain32.cons_idx; + u64 prod = qed_chain_get_prod_idx_u32(chain); + u64 cons = qed_chain_get_cons_idx_u32(chain); + u16 elem_per_page = chain->elem_per_page; u32 used; if (prod < cons) prod += (u64)U32_MAX + 1; used = (u32)(prod - cons); - if (p_chain->mode == QED_CHAIN_MODE_NEXT_PTR) + if (chain->mode == QED_CHAIN_MODE_NEXT_PTR) used -= (u32)(prod / elem_per_page - cons / elem_per_page); - return p_chain->capacity - used; + return chain->capacity - used; } -static inline u16 qed_chain_get_usable_per_page(struct qed_chain *p_chain) +static inline u16 qed_chain_get_usable_per_page(const struct qed_chain *chain) { - return p_chain->usable_per_page; + return chain->usable_per_page; } -static inline u8 qed_chain_get_unusable_per_page(struct qed_chain *p_chain) +static inline u8 qed_chain_get_unusable_per_page(const struct qed_chain *chain) { - return p_chain->elem_unusable; + return chain->elem_unusable; } -static inline u32 qed_chain_get_page_cnt(struct qed_chain *p_chain) +static inline u32 qed_chain_get_page_cnt(const struct qed_chain *chain) { - return p_chain->page_cnt; + return chain->page_cnt; } -static inline dma_addr_t qed_chain_get_pbl_phys(struct qed_chain *p_chain) 
+static inline dma_addr_t qed_chain_get_pbl_phys(const struct qed_chain *chain) { - return p_chain->pbl_sp.table_phys; + return chain->pbl_sp.table_phys; } /**
From patchwork Wed Jul 22 22:10:41 2020 X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 1334219 X-Patchwork-Delegate: davem@davemloft.net From: Alexander Lobakin Subject: [PATCH v2 net-next 11/15] qed: introduce qed_chain_get_elem_used{,_u32}() Date: Thu, 23 Jul 2020 01:10:41 +0300 Message-ID: <20200722221045.5436-12-alobakin@marvell.com> In-Reply-To: <20200722221045.5436-1-alobakin@marvell.com> References: <20200722221045.5436-1-alobakin@marvell.com> List-ID: X-Mailing-List: netdev@vger.kernel.org
Add reverse variants of qed_chain_get_elem_left{,_u32}() to be able to query the current chain occupancy. They will be used in the upcoming qede XDP_REDIRECT code. They share most of their logic with the existing functions, so the latter were reworked to be built on top of the new helpers.
Signed-off-by: Alexander Lobakin Signed-off-by: Igor Russkikh Signed-off-by: Michal Kalderon --- include/linux/qed/qed_chain.h | 18 ++++++++++++++---- 1 file changed, 14 insertions(+), 4 deletions(-)
diff --git a/include/linux/qed/qed_chain.h b/include/linux/qed/qed_chain.h index 434479e2ab65..4d58dc8943f0 100644 --- a/include/linux/qed/qed_chain.h +++ b/include/linux/qed/qed_chain.h @@ -203,7 +203,7 @@ static inline u32 qed_chain_get_cons_idx_u32(const struct qed_chain *chain) return chain->u.chain32.cons_idx; } -static inline u16 qed_chain_get_elem_left(const struct qed_chain *chain) +static inline u16 qed_chain_get_elem_used(const struct qed_chain *chain) { u32 prod = qed_chain_get_prod_idx(chain); u32 cons = qed_chain_get_cons_idx(chain); @@ -217,10 +217,15 @@ static inline u16 qed_chain_get_elem_left(const struct qed_chain *chain) if (chain->mode == QED_CHAIN_MODE_NEXT_PTR) used -= (u16)(prod / elem_per_page - cons / elem_per_page); - return (u16)(chain->capacity - used); + return used; } -static inline u32 qed_chain_get_elem_left_u32(const struct qed_chain *chain) +static inline u16 qed_chain_get_elem_left(const struct qed_chain *chain) +{ + return (u16)(chain->capacity - qed_chain_get_elem_used(chain)); +} + +static inline u32 qed_chain_get_elem_used_u32(const struct qed_chain *chain) { u64 prod = qed_chain_get_prod_idx_u32(chain); u64 cons = qed_chain_get_cons_idx_u32(chain); @@ -234,7 +239,12 @@ static inline u32 qed_chain_get_elem_left_u32(const struct qed_chain *chain) if (chain->mode == QED_CHAIN_MODE_NEXT_PTR) used -= (u32)(prod / elem_per_page - cons / elem_per_page); - return chain->capacity - used; + return used; +} + +static inline u32 qed_chain_get_elem_left_u32(const struct qed_chain *chain) +{ + return chain->capacity - qed_chain_get_elem_used_u32(chain); } static inline u16 qed_chain_get_usable_per_page(const struct qed_chain *chain)
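For illustration, here is a minimal standalone sketch (plain C with <stdint.h>, not the kernel code itself) of the occupancy math the new helper performs on a 16-bit producer/consumer pair:

#include <stdint.h>

/* ring_elems_used(): illustrative counterpart of qed_chain_get_elem_used().
 * The producer counter may have wrapped the 16-bit range while the
 * consumer has not yet, so the wrap is undone before subtracting.
 */
static uint16_t ring_elems_used(uint16_t prod_idx, uint16_t cons_idx)
{
	uint32_t prod = prod_idx;
	uint32_t cons = cons_idx;

	if (prod < cons)
		prod += (uint32_t)UINT16_MAX + 1;

	return (uint16_t)(prod - cons);
}

/* The "left" variant then becomes a one-liner on top of this, which is
 * exactly how qed_chain_get_elem_left() is collapsed in this patch:
 * left = capacity - used.
 */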
From patchwork Wed Jul 22 22:10:42 2020 X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 1334225 X-Patchwork-Delegate: davem@davemloft.net From: Alexander Lobakin Subject: [PATCH v2 net-next 12/15] qede: reformat several structures in "qede.h" Date: Thu, 23 Jul 2020 01:10:42 +0300 Message-ID: <20200722221045.5436-13-alobakin@marvell.com> In-Reply-To: <20200722221045.5436-1-alobakin@marvell.com> References: <20200722221045.5436-1-alobakin@marvell.com> List-ID: X-Mailing-List: netdev@vger.kernel.org
Make the file more readable and easier to extend with new fields. Misc: use IFNAMSIZ and netdev_name() instead of sizeof_field() and direct net_device::name dereferencing.
Signed-off-by: Alexander Lobakin Signed-off-by: Igor Russkikh Signed-off-by: Michal Kalderon --- drivers/net/ethernet/qlogic/qede/qede.h | 166 +++++++++++++----------- 1 file changed, 89 insertions(+), 77 deletions(-)
diff --git a/drivers/net/ethernet/qlogic/qede/qede.h b/drivers/net/ethernet/qlogic/qede/qede.h index f1d7f73de902..e8ed0bb94ee0 100644 --- a/drivers/net/ethernet/qlogic/qede/qede.h +++ b/drivers/net/ethernet/qlogic/qede/qede.h @@ -176,16 +176,17 @@ struct qede_dev { u32 dp_module; u8 dp_level; - unsigned long flags; -#define IS_VF(edev) (test_bit(QEDE_FLAGS_IS_VF, &(edev)->flags)) + unsigned long flags; +#define IS_VF(edev) test_bit(QEDE_FLAGS_IS_VF, \ + &(edev)->flags) const struct qed_eth_ops *ops; struct qede_ptp *ptp; u64 ptp_skip_txts; - struct qed_dev_eth_info dev_info; -#define QEDE_MAX_RSS_CNT(edev) ((edev)->dev_info.num_queues) -#define QEDE_MAX_TSS_CNT(edev) ((edev)->dev_info.num_queues) + struct qed_dev_eth_info dev_info; +#define QEDE_MAX_RSS_CNT(edev) ((edev)->dev_info.num_queues) +#define QEDE_MAX_TSS_CNT(edev) ((edev)->dev_info.num_queues) #define QEDE_IS_BB(edev) \ ((edev)->dev_info.common.dev_type == QED_DEV_TYPE_BB) #define QEDE_IS_AH(edev) \ @@ -198,14 +199,15 @@ struct qede_dev { u8 fp_num_rx; u16 req_queues; u16 num_queues; -#define QEDE_QUEUE_CNT(edev) ((edev)->num_queues) -#define QEDE_RSS_COUNT(edev) ((edev)->num_queues - (edev)->fp_num_tx) + +#define QEDE_QUEUE_CNT(edev) ((edev)->num_queues) +#define QEDE_RSS_COUNT(edev) ((edev)->num_queues - (edev)->fp_num_tx) #define QEDE_RX_QUEUE_IDX(edev, i) (i) -#define QEDE_TSS_COUNT(edev) ((edev)->num_queues - (edev)->fp_num_rx) +#define QEDE_TSS_COUNT(edev) ((edev)->num_queues - (edev)->fp_num_rx) struct qed_int_info int_info; - /* Smaller private varaiant of the RTNL lock */ + /* Smaller private variant of the RTNL lock */ struct mutex qede_lock; u32 state; /* Protected by qede_lock */ u16 rx_buf_size; @@ -226,22 +228,28 @@ struct qede_dev { SKB_DATA_ALIGN(sizeof(struct skb_shared_info))) struct qede_stats stats; -#define QEDE_RSS_INDIR_INITED BIT(0) -#define QEDE_RSS_KEY_INITED BIT(1) -#define QEDE_RSS_CAPS_INITED BIT(2) - u32 rss_params_inited; /* bit-field to track initialized rss params */ - u16 rss_ind_table[128]; - u32 rss_key[10]; - u8 rss_caps; - - u16 q_num_rx_buffers; /* Must be a power of two */ - u16 q_num_tx_buffers; /* Must be a power of two */ - - bool gro_disable; - struct list_head vlan_list; - u16 configured_vlans; - u16
non_configured_vlans; - bool accept_any_vlan; + + /* Bitfield to track initialized RSS params */ + u32 rss_params_inited; +#define QEDE_RSS_INDIR_INITED BIT(0) +#define QEDE_RSS_KEY_INITED BIT(1) +#define QEDE_RSS_CAPS_INITED BIT(2) + + u16 rss_ind_table[128]; + u32 rss_key[10]; + u8 rss_caps; + + /* Both must be a power of two */ + u16 q_num_rx_buffers; + u16 q_num_tx_buffers; + + bool gro_disable; + + struct list_head vlan_list; + u16 configured_vlans; + u16 non_configured_vlans; + bool accept_any_vlan; + struct delayed_work sp_task; unsigned long sp_flags; u16 vxlan_dst_port; @@ -252,14 +260,14 @@ struct qede_dev { struct qede_rdma_dev rdma_info; - struct bpf_prog *xdp_prog; + struct bpf_prog *xdp_prog; - unsigned long err_flags; -#define QEDE_ERR_IS_HANDLED 31 -#define QEDE_ERR_ATTN_CLR_EN 0 -#define QEDE_ERR_GET_DBG_INFO 1 -#define QEDE_ERR_IS_RECOVERABLE 2 -#define QEDE_ERR_WARN 3 + unsigned long err_flags; +#define QEDE_ERR_IS_HANDLED 31 +#define QEDE_ERR_ATTN_CLR_EN 0 +#define QEDE_ERR_GET_DBG_INFO 1 +#define QEDE_ERR_IS_RECOVERABLE 2 +#define QEDE_ERR_WARN 3 struct qede_dump_info dump_info; }; @@ -372,29 +380,30 @@ struct sw_tx_bd { }; struct sw_tx_xdp { - struct page *page; - dma_addr_t mapping; + struct page *page; + dma_addr_t mapping; }; struct qede_tx_queue { - u8 is_xdp; - bool is_legacy; - u16 sw_tx_cons; - u16 sw_tx_prod; - u16 num_tx_buffers; /* Slowpath only */ + u8 is_xdp; + bool is_legacy; + u16 sw_tx_cons; + u16 sw_tx_prod; + u16 num_tx_buffers; /* Slowpath only */ - u64 xmit_pkts; - u64 stopped_cnt; - u64 tx_mem_alloc_err; + u64 xmit_pkts; + u64 stopped_cnt; + u64 tx_mem_alloc_err; - __le16 *hw_cons_ptr; + __le16 *hw_cons_ptr; /* Needed for the mapping of packets */ - struct device *dev; + struct device *dev; + + void __iomem *doorbell_addr; + union db_prod tx_db; - void __iomem *doorbell_addr; - union db_prod tx_db; - int index; /* Slowpath only */ + int index; /* Slowpath only */ #define QEDE_TXQ_XDP_TO_IDX(edev, txq) ((txq)->index - \ QEDE_MAX_TSS_CNT(edev)) #define QEDE_TXQ_IDX_TO_XDP(edev, idx) ((idx) + QEDE_MAX_TSS_CNT(edev)) @@ -406,22 +415,22 @@ struct qede_tx_queue { #define QEDE_NDEV_TXQ_ID_TO_TXQ(edev, idx) \ (&((edev)->fp_array[QEDE_NDEV_TXQ_ID_TO_FP_ID(edev, idx)].txq \ [QEDE_NDEV_TXQ_ID_TO_TXQ_COS(edev, idx)])) -#define QEDE_FP_TC0_TXQ(fp) (&((fp)->txq[0])) +#define QEDE_FP_TC0_TXQ(fp) (&((fp)->txq[0])) /* Regular Tx requires skb + metadata for release purpose, * while XDP requires the pages and the mapped address. 
*/ union { - struct sw_tx_bd *skbs; - struct sw_tx_xdp *xdp; - } sw_tx_ring; + struct sw_tx_bd *skbs; + struct sw_tx_xdp *xdp; + } sw_tx_ring; - struct qed_chain tx_pbl; + struct qed_chain tx_pbl; /* Slowpath; Should be kept in end [unless missing padding] */ - void *handle; - u16 cos; - u16 ndev_txq_id; + void *handle; + u16 cos; + u16 ndev_txq_id; }; #define BD_UNMAP_ADDR(bd) HILO_U64(le32_to_cpu((bd)->addr.hi), \ @@ -435,32 +444,35 @@ struct qede_tx_queue { #define BD_UNMAP_LEN(bd) (le16_to_cpu((bd)->nbytes)) struct qede_fastpath { - struct qede_dev *edev; -#define QEDE_FASTPATH_TX BIT(0) -#define QEDE_FASTPATH_RX BIT(1) -#define QEDE_FASTPATH_XDP BIT(2) -#define QEDE_FASTPATH_COMBINED (QEDE_FASTPATH_TX | QEDE_FASTPATH_RX) - u8 type; - u8 id; - u8 xdp_xmit; - struct napi_struct napi; - struct qed_sb_info *sb_info; - struct qede_rx_queue *rxq; - struct qede_tx_queue *txq; - struct qede_tx_queue *xdp_tx; - -#define VEC_NAME_SIZE (sizeof_field(struct net_device, name) + 8) - char name[VEC_NAME_SIZE]; + struct qede_dev *edev; + + u8 type; +#define QEDE_FASTPATH_TX BIT(0) +#define QEDE_FASTPATH_RX BIT(1) +#define QEDE_FASTPATH_XDP BIT(2) +#define QEDE_FASTPATH_COMBINED (QEDE_FASTPATH_TX | QEDE_FASTPATH_RX) + + u8 id; + + u8 xdp_xmit; + + struct napi_struct napi; + struct qed_sb_info *sb_info; + struct qede_rx_queue *rxq; + struct qede_tx_queue *txq; + struct qede_tx_queue *xdp_tx; + + char name[IFNAMSIZ + 8]; }; /* Debug print definitions */ -#define DP_NAME(edev) ((edev)->ndev->name) +#define DP_NAME(edev) netdev_name((edev)->ndev) -#define XMIT_PLAIN 0 -#define XMIT_L4_CSUM BIT(0) -#define XMIT_LSO BIT(1) -#define XMIT_ENC BIT(2) -#define XMIT_ENC_GSO_L4_CSUM BIT(3) +#define XMIT_PLAIN 0 +#define XMIT_L4_CSUM BIT(0) +#define XMIT_LSO BIT(1) +#define XMIT_ENC BIT(2) +#define XMIT_ENC_GSO_L4_CSUM BIT(3) #define QEDE_CSUM_ERROR BIT(0) #define QEDE_CSUM_UNNECESSARY BIT(1)
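As a side note on the IFNAMSIZ/netdev_name() change above, a short sketch of the pattern the patch switches to (fill_fp_name() is a hypothetical helper for illustration, not part of the patch; kernel context assumed):

#include <linux/netdevice.h>

/* Size name buffers from IFNAMSIZ plus a suffix, and read the name
 * through netdev_name() instead of dereferencing net_device::name.
 */
static void fill_fp_name(char *buf, size_t len, struct net_device *ndev,
			 int queue_id)
{
	/* buf is expected to be at least IFNAMSIZ + 8 bytes */
	snprintf(buf, len, "%s-fp-%d", netdev_name(ndev), queue_id);
}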
From patchwork Wed Jul 22 22:10:43 2020 X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 1334223 X-Patchwork-Delegate: davem@davemloft.net From: Alexander Lobakin Subject: [PATCH v2 net-next 13/15] qede: reformat net_device_ops declarations Date: Thu, 23 Jul 2020 01:10:43 +0300 Message-ID: <20200722221045.5436-14-alobakin@marvell.com> In-Reply-To: <20200722221045.5436-1-alobakin@marvell.com> References: <20200722221045.5436-1-alobakin@marvell.com> List-ID: X-Mailing-List: bpf@vger.kernel.org
Correct the indentation of the net_device_ops declarations for a neater look.
Signed-off-by: Alexander Lobakin Signed-off-by: Igor Russkikh Signed-off-by: Michal Kalderon --- drivers/net/ethernet/qlogic/qede/qede_main.c | 122 +++++++++---------- 1 file changed, 61 insertions(+), 61 deletions(-) diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c index b5a95f165520..92bcdfa27961 100644 --- a/drivers/net/ethernet/qlogic/qede/qede_main.c +++ b/drivers/net/ethernet/qlogic/qede/qede_main.c @@ -639,79 +639,79 @@ qede_setup_tc_offload(struct net_device *dev, enum tc_setup_type type, } static const struct net_device_ops qede_netdev_ops = { - .ndo_open = qede_open, - .ndo_stop = qede_close, - .ndo_start_xmit = qede_start_xmit, - .ndo_select_queue = qede_select_queue, - .ndo_set_rx_mode = qede_set_rx_mode, - .ndo_set_mac_address = qede_set_mac_addr, - .ndo_validate_addr = eth_validate_addr, - .ndo_change_mtu = qede_change_mtu, - .ndo_do_ioctl = qede_ioctl, - .ndo_tx_timeout = qede_tx_timeout, + .ndo_open = qede_open, + .ndo_stop = qede_close, + .ndo_start_xmit = qede_start_xmit, + .ndo_select_queue = qede_select_queue, + .ndo_set_rx_mode = qede_set_rx_mode, + .ndo_set_mac_address = qede_set_mac_addr, + .ndo_validate_addr = eth_validate_addr, + .ndo_change_mtu = qede_change_mtu, + .ndo_do_ioctl = qede_ioctl, + .ndo_tx_timeout = qede_tx_timeout, #ifdef CONFIG_QED_SRIOV - .ndo_set_vf_mac = qede_set_vf_mac, - .ndo_set_vf_vlan = qede_set_vf_vlan, - .ndo_set_vf_trust = qede_set_vf_trust, + .ndo_set_vf_mac = qede_set_vf_mac, + .ndo_set_vf_vlan = qede_set_vf_vlan, + .ndo_set_vf_trust = qede_set_vf_trust, #endif - .ndo_vlan_rx_add_vid = qede_vlan_rx_add_vid, - .ndo_vlan_rx_kill_vid = qede_vlan_rx_kill_vid, - .ndo_fix_features = qede_fix_features, - .ndo_set_features = qede_set_features, - .ndo_get_stats64 = qede_get_stats64, + .ndo_vlan_rx_add_vid = qede_vlan_rx_add_vid, + .ndo_vlan_rx_kill_vid = qede_vlan_rx_kill_vid, + .ndo_fix_features = qede_fix_features, + .ndo_set_features = qede_set_features, + .ndo_get_stats64 = qede_get_stats64, #ifdef CONFIG_QED_SRIOV - .ndo_set_vf_link_state = qede_set_vf_link_state, - .ndo_set_vf_spoofchk = qede_set_vf_spoofchk, - .ndo_get_vf_config = qede_get_vf_config, - .ndo_set_vf_rate = qede_set_vf_rate, + .ndo_set_vf_link_state = qede_set_vf_link_state, + .ndo_set_vf_spoofchk = qede_set_vf_spoofchk, + .ndo_get_vf_config = qede_get_vf_config, + .ndo_set_vf_rate = qede_set_vf_rate, #endif - .ndo_udp_tunnel_add = udp_tunnel_nic_add_port, - .ndo_udp_tunnel_del = udp_tunnel_nic_del_port, - .ndo_features_check = qede_features_check, - .ndo_bpf = qede_xdp, + .ndo_udp_tunnel_add = udp_tunnel_nic_add_port, + .ndo_udp_tunnel_del = udp_tunnel_nic_del_port, + .ndo_features_check = qede_features_check, + .ndo_bpf = qede_xdp, #ifdef CONFIG_RFS_ACCEL - .ndo_rx_flow_steer = qede_rx_flow_steer, + .ndo_rx_flow_steer = qede_rx_flow_steer, #endif - .ndo_setup_tc = qede_setup_tc_offload, + .ndo_setup_tc = qede_setup_tc_offload, }; static const struct net_device_ops qede_netdev_vf_ops = { - .ndo_open = qede_open, - .ndo_stop = qede_close, - .ndo_start_xmit = qede_start_xmit, - .ndo_select_queue = qede_select_queue, - .ndo_set_rx_mode = qede_set_rx_mode, - .ndo_set_mac_address = qede_set_mac_addr, - .ndo_validate_addr = eth_validate_addr, - .ndo_change_mtu = qede_change_mtu, - .ndo_vlan_rx_add_vid = qede_vlan_rx_add_vid, - .ndo_vlan_rx_kill_vid = qede_vlan_rx_kill_vid, - .ndo_fix_features = qede_fix_features, - .ndo_set_features = qede_set_features, - .ndo_get_stats64 = qede_get_stats64, - .ndo_udp_tunnel_add = 
udp_tunnel_nic_add_port, - .ndo_udp_tunnel_del = udp_tunnel_nic_del_port, - .ndo_features_check = qede_features_check, + .ndo_open = qede_open, + .ndo_stop = qede_close, + .ndo_start_xmit = qede_start_xmit, + .ndo_select_queue = qede_select_queue, + .ndo_set_rx_mode = qede_set_rx_mode, + .ndo_set_mac_address = qede_set_mac_addr, + .ndo_validate_addr = eth_validate_addr, + .ndo_change_mtu = qede_change_mtu, + .ndo_vlan_rx_add_vid = qede_vlan_rx_add_vid, + .ndo_vlan_rx_kill_vid = qede_vlan_rx_kill_vid, + .ndo_fix_features = qede_fix_features, + .ndo_set_features = qede_set_features, + .ndo_get_stats64 = qede_get_stats64, + .ndo_udp_tunnel_add = udp_tunnel_nic_add_port, + .ndo_udp_tunnel_del = udp_tunnel_nic_del_port, + .ndo_features_check = qede_features_check, }; static const struct net_device_ops qede_netdev_vf_xdp_ops = { - .ndo_open = qede_open, - .ndo_stop = qede_close, - .ndo_start_xmit = qede_start_xmit, - .ndo_select_queue = qede_select_queue, - .ndo_set_rx_mode = qede_set_rx_mode, - .ndo_set_mac_address = qede_set_mac_addr, - .ndo_validate_addr = eth_validate_addr, - .ndo_change_mtu = qede_change_mtu, - .ndo_vlan_rx_add_vid = qede_vlan_rx_add_vid, - .ndo_vlan_rx_kill_vid = qede_vlan_rx_kill_vid, - .ndo_fix_features = qede_fix_features, - .ndo_set_features = qede_set_features, - .ndo_get_stats64 = qede_get_stats64, - .ndo_udp_tunnel_add = udp_tunnel_nic_add_port, - .ndo_udp_tunnel_del = udp_tunnel_nic_del_port, - .ndo_features_check = qede_features_check, - .ndo_bpf = qede_xdp, + .ndo_open = qede_open, + .ndo_stop = qede_close, + .ndo_start_xmit = qede_start_xmit, + .ndo_select_queue = qede_select_queue, + .ndo_set_rx_mode = qede_set_rx_mode, + .ndo_set_mac_address = qede_set_mac_addr, + .ndo_validate_addr = eth_validate_addr, + .ndo_change_mtu = qede_change_mtu, + .ndo_vlan_rx_add_vid = qede_vlan_rx_add_vid, + .ndo_vlan_rx_kill_vid = qede_vlan_rx_kill_vid, + .ndo_fix_features = qede_fix_features, + .ndo_set_features = qede_set_features, + .ndo_get_stats64 = qede_get_stats64, + .ndo_udp_tunnel_add = udp_tunnel_nic_add_port, + .ndo_udp_tunnel_del = udp_tunnel_nic_del_port, + .ndo_features_check = qede_features_check, + .ndo_bpf = qede_xdp, }; /* -------------------------------------------------------------------------
From patchwork Wed Jul 22 22:10:44 2020 X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 1334226 X-Patchwork-Delegate: davem@davemloft.net From: Alexander Lobakin Subject: [PATCH v2 net-next 14/15] qede: refactor XDP Tx processing Date: Thu, 23 Jul 2020 01:10:44 +0300 Message-ID: <20200722221045.5436-15-alobakin@marvell.com> In-Reply-To: <20200722221045.5436-1-alobakin@marvell.com> References: <20200722221045.5436-1-alobakin@marvell.com> List-ID: X-Mailing-List: netdev@vger.kernel.org
The current XDP Tx logic is suboptimal and can't be reused for the XDP_REDIRECT path. Make qede_xdp_{tx_int,xmit}() more generic and efficient to allow for future expansion. Misc: use unlikely() hints where appropriate and replace the "Fall through" comments with the fallthrough pseudo-keyword.
Signed-off-by: Alexander Lobakin Signed-off-by: Igor Russkikh Signed-off-by: Michal Kalderon --- drivers/net/ethernet/qlogic/qede/qede.h | 1 + drivers/net/ethernet/qlogic/qede/qede_fp.c | 89 +++++++++++----------- 2 files changed, 45 insertions(+), 45 deletions(-) diff --git a/drivers/net/ethernet/qlogic/qede/qede.h b/drivers/net/ethernet/qlogic/qede/qede.h index e8ed0bb94ee0..308c66a5f98f 100644 --- a/drivers/net/ethernet/qlogic/qede/qede.h +++ b/drivers/net/ethernet/qlogic/qede/qede.h @@ -455,6 +455,7 @@ struct qede_fastpath { u8 id; u8 xdp_xmit; +#define QEDE_XDP_TX BIT(0) struct napi_struct napi; struct qed_sb_info *sb_info; diff --git a/drivers/net/ethernet/qlogic/qede/qede_fp.c b/drivers/net/ethernet/qlogic/qede/qede_fp.c index 1c4ece0713f8..c80bf6d37b89 100644 --- a/drivers/net/ethernet/qlogic/qede/qede_fp.c +++ b/drivers/net/ethernet/qlogic/qede/qede_fp.c @@ -302,48 +302,37 @@ static inline void qede_update_tx_producer(struct qede_tx_queue *txq) wmb(); } -static int qede_xdp_xmit(struct qede_dev *edev, struct qede_fastpath *fp, - struct sw_rx_data *metadata, u16 padding, u16 length) +static int qede_xdp_xmit(struct qede_tx_queue *txq, dma_addr_t dma, u16 pad, + u16 len, struct page *page) { - struct qede_tx_queue *txq = fp->xdp_tx; - struct eth_tx_1st_bd *first_bd; - u16 idx = txq->sw_tx_prod; + struct eth_tx_1st_bd *bd; + struct sw_tx_xdp *xdp; u16 val; - if (!qed_chain_get_elem_left(&txq->tx_pbl)) { + if (unlikely(qed_chain_get_elem_used(&txq->tx_pbl) >= + txq->num_tx_buffers)) { txq->stopped_cnt++; return -ENOMEM; } - first_bd = (struct eth_tx_1st_bd *)qed_chain_produce(&txq->tx_pbl); + bd = qed_chain_produce(&txq->tx_pbl); + bd->data.nbds = 1; + bd->data.bd_flags.bitfields = BIT(ETH_TX_1ST_BD_FLAGS_START_BD_SHIFT); - memset(first_bd, 0, sizeof(*first_bd)); - first_bd->data.bd_flags.bitfields = - BIT(ETH_TX_1ST_BD_FLAGS_START_BD_SHIFT); - - val = (length & ETH_TX_DATA_1ST_BD_PKT_LEN_MASK) << + val = (len & ETH_TX_DATA_1ST_BD_PKT_LEN_MASK) << ETH_TX_DATA_1ST_BD_PKT_LEN_SHIFT; - first_bd->data.bitfields |= cpu_to_le16(val); - first_bd->data.nbds = 1; + bd->data.bitfields = cpu_to_le16(val); /* We can safely ignore the offset, as it's 0 for XDP */ - BD_SET_UNMAP_ADDR_LEN(first_bd, metadata->mapping + padding, length); + BD_SET_UNMAP_ADDR_LEN(bd, dma + pad, len); - /* Synchronize the buffer back to device, as program [probably] - * has changed it. 
- */ - dma_sync_single_for_device(&edev->pdev->dev, - metadata->mapping + padding, - length, PCI_DMA_TODEVICE); + xdp = txq->sw_tx_ring.xdp + txq->sw_tx_prod; + xdp->mapping = dma; + xdp->page = page; - txq->sw_tx_ring.xdp[idx].page = metadata->data; - txq->sw_tx_ring.xdp[idx].mapping = metadata->mapping; txq->sw_tx_prod = (txq->sw_tx_prod + 1) % txq->num_tx_buffers; - /* Mark the fastpath for future XDP doorbell */ - fp->xdp_xmit = 1; - return 0; } @@ -362,20 +351,21 @@ int qede_txq_has_work(struct qede_tx_queue *txq) static void qede_xdp_tx_int(struct qede_dev *edev, struct qede_tx_queue *txq) { - u16 hw_bd_cons, idx; + struct sw_tx_xdp *xdp_info, *xdp_arr = txq->sw_tx_ring.xdp; + struct device *dev = &edev->pdev->dev; + u16 hw_bd_cons; hw_bd_cons = le16_to_cpu(*txq->hw_cons_ptr); barrier(); while (hw_bd_cons != qed_chain_get_cons_idx(&txq->tx_pbl)) { - qed_chain_consume(&txq->tx_pbl); - idx = txq->sw_tx_cons; + xdp_info = xdp_arr + txq->sw_tx_cons; - dma_unmap_page(&edev->pdev->dev, - txq->sw_tx_ring.xdp[idx].mapping, - PAGE_SIZE, DMA_BIDIRECTIONAL); - __free_page(txq->sw_tx_ring.xdp[idx].page); + dma_unmap_page(dev, xdp_info->mapping, PAGE_SIZE, + DMA_BIDIRECTIONAL); + __free_page(xdp_info->page); + qed_chain_consume(&txq->tx_pbl); txq->sw_tx_cons = (txq->sw_tx_cons + 1) % txq->num_tx_buffers; txq->xmit_pkts++; } @@ -1064,32 +1054,39 @@ static bool qede_rx_xdp(struct qede_dev *edev, switch (act) { case XDP_TX: /* We need the replacement buffer before transmit. */ - if (qede_alloc_rx_buffer(rxq, true)) { + if (unlikely(qede_alloc_rx_buffer(rxq, true))) { qede_recycle_rx_bd_ring(rxq, 1); + trace_xdp_exception(edev->ndev, prog, act); - return false; + break; } /* Now if there's a transmission problem, we'd still have to * throw current buffer, as replacement was already allocated. 
*/ - if (qede_xdp_xmit(edev, fp, bd, *data_offset, *len)) { - dma_unmap_page(rxq->dev, bd->mapping, - PAGE_SIZE, DMA_BIDIRECTIONAL); + if (unlikely(qede_xdp_xmit(fp->xdp_tx, bd->mapping, + *data_offset, *len, bd->data))) { + dma_unmap_page(rxq->dev, bd->mapping, PAGE_SIZE, + rxq->data_direction); __free_page(bd->data); + trace_xdp_exception(edev->ndev, prog, act); + } else { + dma_sync_single_for_device(rxq->dev, + bd->mapping + *data_offset, + *len, rxq->data_direction); + fp->xdp_xmit |= QEDE_XDP_TX; } /* Regardless, we've consumed an Rx BD */ qede_rx_bd_ring_consume(rxq); - return false; - + break; default: bpf_warn_invalid_xdp_action(act); - /* Fall through */ + fallthrough; case XDP_ABORTED: trace_xdp_exception(edev->ndev, prog, act); - /* Fall through */ + fallthrough; case XDP_DROP: qede_recycle_rx_bd_ring(rxq, cqe->bd_num); } @@ -1353,6 +1350,9 @@ int qede_poll(struct napi_struct *napi, int budget) napi); struct qede_dev *edev = fp->edev; int rx_work_done = 0; + u16 xdp_prod; + + fp->xdp_xmit = 0; if (likely(fp->type & QEDE_FASTPATH_TX)) { int cos; @@ -1380,10 +1380,9 @@ int qede_poll(struct napi_struct *napi, int budget) } } - if (fp->xdp_xmit) { - u16 xdp_prod = qed_chain_get_prod_idx(&fp->xdp_tx->tx_pbl); + if (fp->xdp_xmit & QEDE_XDP_TX) { + xdp_prod = qed_chain_get_prod_idx(&fp->xdp_tx->tx_pbl); - fp->xdp_xmit = 0; fp->xdp_tx->tx_db.data.bd_prod = cpu_to_le16(xdp_prod); qede_update_tx_producer(fp->xdp_tx); }
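On the unlikely() hints added above: a standalone sketch of what such branch-prediction hints expand to (my_unlikely is an illustrative stand-in; the kernel's own likely()/unlikely() live in <linux/compiler.h>):

/* __builtin_expect() tells GCC/Clang which way the branch usually
 * goes, so the hot path is laid out as the fall-through case.
 */
#define my_unlikely(x)	__builtin_expect(!!(x), 0)

static int xdp_ring_reserve(unsigned int used, unsigned int capacity)
{
	if (my_unlikely(used >= capacity))
		return -1;	/* ring full: cold error path */

	return 0;		/* hot path */
}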
From patchwork Wed Jul 22 22:10:45 2020 X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 1334228 X-Patchwork-Delegate: davem@davemloft.net From: Alexander Lobakin Subject: [PATCH v2 net-next 15/15] qede: add .ndo_xdp_xmit() and XDP_REDIRECT support Date: Thu, 23 Jul 2020 01:10:45 +0300 Message-ID: <20200722221045.5436-16-alobakin@marvell.com> In-Reply-To: <20200722221045.5436-1-alobakin@marvell.com> References: <20200722221045.5436-1-alobakin@marvell.com> List-ID: X-Mailing-List: netdev@vger.kernel.org
Add XDP_REDIRECT case handling and the corresponding NDO to support redirecting XDP frames. This also includes registering the driver's memory model (currently order-0 page mode) with the BPF subsystem. The total number of XDP queues is usually 1:1 with the number of Rx queues.
Signed-off-by: Alexander Lobakin Signed-off-by: Igor Russkikh Signed-off-by: Michal Kalderon --- drivers/net/ethernet/qlogic/qede/qede.h | 8 ++ drivers/net/ethernet/qlogic/qede/qede_fp.c | 97 +++++++++++++++++++- drivers/net/ethernet/qlogic/qede/qede_main.c | 18 ++++ 3 files changed, 118 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/qlogic/qede/qede.h b/drivers/net/ethernet/qlogic/qede/qede.h index 308c66a5f98f..803c1fcca8ad 100644 --- a/drivers/net/ethernet/qlogic/qede/qede.h +++ b/drivers/net/ethernet/qlogic/qede/qede.h @@ -199,6 +199,7 @@ struct qede_dev { u8 fp_num_rx; u16 req_queues; u16 num_queues; + u16 total_xdp_queues; #define QEDE_QUEUE_CNT(edev) ((edev)->num_queues) #define QEDE_RSS_COUNT(edev) ((edev)->num_queues - (edev)->fp_num_tx) @@ -381,6 +382,7 @@ struct sw_tx_bd { struct sw_tx_xdp { struct page *page; + struct xdp_frame *xdpf; dma_addr_t mapping; }; @@ -403,6 +405,9 @@ struct qede_tx_queue { void __iomem *doorbell_addr; union db_prod tx_db; + /* Spinlock for XDP queues in case of XDP_REDIRECT */ + spinlock_t xdp_tx_lock; + int index; /* Slowpath only */ #define QEDE_TXQ_XDP_TO_IDX(edev, txq) ((txq)->index - \ QEDE_MAX_TSS_CNT(edev)) @@ -456,6 +461,7 @@ struct qede_fastpath { u8 xdp_xmit; #define QEDE_XDP_TX BIT(0) +#define QEDE_XDP_REDIRECT BIT(1) struct napi_struct napi; struct qed_sb_info *sb_info; @@ -516,6 +522,8 @@ struct qede_reload_args { /* Datapath functions definition */ netdev_tx_t qede_start_xmit(struct sk_buff *skb, struct net_device *ndev); +int qede_xdp_transmit(struct net_device *dev, int n_frames, + struct xdp_frame **frames, u32 flags); u16 qede_select_queue(struct net_device *dev, struct sk_buff *skb, struct net_device *sb_dev); netdev_features_t qede_features_check(struct sk_buff *skb, diff --git a/drivers/net/ethernet/qlogic/qede/qede_fp.c b/drivers/net/ethernet/qlogic/qede/qede_fp.c index c80bf6d37b89..a2494bf85007 100644 --- a/drivers/net/ethernet/qlogic/qede/qede_fp.c +++ b/drivers/net/ethernet/qlogic/qede/qede_fp.c @@ -303,7 +303,7 @@ static inline void qede_update_tx_producer(struct qede_tx_queue *txq) } static int qede_xdp_xmit(struct qede_tx_queue *txq, dma_addr_t dma, u16 pad, - u16 len, struct page *page) + u16 len, struct page *page, struct xdp_frame *xdpf) { struct eth_tx_1st_bd *bd; struct sw_tx_xdp *xdp; @@ -330,12 +330,66 @@ static int qede_xdp_xmit(struct qede_tx_queue *txq, dma_addr_t dma, u16 pad, xdp = txq->sw_tx_ring.xdp + txq->sw_tx_prod; xdp->mapping = dma; xdp->page = page; + xdp->xdpf = xdpf; txq->sw_tx_prod = (txq->sw_tx_prod + 1) % txq->num_tx_buffers; return 0; } +int qede_xdp_transmit(struct net_device *dev, int n_frames, + struct xdp_frame **frames, u32 flags) +{ + struct qede_dev *edev = netdev_priv(dev); + struct device *dmadev = &edev->pdev->dev; + struct qede_tx_queue *xdp_tx; + struct xdp_frame *xdpf; + dma_addr_t mapping; + int i, drops = 0; + u16 xdp_prod; + + if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK)) + return -EINVAL; + + if (unlikely(!netif_running(dev))) + return -ENETDOWN; + + i = smp_processor_id() % edev->total_xdp_queues; + xdp_tx = edev->fp_array[i].xdp_tx; + + spin_lock(&xdp_tx->xdp_tx_lock); + + for (i = 0; i < n_frames; i++) { + xdpf = frames[i]; + + mapping = dma_map_single(dmadev, xdpf->data, xdpf->len, + DMA_TO_DEVICE); + if (unlikely(dma_mapping_error(dmadev, mapping))) { + xdp_return_frame_rx_napi(xdpf); + drops++; + + continue; + } + + if (unlikely(qede_xdp_xmit(xdp_tx, mapping, 0, xdpf->len, + NULL, xdpf))) { + xdp_return_frame_rx_napi(xdpf); + drops++; + } + } + + if (flags & 
XDP_XMIT_FLUSH) { + xdp_prod = qed_chain_get_prod_idx(&xdp_tx->tx_pbl); + + xdp_tx->tx_db.data.bd_prod = cpu_to_le16(xdp_prod); + qede_update_tx_producer(xdp_tx); + } + + spin_unlock(&xdp_tx->xdp_tx_lock); + + return n_frames - drops; +} + int qede_txq_has_work(struct qede_tx_queue *txq) { u16 hw_bd_cons; @@ -353,6 +407,7 @@ static void qede_xdp_tx_int(struct qede_dev *edev, struct qede_tx_queue *txq) { struct sw_tx_xdp *xdp_info, *xdp_arr = txq->sw_tx_ring.xdp; struct device *dev = &edev->pdev->dev; + struct xdp_frame *xdpf; u16 hw_bd_cons; hw_bd_cons = le16_to_cpu(*txq->hw_cons_ptr); @@ -360,10 +415,19 @@ static void qede_xdp_tx_int(struct qede_dev *edev, struct qede_tx_queue *txq) while (hw_bd_cons != qed_chain_get_cons_idx(&txq->tx_pbl)) { xdp_info = xdp_arr + txq->sw_tx_cons; + xdpf = xdp_info->xdpf; + + if (xdpf) { + dma_unmap_single(dev, xdp_info->mapping, xdpf->len, + DMA_TO_DEVICE); + xdp_return_frame(xdpf); - dma_unmap_page(dev, xdp_info->mapping, PAGE_SIZE, - DMA_BIDIRECTIONAL); - __free_page(xdp_info->page); + xdp_info->xdpf = NULL; + } else { + dma_unmap_page(dev, xdp_info->mapping, PAGE_SIZE, + DMA_BIDIRECTIONAL); + __free_page(xdp_info->page); + } qed_chain_consume(&txq->tx_pbl); txq->sw_tx_cons = (txq->sw_tx_cons + 1) % txq->num_tx_buffers; @@ -1065,7 +1129,8 @@ static bool qede_rx_xdp(struct qede_dev *edev, * throw current buffer, as replacement was already allocated. */ if (unlikely(qede_xdp_xmit(fp->xdp_tx, bd->mapping, - *data_offset, *len, bd->data))) { + *data_offset, *len, bd->data, + NULL))) { dma_unmap_page(rxq->dev, bd->mapping, PAGE_SIZE, rxq->data_direction); __free_page(bd->data); @@ -1079,6 +1144,25 @@ static bool qede_rx_xdp(struct qede_dev *edev, } /* Regardless, we've consumed an Rx BD */ + qede_rx_bd_ring_consume(rxq); + break; + case XDP_REDIRECT: + /* We need the replacement buffer before transmit. 
*/ + if (unlikely(qede_alloc_rx_buffer(rxq, true))) { + qede_recycle_rx_bd_ring(rxq, 1); + + trace_xdp_exception(edev->ndev, prog, act); + break; + } + + dma_unmap_page(rxq->dev, bd->mapping, PAGE_SIZE, + rxq->data_direction); + + if (unlikely(xdp_do_redirect(edev->ndev, &xdp, prog))) + DP_NOTICE(edev, "Failed to redirect the packet\n"); + else + fp->xdp_xmit |= QEDE_XDP_REDIRECT; + qede_rx_bd_ring_consume(rxq); break; default: @@ -1387,6 +1471,9 @@ int qede_poll(struct napi_struct *napi, int budget) qede_update_tx_producer(fp->xdp_tx); } + if (fp->xdp_xmit & QEDE_XDP_REDIRECT) + xdp_do_flush_map(); + return rx_work_done; } diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c index 92bcdfa27961..1aaae3203f5a 100644 --- a/drivers/net/ethernet/qlogic/qede/qede_main.c +++ b/drivers/net/ethernet/qlogic/qede/qede_main.c @@ -672,6 +672,7 @@ static const struct net_device_ops qede_netdev_ops = { #ifdef CONFIG_RFS_ACCEL .ndo_rx_flow_steer = qede_rx_flow_steer, #endif + .ndo_xdp_xmit = qede_xdp_transmit, .ndo_setup_tc = qede_setup_tc_offload, }; @@ -712,6 +713,7 @@ static const struct net_device_ops qede_netdev_vf_xdp_ops = { .ndo_udp_tunnel_del = udp_tunnel_nic_del_port, .ndo_features_check = qede_features_check, .ndo_bpf = qede_xdp, + .ndo_xdp_xmit = qede_xdp_transmit, }; /* ------------------------------------------------------------------------- @@ -1712,6 +1714,7 @@ static void qede_init_fp(struct qede_dev *edev) { int queue_id, rxq_index = 0, txq_index = 0; struct qede_fastpath *fp; + bool init_xdp = false; for_each_queue(queue_id) { fp = &edev->fp_array[queue_id]; @@ -1723,6 +1726,9 @@ static void qede_init_fp(struct qede_dev *edev) fp->xdp_tx->index = QEDE_TXQ_IDX_TO_XDP(edev, rxq_index); fp->xdp_tx->is_xdp = 1; + + spin_lock_init(&fp->xdp_tx->xdp_tx_lock); + init_xdp = true; } if (fp->type & QEDE_FASTPATH_RX) { @@ -1738,6 +1744,13 @@ static void qede_init_fp(struct qede_dev *edev) /* Driver have no error path from here */ WARN_ON(xdp_rxq_info_reg(&fp->rxq->xdp_rxq, edev->ndev, fp->rxq->rxq_id) < 0); + + if (xdp_rxq_info_reg_mem_model(&fp->rxq->xdp_rxq, + MEM_TYPE_PAGE_ORDER0, + NULL)) { + DP_NOTICE(edev, + "Failed to register XDP memory model\n"); + } } if (fp->type & QEDE_FASTPATH_TX) { @@ -1763,6 +1776,11 @@ static void qede_init_fp(struct qede_dev *edev) snprintf(fp->name, sizeof(fp->name), "%s-fp-%d", edev->ndev->name, queue_id); } + + if (init_xdp) { + edev->total_xdp_queues = QEDE_RSS_COUNT(edev); + DP_INFO(edev, "Total XDP queues: %u\n", edev->total_xdp_queues); + } } static int qede_set_real_num_queues(struct qede_dev *edev)
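To summarize the concurrency scheme this final patch introduces, a simplified sketch (kernel context assumed; names are illustrative, not the driver code itself): redirected frames can arrive on any CPU, so a Tx queue is picked by CPU id modulo the number of XDP queues and serialized with a per-queue spinlock, while the NAPI XDP_TX path stays lock-free on its own queue.

#include <linux/smp.h>
#include <linux/spinlock.h>

struct xdp_txq_sketch {
	spinlock_t lock;	/* taken only by .ndo_xdp_xmit() callers */
	/* ... ring state ... */
};

/* pick_xdp_txq() is illustrative, not from the patch: spread
 * .ndo_xdp_xmit() callers across the available XDP Tx queues.
 */
static struct xdp_txq_sketch *
pick_xdp_txq(struct xdp_txq_sketch *queues, unsigned int total_xdp_queues)
{
	return &queues[smp_processor_id() % total_xdp_queues];
}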