From patchwork Wed May 9 11:21:41 2018
X-Patchwork-Submitter: Michael Chan
X-Patchwork-Id: 910761
X-Patchwork-Delegate: davem@davemloft.net
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org
Subject: [PATCH net-next RFC 1/3] net: Add support to configure SR-IOV VF minimum and maximum queues.
Date: Wed, 9 May 2018 07:21:41 -0400
Message-Id: <1525864903-32619-2-git-send-email-michael.chan@broadcom.com>
In-Reply-To: <1525864903-32619-1-git-send-email-michael.chan@broadcom.com>
References: <1525864903-32619-1-git-send-email-michael.chan@broadcom.com>

VF queue resources are always limited, and there is currently no
infrastructure that allows the admin on the host to add or reduce queue
resources for any particular VF.  With an ever increasing number of VFs
being supported, it is desirable to let the admin configure queue
resources differently for individual VFs.  Some VFs may require more or
fewer queues due to different bandwidth requirements or a different
number of vCPUs in the VM.
This patch adds the infrastructure to do that by adding the
IFLA_VF_QUEUES netlink attribute and a new .ndo_set_vf_queues() to
net_device_ops.

Four parameters are exposed for each VF:

o min_tx_queues - Guaranteed or current tx queues assigned to the VF.

o max_tx_queues - Maximum but not necessarily guaranteed tx queues
  available to the VF.

o min_rx_queues - Guaranteed or current rx queues assigned to the VF.

o max_rx_queues - Maximum but not necessarily guaranteed rx queues
  available to the VF.

The "ip link set" command will subsequently be patched to support the
new operation to set the above parameters.

After the admin makes a change to the above parameters, the
corresponding VF will have a new range of channels to set using
"ethtool -L".

Signed-off-by: Michael Chan
---
 include/linux/if_link.h      |  4 ++++
 include/linux/netdevice.h    |  6 ++++++
 include/uapi/linux/if_link.h |  9 +++++++++
 net/core/rtnetlink.c         | 28 +++++++++++++++++++++++++---
 4 files changed, 44 insertions(+), 3 deletions(-)

diff --git a/include/linux/if_link.h b/include/linux/if_link.h
index 622658d..8e81121 100644
--- a/include/linux/if_link.h
+++ b/include/linux/if_link.h
@@ -29,5 +29,9 @@ struct ifla_vf_info {
 	__u32 rss_query_en;
 	__u32 trusted;
 	__be16 vlan_proto;
+	__u32 min_tx_queues;
+	__u32 max_tx_queues;
+	__u32 min_rx_queues;
+	__u32 max_rx_queues;
 };
 #endif /* _LINUX_IF_LINK_H */
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 03ed492..30a3caf 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -1023,6 +1023,8 @@ struct dev_ifalias {
 *	with PF and querying it may introduce a theoretical security risk.
 *	int (*ndo_set_vf_rss_query_en)(struct net_device *dev, int vf,
 *				       bool setting);
 *	int (*ndo_get_vf_port)(struct net_device *dev, int vf,
 *			       struct sk_buff *skb);
+*	int (*ndo_set_vf_queues)(struct net_device *dev, int vf, int min_txq,
+*				 int max_txq, int min_rxq, int max_rxq);
 *	int (*ndo_setup_tc)(struct net_device *dev, enum tc_setup_type type,
 *			    void *type_data);
 *	Called to setup any 'tc' scheduler, classifier or action on @dev.
@@ -1272,6 +1274,10 @@ struct net_device_ops {
 	int			(*ndo_set_vf_rss_query_en)(
 						   struct net_device *dev,
 						   int vf, bool setting);
+	int			(*ndo_set_vf_queues)(struct net_device *dev,
+						     int vf,
+						     int min_txq, int max_txq,
+						     int min_rxq, int max_rxq);
 	int			(*ndo_setup_tc)(struct net_device *dev,
 						enum tc_setup_type type,
 						void *type_data);
diff --git a/include/uapi/linux/if_link.h b/include/uapi/linux/if_link.h
index b852664..fc56a47 100644
--- a/include/uapi/linux/if_link.h
+++ b/include/uapi/linux/if_link.h
@@ -658,6 +658,7 @@ enum {
 	IFLA_VF_IB_NODE_GUID,	/* VF Infiniband node GUID */
 	IFLA_VF_IB_PORT_GUID,	/* VF Infiniband port GUID */
 	IFLA_VF_VLAN_LIST,	/* nested list of vlans, option for QinQ */
+	IFLA_VF_QUEUES,		/* Min and Max TX/RX queues */
 	__IFLA_VF_MAX,
 };

@@ -748,6 +749,14 @@ struct ifla_vf_trust {
 	__u32 setting;
 };

+struct ifla_vf_queues {
+	__u32 vf;
+	__u32 min_tx_queues;
+	__u32 max_tx_queues;
+	__u32 min_rx_queues;
+	__u32 max_rx_queues;
+};
+
 /* VF ports management section
 *
 *	Nested layout of set/get msg is:
diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
index 8080254..7cf3582 100644
--- a/net/core/rtnetlink.c
+++ b/net/core/rtnetlink.c
@@ -921,7 +921,8 @@ static inline int rtnl_vfinfo_size(const struct net_device *dev,
			 nla_total_size_64bit(sizeof(__u64)) +
			 /* IFLA_VF_STATS_TX_DROPPED */
			 nla_total_size_64bit(sizeof(__u64)) +
-			 nla_total_size(sizeof(struct ifla_vf_trust)));
+			 nla_total_size(sizeof(struct ifla_vf_trust)) +
+			 nla_total_size(sizeof(struct ifla_vf_queues)));
 		return size;
 	} else
 		return 0;
@@ -1181,6 +1182,7 @@ static noinline_for_stack int rtnl_fill_vfinfo(struct sk_buff *skb,
 	struct ifla_vf_vlan_info vf_vlan_info;
 	struct ifla_vf_spoofchk vf_spoofchk;
 	struct ifla_vf_tx_rate vf_tx_rate;
+	struct ifla_vf_queues vf_queues;
 	struct ifla_vf_stats vf_stats;
 	struct ifla_vf_trust vf_trust;
 	struct ifla_vf_vlan vf_vlan;
@@ -1217,7 +1219,8 @@ static noinline_for_stack int rtnl_fill_vfinfo(struct sk_buff *skb,
 	vf_spoofchk.vf =
 		vf_linkstate.vf =
 		vf_rss_query_en.vf =
-		vf_trust.vf = ivi.vf;
+		vf_trust.vf =
+		vf_queues.vf = ivi.vf;

 	memcpy(vf_mac.mac, ivi.mac, sizeof(ivi.mac));
 	vf_vlan.vlan = ivi.vlan;
@@ -1232,6 +1235,10 @@ static noinline_for_stack int rtnl_fill_vfinfo(struct sk_buff *skb,
 	vf_linkstate.link_state = ivi.linkstate;
 	vf_rss_query_en.setting = ivi.rss_query_en;
 	vf_trust.setting = ivi.trusted;
+	vf_queues.min_tx_queues = ivi.min_tx_queues;
+	vf_queues.max_tx_queues = ivi.max_tx_queues;
+	vf_queues.min_rx_queues = ivi.min_rx_queues;
+	vf_queues.max_rx_queues = ivi.max_rx_queues;
 	vf = nla_nest_start(skb, IFLA_VF_INFO);
 	if (!vf)
 		goto nla_put_vfinfo_failure;
@@ -1249,7 +1256,9 @@ static noinline_for_stack int rtnl_fill_vfinfo(struct sk_buff *skb,
		    sizeof(vf_rss_query_en),
		    &vf_rss_query_en) ||
	    nla_put(skb, IFLA_VF_TRUST,
-		    sizeof(vf_trust), &vf_trust))
+		    sizeof(vf_trust), &vf_trust) ||
+	    nla_put(skb, IFLA_VF_QUEUES,
+		    sizeof(vf_queues), &vf_queues))
 		goto nla_put_vf_failure;
 	vfvlanlist = nla_nest_start(skb, IFLA_VF_VLAN_LIST);
 	if (!vfvlanlist)
@@ -1706,6 +1715,7 @@ static int rtnl_fill_ifinfo(struct sk_buff *skb,
 	[IFLA_VF_TRUST]		= { .len = sizeof(struct ifla_vf_trust) },
 	[IFLA_VF_IB_NODE_GUID]	= { .len = sizeof(struct ifla_vf_guid) },
 	[IFLA_VF_IB_PORT_GUID]	= { .len = sizeof(struct ifla_vf_guid) },
+	[IFLA_VF_QUEUES]	= { .len = sizeof(struct ifla_vf_queues) },
 };

 static const struct nla_policy ifla_port_policy[IFLA_PORT_MAX+1] = {
@@ -2208,6 +2218,18 @@ static int do_setvfinfo(struct net_device *dev, struct nlattr **tb)
 		return handle_vf_guid(dev, ivt, IFLA_VF_IB_PORT_GUID);
 	}

+	if (tb[IFLA_VF_QUEUES]) {
+		struct ifla_vf_queues *ivq = nla_data(tb[IFLA_VF_QUEUES]);
+
+		err = -EOPNOTSUPP;
+		if (ops->ndo_set_vf_queues)
+			err = ops->ndo_set_vf_queues(dev, ivq->vf,
+					ivq->min_tx_queues, ivq->max_tx_queues,
+					ivq->min_rx_queues, ivq->max_rx_queues);
+		if (err < 0)
+			return err;
+	}
+
 	return err;
 }

From patchwork Wed May 9 11:21:42 2018
X-Patchwork-Submitter: Michael Chan
X-Patchwork-Id: 910764
X-Patchwork-Delegate: davem@davemloft.net
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org
Subject: [PATCH net-next RFC 2/3] bnxt_en: Store min/max tx/rx rings for individual VFs.
Date: Wed, 9 May 2018 07:21:42 -0400
Message-Id: <1525864903-32619-3-git-send-email-michael.chan@broadcom.com>
In-Reply-To: <1525864903-32619-1-git-send-email-michael.chan@broadcom.com>
References: <1525864903-32619-1-git-send-email-michael.chan@broadcom.com>

With the new infrastructure to configure queues differently for each
VF, we need to store the current min/max rx/tx rings for each VF.
Signed-off-by: Michael Chan
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.h       |  5 +++++
 drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c | 23 +++++++++++++++++++----
 2 files changed, 24 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 9b14eb6..2f5a23c 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -837,6 +837,10 @@ struct bnxt_vf_info {
 	u32	func_flags;	/* func cfg flags */
 	u32	min_tx_rate;
 	u32	max_tx_rate;
+	u16	min_tx_rings;
+	u16	max_tx_rings;
+	u16	min_rx_rings;
+	u16	max_rx_rings;
 	void	*hwrm_cmd_req_addr;
 	dma_addr_t	hwrm_cmd_req_dma_addr;
 };
@@ -1351,6 +1355,7 @@ struct bnxt {
 #ifdef CONFIG_BNXT_SRIOV
 	int			nr_vfs;
 	struct bnxt_vf_info	vf;
+	struct hwrm_func_vf_resource_cfg_input	vf_resc_cfg_input;
 	wait_queue_head_t	sriov_cfg_wait;
 	bool			sriov_cfg;
 #define BNXT_SRIOV_CFG_WAIT_TMO	msecs_to_jiffies(10000)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
index a649108..489e534 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
@@ -171,6 +171,10 @@ int bnxt_get_vf_config(struct net_device *dev, int vf_id,
 		ivi->linkstate = IFLA_VF_LINK_STATE_ENABLE;
 	else
 		ivi->linkstate = IFLA_VF_LINK_STATE_DISABLE;
+	ivi->min_tx_queues = vf->min_tx_rings;
+	ivi->max_tx_queues = vf->max_tx_rings;
+	ivi->min_rx_queues = vf->min_rx_rings;
+	ivi->max_rx_queues = vf->max_rx_rings;

 	return 0;
 }
@@ -498,6 +502,8 @@ static int bnxt_hwrm_func_vf_resc_cfg(struct bnxt *bp, int num_vfs)

 	mutex_lock(&bp->hwrm_cmd_lock);
 	for (i = 0; i < num_vfs; i++) {
+		struct bnxt_vf_info *vf = &pf->vf[i];
+
 		req.vf_id = cpu_to_le16(pf->first_vf_id + i);
 		rc = _hwrm_send_message(bp, &req, sizeof(req),
					HWRM_CMD_TIMEOUT);
@@ -506,7 +512,11 @@ static int bnxt_hwrm_func_vf_resc_cfg(struct bnxt *bp, int num_vfs)
 			break;
 		}
 		pf->active_vfs = i + 1;
-		pf->vf[i].fw_fid = pf->first_vf_id + i;
+		vf->fw_fid = pf->first_vf_id + i;
+		vf->min_tx_rings = le16_to_cpu(req.min_tx_rings);
+		vf->max_tx_rings = vf_tx_rings;
+		vf->min_rx_rings = le16_to_cpu(req.min_rx_rings);
+		vf->max_rx_rings = vf_rx_rings;
 	}
 	mutex_unlock(&bp->hwrm_cmd_lock);
 	if (pf->active_vfs) {
@@ -521,6 +531,7 @@ static int bnxt_hwrm_func_vf_resc_cfg(struct bnxt *bp, int num_vfs)
 		hw_resc->max_stat_ctxs -= le16_to_cpu(req.min_stat_ctx) * n;
 		hw_resc->max_vnics -= le16_to_cpu(req.min_vnics) * n;

+		memcpy(&bp->vf_resc_cfg_input, &req, sizeof(req));
 		rc = pf->active_vfs;
 	}
 	return rc;
@@ -585,6 +596,7 @@ static int bnxt_hwrm_func_cfg(struct bnxt *bp, int num_vfs)

 	mutex_lock(&bp->hwrm_cmd_lock);
 	for (i = 0; i < num_vfs; i++) {
+		struct bnxt_vf_info *vf = &pf->vf[i];
 		int vf_tx_rsvd = vf_tx_rings;

 		req.fid = cpu_to_le16(pf->first_vf_id + i);
@@ -593,12 +605,15 @@ static int bnxt_hwrm_func_cfg(struct bnxt *bp, int num_vfs)
 		if (rc)
 			break;
 		pf->active_vfs = i + 1;
-		pf->vf[i].fw_fid = le16_to_cpu(req.fid);
-		rc = __bnxt_hwrm_get_tx_rings(bp, pf->vf[i].fw_fid,
-					      &vf_tx_rsvd);
+		vf->fw_fid = le16_to_cpu(req.fid);
+		rc = __bnxt_hwrm_get_tx_rings(bp, vf->fw_fid, &vf_tx_rsvd);
 		if (rc)
 			break;
 		total_vf_tx_rings += vf_tx_rsvd;
+		vf->min_tx_rings = vf_tx_rsvd;
+		vf->max_tx_rings = vf_tx_rsvd;
+		vf->min_rx_rings = vf_rx_rings;
+		vf->max_rx_rings = vf_rx_rings;
 	}
 	mutex_unlock(&bp->hwrm_cmd_lock);
 	if (rc)

From patchwork Wed May 9 11:21:43 2018
X-Patchwork-Submitter: Michael Chan
X-Patchwork-Id: 910762
X-Patchwork-Delegate: davem@davemloft.net
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org
Subject: [PATCH net-next RFC 3/3] bnxt_en: Implement .ndo_set_vf_queues().
Date: Wed, 9 May 2018 07:21:43 -0400
Message-Id: <1525864903-32619-4-git-send-email-michael.chan@broadcom.com>
In-Reply-To: <1525864903-32619-1-git-send-email-michael.chan@broadcom.com>
References: <1525864903-32619-1-git-send-email-michael.chan@broadcom.com>

Implement .ndo_set_vf_queues() in the PF driver to configure the queue
parameters of individual VFs.  This allows the admin on the host to
increase or decrease queues for individual VFs.
Signed-off-by: Michael Chan
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c       |  1 +
 drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c | 67 +++++++++++++++++++++++++
 drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.h |  2 +
 3 files changed, 70 insertions(+)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index dfa0839..2ce9779 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -8373,6 +8373,7 @@ static int bnxt_swdev_port_attr_get(struct net_device *dev,
 	.ndo_set_vf_link_state	= bnxt_set_vf_link_state,
 	.ndo_set_vf_spoofchk	= bnxt_set_vf_spoofchk,
 	.ndo_set_vf_trust	= bnxt_set_vf_trust,
+	.ndo_set_vf_queues	= bnxt_set_vf_queues,
 #endif
 #ifdef CONFIG_NET_POLL_CONTROLLER
 	.ndo_poll_controller	= bnxt_poll_controller,
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
index 489e534..f0d938c 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
@@ -138,6 +138,73 @@ int bnxt_set_vf_trust(struct net_device *dev, int vf_id, bool trusted)
 	return 0;
 }

+static bool bnxt_param_ok(int new, u16 curr, u16 avail)
+{
+	int delta;
+
+	if (new <= curr)
+		return true;
+
+	delta = new - curr;
+	if (delta <= avail)
+		return true;
+	return false;
+}
+
+int bnxt_set_vf_queues(struct net_device *dev, int vf_id, int min_txq,
+		       int max_txq, int min_rxq, int max_rxq)
+{
+	struct hwrm_func_vf_resource_cfg_input req = {0};
+	struct bnxt *bp = netdev_priv(dev);
+	u16 avail_tx_rings, avail_rx_rings;
+	struct bnxt_hw_resc *hw_resc;
+	struct bnxt_vf_info *vf;
+	int rc;
+
+	if (bnxt_vf_ndo_prep(bp, vf_id))
+		return -EINVAL;
+
+	if (!(bp->flags & BNXT_FLAG_NEW_RM))
+		return -EOPNOTSUPP;
+
+	vf = &bp->pf.vf[vf_id];
+	hw_resc = &bp->hw_resc;
+
+	avail_tx_rings = hw_resc->max_tx_rings - bp->tx_nr_rings;
+	if (bp->flags & BNXT_FLAG_AGG_RINGS)
+		avail_rx_rings = hw_resc->max_rx_rings - bp->rx_nr_rings * 2;
+	else
+		avail_rx_rings = hw_resc->max_rx_rings - bp->rx_nr_rings;
+
+	if (!bnxt_param_ok(min_txq, vf->min_tx_rings, avail_tx_rings))
+		return -ENOBUFS;
+	if (!bnxt_param_ok(min_rxq, vf->min_rx_rings, avail_rx_rings))
+		return -ENOBUFS;
+	if (!bnxt_param_ok(max_txq, vf->max_tx_rings, avail_tx_rings))
+		return -ENOBUFS;
+	if (!bnxt_param_ok(max_rxq, vf->max_rx_rings, avail_rx_rings))
+		return -ENOBUFS;
+
+	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_VF_RESOURCE_CFG, -1, -1);
+	memcpy(&req, &bp->vf_resc_cfg_input, sizeof(req));
+	req.min_tx_rings = cpu_to_le16(min_txq);
+	req.min_rx_rings = cpu_to_le16(min_rxq);
+	req.max_tx_rings = cpu_to_le16(max_txq);
+	req.max_rx_rings = cpu_to_le16(max_rxq);
+	rc = hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
+	if (rc)
+		return -EIO;
+
+	hw_resc->max_tx_rings += vf->min_tx_rings;
+	hw_resc->max_rx_rings += vf->min_rx_rings;
+	vf->min_tx_rings = min_txq;
+	vf->max_tx_rings = max_txq;
+	vf->min_rx_rings = min_rxq;
+	vf->max_rx_rings = max_rxq;
+	hw_resc->max_tx_rings -= vf->min_tx_rings;
+	hw_resc->max_rx_rings -= vf->min_rx_rings;
+	return 0;
+}
+
 int bnxt_get_vf_config(struct net_device *dev, int vf_id,
		       struct ifla_vf_info *ivi)
 {
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.h
index e9b20cd..325b412 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.h
@@ -35,6 +35,8 @@ int bnxt_set_vf_link_state(struct net_device *, int, int);
 int bnxt_set_vf_spoofchk(struct net_device *, int, bool);
 int bnxt_set_vf_trust(struct net_device *dev, int vf_id, bool trust);
+int bnxt_set_vf_queues(struct net_device *dev, int vf_id, int min_txq,
+		       int max_txq, int min_rxq, int max_rxq);
 int bnxt_sriov_configure(struct pci_dev *pdev, int num_vfs);
 void bnxt_sriov_disable(struct bnxt *);
 void bnxt_hwrm_exec_fwd_req(struct bnxt *);