Patch Detail
get:
Show a patch.
patch:
Partially update a patch.
put:
Update a patch (full replacement).
GET /api/patches/808238/?format=api
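The JSON below can be retrieved programmatically from the same endpoint. A minimal sketch using only the Python standard library — the base URL and patch ID are taken from the request line above; everything else (function names, the `Accept` header choice) is illustrative:

```python
import json
from urllib.request import Request, urlopen

PATCHWORK = "http://patchwork.ozlabs.org"  # instance hosting this patch


def patch_url(patch_id: int) -> str:
    """Build the detail URL for a patch, matching the GET line above."""
    return f"{PATCHWORK}/api/patches/{patch_id}/"


def get_patch(patch_id: int) -> dict:
    """Fetch a patch's JSON detail from the Patchwork REST API."""
    req = Request(patch_url(patch_id),
                  headers={"Accept": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)
```

Calling `get_patch(808238)` would return a dict with the same keys shown in the response body below (`state`, `submitter`, `series`, `diff`, and so on).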
{ "id": 808238, "url": "http://patchwork.ozlabs.org/api/patches/808238/?format=api", "web_url": "http://patchwork.ozlabs.org/project/netdev/patch/1504186749-8926-9-git-send-email-lipeng321@huawei.com/", "project": { "id": 7, "url": "http://patchwork.ozlabs.org/api/projects/7/?format=api", "name": "Linux network development", "link_name": "netdev", "list_id": "netdev.vger.kernel.org", "list_email": "netdev@vger.kernel.org", "web_url": null, "scm_url": null, "webscm_url": null, "list_archive_url": "", "list_archive_url_format": "", "commit_url_format": "" }, "msgid": "<1504186749-8926-9-git-send-email-lipeng321@huawei.com>", "list_archive_url": null, "date": "2017-08-31T13:39:09", "name": "[net-next,8/8] net: hns3: reimplemmentation of pkt buffer allocation", "commit_ref": null, "pull_url": null, "state": "changes-requested", "archived": true, "hash": "b5e9a95c20f99e1c947904f8c84ce15fce9de51c", "submitter": { "id": 71468, "url": "http://patchwork.ozlabs.org/api/people/71468/?format=api", "name": "lipeng (Y)", "email": "lipeng321@huawei.com" }, "delegate": { "id": 34, "url": "http://patchwork.ozlabs.org/api/users/34/?format=api", "username": "davem", "first_name": "David", "last_name": "Miller", "email": "davem@davemloft.net" }, "mbox": "http://patchwork.ozlabs.org/project/netdev/patch/1504186749-8926-9-git-send-email-lipeng321@huawei.com/mbox/", "series": [ { "id": 823, "url": "http://patchwork.ozlabs.org/api/series/823/?format=api", "web_url": "http://patchwork.ozlabs.org/project/netdev/list/?series=823", "date": "2017-08-31T13:39:02", "name": "Bug fixes & Code improvements in HNS driver", "version": 1, "mbox": "http://patchwork.ozlabs.org/series/823/mbox/" } ], "comments": "http://patchwork.ozlabs.org/api/patches/808238/comments/", "check": "pending", "checks": "http://patchwork.ozlabs.org/api/patches/808238/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<netdev-owner@vger.kernel.org>", "X-Original-To": "patchwork-incoming@ozlabs.org", 
"Delivered-To": "patchwork-incoming@ozlabs.org", "Authentication-Results": "ozlabs.org;\n\tspf=none (mailfrom) smtp.mailfrom=vger.kernel.org\n\t(client-ip=209.132.180.67; helo=vger.kernel.org;\n\tenvelope-from=netdev-owner@vger.kernel.org;\n\treceiver=<UNKNOWN>)", "Received": [ "from vger.kernel.org (vger.kernel.org [209.132.180.67])\n\tby ozlabs.org (Postfix) with ESMTP id 3xjjTN0f7Qz9sMN\n\tfor <patchwork-incoming@ozlabs.org>;\n\tThu, 31 Aug 2017 23:12:24 +1000 (AEST)", "(majordomo@vger.kernel.org) by vger.kernel.org via listexpand\n\tid S1751736AbdHaNMM (ORCPT <rfc822;patchwork-incoming@ozlabs.org>);\n\tThu, 31 Aug 2017 09:12:12 -0400", "from szxga05-in.huawei.com ([45.249.212.191]:5074 \"EHLO\n\tszxga05-in.huawei.com\" rhost-flags-OK-OK-OK-OK) by vger.kernel.org\n\twith ESMTP id S1751286AbdHaNL0 (ORCPT\n\t<rfc822;netdev@vger.kernel.org>); Thu, 31 Aug 2017 09:11:26 -0400", "from 172.30.72.60 (EHLO DGGEMS404-HUB.china.huawei.com)\n\t([172.30.72.60])\n\tby dggrg05-dlp.huawei.com (MOS 4.4.6-GA FastPath queued)\n\twith ESMTP id DGI62433; Thu, 31 Aug 2017 21:11:23 +0800 (CST)", "from linux-ioko.site (10.71.200.31) by\n\tDGGEMS404-HUB.china.huawei.com (10.3.19.204) with Microsoft SMTP\n\tServer id 14.3.301.0; Thu, 31 Aug 2017 21:11:11 +0800" ], "From": "Lipeng <lipeng321@huawei.com>", "To": "<davem@davemloft.net>", "CC": "<netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,\n\t<linuxarm@huawei.com>, <yisen.zhuang@huawei.com>,\n\t<salil.mehta@huawei.com>, <lipeng321@huawei.com>", "Subject": "[PATCH net-next 8/8] net: hns3: reimplemmentation of pkt buffer\n\tallocation", "Date": "Thu, 31 Aug 2017 21:39:09 +0800", "Message-ID": "<1504186749-8926-9-git-send-email-lipeng321@huawei.com>", "X-Mailer": "git-send-email 1.9.1", "In-Reply-To": "<1504186749-8926-1-git-send-email-lipeng321@huawei.com>", "References": "<1504186749-8926-1-git-send-email-lipeng321@huawei.com>", "MIME-Version": "1.0", "Content-Type": "text/plain", "X-Originating-IP": "[10.71.200.31]", 
"X-CFilter-Loop": "Reflected", "X-Mirapoint-Virus-RAPID-Raw": "score=unknown(0),\n\trefid=str=0001.0A020204.59A80AFB.010C, ss=1, re=0.000, recu=0.000,\n\treip=0.000, cl=1, cld=1, fgs=0, ip=0.0.0.0,\n\tso=2014-11-16 11:51:01, dmn=2013-03-21 17:37:32", "X-Mirapoint-Loop-Id": "3764da7394cc6bbedb24038296715897", "Sender": "netdev-owner@vger.kernel.org", "Precedence": "bulk", "List-ID": "<netdev.vger.kernel.org>", "X-Mailing-List": "netdev@vger.kernel.org" }, "content": "Current implemmentation of buffer allocation in SSU do not meet\nthe requirement to do the buffer reallocation. This patch fixs\nthat in order to support buffer reallocation between Mac and\nPFC pause.\n\nSigned-off-by: Yunsheng Lin <linyunsheng@huawei.com>\nSigned-off-by: Lipeng <lipeng321@huawei.com>\n---\n .../net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h | 32 +-\n .../ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 368 +++++++++++----------\n .../ethernet/hisilicon/hns3/hns3pf/hclge_main.h | 5 +-\n .../net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c | 84 ++++-\n .../net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h | 9 +\n 5 files changed, 308 insertions(+), 190 deletions(-)", "diff": "diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h\nindex 5887418..26e8ca6 100644\n--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h\n+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h\n@@ -141,7 +141,7 @@ enum hclge_opcode_type {\n \n \t/* Packet buffer allocate command */\n \tHCLGE_OPC_TX_BUFF_ALLOC\t\t= 0x0901,\n-\tHCLGE_OPC_RX_PRIV_BUFF_ALLOC\t= 0x0902,\n+\tHCLGE_OPC_RX_BUFF_ALLOC\t\t= 0x0902,\n \tHCLGE_OPC_RX_PRIV_WL_ALLOC\t= 0x0903,\n \tHCLGE_OPC_RX_COM_THRD_ALLOC\t= 0x0904,\n \tHCLGE_OPC_RX_COM_WL_ALLOC\t= 0x0905,\n@@ -264,14 +264,15 @@ struct hclge_ctrl_vector_chain {\n #define HCLGE_TC_NUM\t\t8\n #define HCLGE_TC0_PRI_BUF_EN_B\t15 /* Bit 15 indicate enable or not */\n #define HCLGE_BUF_UNIT_S\t7 /* Buf size is united by 
128 bytes */\n-struct hclge_tx_buff_alloc {\n-\t__le16 tx_pkt_buff[HCLGE_TC_NUM];\n-\tu8 tx_buff_rsv[8];\n+struct hclge_tx_buf_alloc {\n+\t__le16 buf[HCLGE_TC_NUM];\n+\tu8 rsv[8];\n };\n \n-struct hclge_rx_priv_buff {\n-\t__le16 buf_num[HCLGE_TC_NUM];\n-\tu8 rsv[8];\n+struct hclge_rx_buf_alloc {\n+\t__le16 priv_buf[HCLGE_TC_NUM];\n+\t__le16 shared_buf;\n+\tu8 rsv[6];\n };\n \n struct hclge_query_version {\n@@ -308,19 +309,24 @@ struct hclge_tc_thrd {\n \tu32 high;\n };\n \n-struct hclge_priv_buf {\n+struct hclge_rx_priv_buf {\n \tstruct hclge_waterline wl;\t/* Waterline for low and high*/\n \tu32 buf_size;\t/* TC private buffer size */\n-\tu32 enable;\t/* Enable TC private buffer or not */\n };\n \n #define HCLGE_MAX_TC_NUM\t8\n-struct hclge_shared_buf {\n+struct hclge_rx_shared_buf {\n \tstruct hclge_waterline self;\n \tstruct hclge_tc_thrd tc_thrd[HCLGE_MAX_TC_NUM];\n \tu32 buf_size;\n };\n \n+struct hclge_pkt_buf_alloc {\n+\tu32 tx_buf_size[HCLGE_MAX_TC_NUM];\n+\tstruct hclge_rx_priv_buf rx_buf[HCLGE_MAX_TC_NUM];\n+\tstruct hclge_rx_shared_buf s_buf;\n+};\n+\n #define HCLGE_RX_COM_WL_EN_B\t15\n struct hclge_rx_com_wl_buf {\n \t__le16 high_wl;\n@@ -707,9 +713,9 @@ struct hclge_reset_tqp_queue {\n \tu8 rsv[20];\n };\n \n-#define HCLGE_DEFAULT_TX_BUF\t\t0x4000\t /* 16k bytes */\n-#define HCLGE_TOTAL_PKT_BUF\t\t0x108000 /* 1.03125M bytes */\n-#define HCLGE_DEFAULT_DV\t\t0xA000\t /* 40k byte */\n+#define HCLGE_DEFAULT_TX_BUF\t\t0x4000\t/* 16k bytes */\n+#define HCLGE_DEFAULT_DV\t\t0xA000\t/* 40k byte */\n+#define HCLGE_DEFAULT_NON_DCB_DV\t0x7800\t/* 30K byte */\n \n #define HCLGE_TYPE_CRQ\t\t\t0\n #define HCLGE_TYPE_CSQ\t\t\t1\ndiff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c\nindex d0a30f5..61073c2 100644\n--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c\n+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c\n@@ -1094,8 +1094,18 @@ static int hclge_configure(struct 
hclge_dev *hdev)\n \t\thdev->tm_info.num_tc = 1;\n \t}\n \n+\t/* non-DCB supported dev */\n+\tif (!hnae_get_bit(hdev->ae_dev->flag,\n+\t\t\t HNAE_DEV_SUPPORT_DCB_B)) {\n+\t\thdev->tc_cap = 1;\n+\t\thdev->pfc_cap = 0;\n+\t} else {\n+\t\thdev->tc_cap = hdev->tm_info.num_tc;\n+\t\thdev->pfc_cap = hdev->tm_info.num_tc;\n+\t}\n+\n \t/* Currently not support uncontiuous tc */\n-\tfor (i = 0; i < cfg.tc_num; i++)\n+\tfor (i = 0; i < hdev->tc_cap; i++)\n \t\thnae_set_bit(hdev->hw_tc_map, i, 1);\n \n \tif (!hdev->num_vmdq_vport && !hdev->num_req_vfs)\n@@ -1344,45 +1354,32 @@ static int hclge_alloc_vport(struct hclge_dev *hdev)\n \treturn 0;\n }\n \n-static int hclge_cmd_alloc_tx_buff(struct hclge_dev *hdev, u16 buf_size)\n+static int hclge_tx_buffer_alloc(struct hclge_dev *hdev,\n+\t\t\t\t struct hclge_pkt_buf_alloc *buf_alloc)\n {\n-/* TX buffer size is unit by 128 byte */\n-#define HCLGE_BUF_SIZE_UNIT_SHIFT\t7\n-#define HCLGE_BUF_SIZE_UPDATE_EN_MSK\tBIT(15)\n-\tstruct hclge_tx_buff_alloc *req;\n \tstruct hclge_desc desc;\n-\tint ret;\n+\tstruct hclge_tx_buf_alloc *req =\n+\t\t(struct hclge_tx_buf_alloc *)desc.data;\n+\tenum hclge_cmd_status status;\n \tu8 i;\n \n-\treq = (struct hclge_tx_buff_alloc *)desc.data;\n-\n \thclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_TX_BUFF_ALLOC, 0);\n-\tfor (i = 0; i < HCLGE_TC_NUM; i++)\n-\t\treq->tx_pkt_buff[i] =\n-\t\t\tcpu_to_le16((buf_size >> HCLGE_BUF_SIZE_UNIT_SHIFT) |\n-\t\t\t\t HCLGE_BUF_SIZE_UPDATE_EN_MSK);\n+\tfor (i = 0; i < HCLGE_TC_NUM; i++) {\n+\t\tu32 buf_size = buf_alloc->tx_buf_size[i];\n \n-\tret = hclge_cmd_send(&hdev->hw, &desc, 1);\n-\tif (ret) {\n-\t\tdev_err(&hdev->pdev->dev, \"tx buffer alloc cmd failed %d.\\n\",\n-\t\t\tret);\n-\t\treturn ret;\n+\t\treq->buf[i] =\n+\t\t\tcpu_to_le16((buf_size >> HCLGE_BUF_UNIT_S) |\n+\t\t\t\t 1 << HCLGE_TC0_PRI_BUF_EN_B);\n \t}\n \n-\treturn 0;\n-}\n-\n-static int hclge_tx_buffer_alloc(struct hclge_dev *hdev, u32 buf_size)\n-{\n-\tint ret = hclge_cmd_alloc_tx_buff(hdev, 
buf_size);\n+\tstatus = hclge_cmd_send(&hdev->hw, &desc, 1);\n \n-\tif (ret) {\n+\tif (status) {\n \t\tdev_err(&hdev->pdev->dev,\n-\t\t\t\"tx buffer alloc failed %d\\n\", ret);\n-\t\treturn ret;\n+\t\t\t\"Allocat tx buff fail, ret = %d\\n\", status);\n \t}\n \n-\treturn 0;\n+\treturn status;\n }\n \n static int hclge_get_tc_num(struct hclge_dev *hdev)\n@@ -1407,15 +1404,16 @@ static int hclge_get_pfc_enalbe_num(struct hclge_dev *hdev)\n }\n \n /* Get the number of pfc enabled TCs, which have private buffer */\n-static int hclge_get_pfc_priv_num(struct hclge_dev *hdev)\n+static int hclge_get_pfc_priv_num(struct hclge_dev *hdev,\n+\t\t\t\t struct hclge_pkt_buf_alloc *buf_alloc)\n {\n-\tstruct hclge_priv_buf *priv;\n+\tstruct hclge_rx_priv_buf *priv;\n \tint i, cnt = 0;\n \n \tfor (i = 0; i < HCLGE_MAX_TC_NUM; i++) {\n-\t\tpriv = &hdev->priv_buf[i];\n+\t\tpriv = &buf_alloc->rx_buf[i];\n \t\tif ((hdev->tm_info.hw_pfc_map & BIT(i)) &&\n-\t\t priv->enable)\n+\t\t priv->buf_size > 0)\n \t\t\tcnt++;\n \t}\n \n@@ -1423,37 +1421,40 @@ static int hclge_get_pfc_priv_num(struct hclge_dev *hdev)\n }\n \n /* Get the number of pfc disabled TCs, which have private buffer */\n-static int hclge_get_no_pfc_priv_num(struct hclge_dev *hdev)\n+static int hclge_get_no_pfc_priv_num(struct hclge_dev *hdev,\n+\t\t\t\t struct hclge_pkt_buf_alloc *buf_alloc)\n {\n-\tstruct hclge_priv_buf *priv;\n+\tstruct hclge_rx_priv_buf *priv;\n \tint i, cnt = 0;\n \n \tfor (i = 0; i < HCLGE_MAX_TC_NUM; i++) {\n-\t\tpriv = &hdev->priv_buf[i];\n+\t\tpriv = &buf_alloc->rx_buf[i];\n \t\tif (hdev->hw_tc_map & BIT(i) &&\n \t\t !(hdev->tm_info.hw_pfc_map & BIT(i)) &&\n-\t\t priv->enable)\n+\t\t priv->buf_size > 0)\n \t\t\tcnt++;\n \t}\n \n \treturn cnt;\n }\n \n-static u32 hclge_get_rx_priv_buff_alloced(struct hclge_dev *hdev)\n+static u32 hclge_get_rx_priv_buff_alloced(struct hclge_dev *hdev,\n+\t\t\t\t\t struct hclge_pkt_buf_alloc *buf_alloc)\n {\n-\tstruct hclge_priv_buf *priv;\n+\tstruct hclge_rx_priv_buf 
*priv;\n \tu32 rx_priv = 0;\n \tint i;\n \n \tfor (i = 0; i < HCLGE_MAX_TC_NUM; i++) {\n-\t\tpriv = &hdev->priv_buf[i];\n-\t\tif (priv->enable)\n-\t\t\trx_priv += priv->buf_size;\n+\t\tpriv = &buf_alloc->rx_buf[i];\n+\t\trx_priv += priv->buf_size;\n \t}\n \treturn rx_priv;\n }\n \n-static bool hclge_is_rx_buf_ok(struct hclge_dev *hdev, u32 rx_all)\n+static bool hclge_is_rx_buf_ok(struct hclge_dev *hdev,\n+\t\t\t\tstruct hclge_pkt_buf_alloc *buf_alloc,\n+\t\t\t\tu32 rx_all)\n {\n \tu32 shared_buf_min, shared_buf_tc, shared_std;\n \tint tc_num, pfc_enable_num;\n@@ -1464,52 +1465,85 @@ static bool hclge_is_rx_buf_ok(struct hclge_dev *hdev, u32 rx_all)\n \ttc_num = hclge_get_tc_num(hdev);\n \tpfc_enable_num = hclge_get_pfc_enalbe_num(hdev);\n \n-\tshared_buf_min = 2 * hdev->mps + HCLGE_DEFAULT_DV;\n+\tif (hnae_get_bit(hdev->ae_dev->flag,\n+\t\t\t HNAE_DEV_SUPPORT_DCB_B))\n+\t\tshared_buf_min = 2 * hdev->mps + HCLGE_DEFAULT_DV;\n+\telse\n+\t\tshared_buf_min = 2 * hdev->mps + HCLGE_DEFAULT_NON_DCB_DV;\n+\n \tshared_buf_tc = pfc_enable_num * hdev->mps +\n \t\t\t(tc_num - pfc_enable_num) * hdev->mps / 2 +\n \t\t\thdev->mps;\n \tshared_std = max_t(u32, shared_buf_min, shared_buf_tc);\n \n-\trx_priv = hclge_get_rx_priv_buff_alloced(hdev);\n-\tif (rx_all <= rx_priv + shared_std)\n+\trx_priv = hclge_get_rx_priv_buff_alloced(hdev, buf_alloc);\n+\tif (rx_all <= rx_priv + shared_std) {\n+\t\tdev_err(&hdev->pdev->dev,\n+\t\t\t\"pkt buffer allocted failed, total:%u, rx_all:%u\\n\",\n+\t\t\thdev->pkt_buf_size, rx_all);\n \t\treturn false;\n+\t}\n \n \tshared_buf = rx_all - rx_priv;\n-\thdev->s_buf.buf_size = shared_buf;\n-\thdev->s_buf.self.high = shared_buf;\n-\thdev->s_buf.self.low = 2 * hdev->mps;\n-\n+\tbuf_alloc->s_buf.buf_size = shared_buf;\n+\tbuf_alloc->s_buf.self.high = shared_buf;\n+\tbuf_alloc->s_buf.self.low = 2 * hdev->mps;\n \tfor (i = 0; i < HCLGE_MAX_TC_NUM; i++) {\n \t\tif ((hdev->hw_tc_map & BIT(i)) &&\n \t\t (hdev->tm_info.hw_pfc_map & BIT(i))) 
{\n-\t\t\thdev->s_buf.tc_thrd[i].low = hdev->mps;\n-\t\t\thdev->s_buf.tc_thrd[i].high = 2 * hdev->mps;\n+\t\t\tbuf_alloc->s_buf.tc_thrd[i].low = hdev->mps;\n+\t\t\tbuf_alloc->s_buf.tc_thrd[i].high = 2 * hdev->mps;\n \t\t} else {\n-\t\t\thdev->s_buf.tc_thrd[i].low = 0;\n-\t\t\thdev->s_buf.tc_thrd[i].high = hdev->mps;\n+\t\t\tbuf_alloc->s_buf.tc_thrd[i].low = 0;\n+\t\t\tbuf_alloc->s_buf.tc_thrd[i].high = hdev->mps;\n \t\t}\n \t}\n \n \treturn true;\n }\n \n-/* hclge_rx_buffer_calc: calculate the rx private buffer size for all TCs\n+/**\n+ * hclge_buffer_calc: calculate the private buffer size for all TCs\n * @hdev: pointer to struct hclge_dev\n * @tx_size: the allocated tx buffer for all TCs\n * @return: 0: calculate sucessful, negative: fail\n */\n-int hclge_rx_buffer_calc(struct hclge_dev *hdev, u32 tx_size)\n+int hclge_buffer_calc(struct hclge_dev *hdev,\n+\t\t struct hclge_pkt_buf_alloc *buf_alloc,\n+\t\t u32 tx_size)\n {\n-\tu32 rx_all = hdev->pkt_buf_size - tx_size;\n+\tu32 rx_all = hdev->pkt_buf_size;\n \tint no_pfc_priv_num, pfc_priv_num;\n-\tstruct hclge_priv_buf *priv;\n+\tstruct hclge_rx_priv_buf *priv;\n \tint i;\n \n-\t/* step 1, try to alloc private buffer for all enabled tc */\n+\t/* alloc tx buffer for all enabled tc */\n+\tfor (i = 0; i < HCLGE_MAX_TC_NUM; i++) {\n+\t\tif (rx_all < tx_size)\n+\t\t\treturn -ENOMEM;\n+\n+\t\tif (hdev->hw_tc_map & BIT(i)) {\n+\t\t\tbuf_alloc->tx_buf_size[i] = tx_size;\n+\t\t\trx_all -= tx_size;\n+\t\t} else {\n+\t\t\tbuf_alloc->tx_buf_size[i] = 0;\n+\t\t}\n+\t}\n+\n+\t/* If pfc is not supported, rx private\n+\t * buffer is not allocated.\n+\t */\n+\tif (hdev->pfc_cap == 0) {\n+\t\tif (!hclge_is_rx_buf_ok(hdev, buf_alloc, rx_all))\n+\t\t\treturn -ENOMEM;\n+\n+\t\treturn 0;\n+\t}\n+\n+\t/* Step 1, try to alloc private buffer for all enabled tc */\n \tfor (i = 0; i < HCLGE_MAX_TC_NUM; i++) {\n-\t\tpriv = &hdev->priv_buf[i];\n+\t\tpriv = &buf_alloc->rx_buf[i];\n \t\tif (hdev->hw_tc_map & BIT(i)) {\n-\t\t\tpriv->enable = 
1;\n \t\t\tif (hdev->tm_info.hw_pfc_map & BIT(i)) {\n \t\t\t\tpriv->wl.low = hdev->mps;\n \t\t\t\tpriv->wl.high = priv->wl.low + hdev->mps;\n@@ -1520,128 +1554,133 @@ int hclge_rx_buffer_calc(struct hclge_dev *hdev, u32 tx_size)\n \t\t\t\tpriv->wl.high = 2 * hdev->mps;\n \t\t\t\tpriv->buf_size = priv->wl.high;\n \t\t\t}\n+\t\t} else {\n+\t\t\tpriv->wl.low = 0;\n+\t\t\tpriv->wl.high = 0;\n+\t\t\tpriv->buf_size = 0;\n \t\t}\n \t}\n \n-\tif (hclge_is_rx_buf_ok(hdev, rx_all))\n+\tif (hclge_is_rx_buf_ok(hdev, buf_alloc, rx_all))\n \t\treturn 0;\n \n-\t/* step 2, try to decrease the buffer size of\n+\t/**\n+\t * Step 2, try to decrease the buffer size of\n \t * no pfc TC's private buffer\n-\t */\n+\t **/\n \tfor (i = 0; i < HCLGE_MAX_TC_NUM; i++) {\n-\t\tpriv = &hdev->priv_buf[i];\n-\n-\t\tif (hdev->hw_tc_map & BIT(i))\n-\t\t\tpriv->enable = 1;\n-\n-\t\tif (hdev->tm_info.hw_pfc_map & BIT(i)) {\n-\t\t\tpriv->wl.low = 128;\n-\t\t\tpriv->wl.high = priv->wl.low + hdev->mps;\n-\t\t\tpriv->buf_size = priv->wl.high + HCLGE_DEFAULT_DV;\n+\t\tpriv = &buf_alloc->rx_buf[i];\n+\t\tif (hdev->hw_tc_map & BIT(i)) {\n+\t\t\tif (hdev->tm_info.hw_pfc_map & BIT(i)) {\n+\t\t\t\tpriv->wl.low = 128;\n+\t\t\t\tpriv->wl.high = priv->wl.low + hdev->mps;\n+\t\t\t\tpriv->buf_size = priv->wl.high\n+\t\t\t\t\t+ HCLGE_DEFAULT_DV;\n+\t\t\t} else {\n+\t\t\t\tpriv->wl.low = 0;\n+\t\t\t\tpriv->wl.high = hdev->mps;\n+\t\t\t\tpriv->buf_size = priv->wl.high;\n+\t\t\t}\n \t\t} else {\n \t\t\tpriv->wl.low = 0;\n-\t\t\tpriv->wl.high = hdev->mps;\n-\t\t\tpriv->buf_size = priv->wl.high;\n+\t\t\tpriv->wl.high = 0;\n+\t\t\tpriv->buf_size = 0;\n \t\t}\n \t}\n \n-\tif (hclge_is_rx_buf_ok(hdev, rx_all))\n+\tif (hclge_is_rx_buf_ok(hdev, buf_alloc, rx_all))\n \t\treturn 0;\n \n-\t/* step 3, try to reduce the number of pfc disabled TCs,\n+\t/**\n+\t * Step 3, try to reduce the number of pfc disabled TCs,\n \t * which have private buffer\n-\t */\n-\t/* get the total no pfc enable TC number, which have private buffer 
*/\n-\tno_pfc_priv_num = hclge_get_no_pfc_priv_num(hdev);\n+\t **/\n \n-\t/* let the last to be cleared first */\n+\t/* Get the total no pfc enable TC number, which have private buffer */\n+\tno_pfc_priv_num = hclge_get_no_pfc_priv_num(hdev, buf_alloc);\n+\t/* Let the last to be cleared first */\n \tfor (i = HCLGE_MAX_TC_NUM - 1; i >= 0; i--) {\n-\t\tpriv = &hdev->priv_buf[i];\n-\n+\t\tpriv = &buf_alloc->rx_buf[i];\n \t\tif (hdev->hw_tc_map & BIT(i) &&\n \t\t !(hdev->tm_info.hw_pfc_map & BIT(i))) {\n \t\t\t/* Clear the no pfc TC private buffer */\n \t\t\tpriv->wl.low = 0;\n \t\t\tpriv->wl.high = 0;\n \t\t\tpriv->buf_size = 0;\n-\t\t\tpriv->enable = 0;\n \t\t\tno_pfc_priv_num--;\n \t\t}\n-\n-\t\tif (hclge_is_rx_buf_ok(hdev, rx_all) ||\n+\t\tif (hclge_is_rx_buf_ok(hdev, buf_alloc, rx_all) ||\n \t\t no_pfc_priv_num == 0)\n \t\t\tbreak;\n \t}\n-\n-\tif (hclge_is_rx_buf_ok(hdev, rx_all))\n+\tif (hclge_is_rx_buf_ok(hdev, buf_alloc, rx_all))\n \t\treturn 0;\n \n-\t/* step 4, try to reduce the number of pfc enabled TCs\n+\t/**\n+\t * Step 4, try to reduce the number of pfc enabled TCs\n \t * which have private buffer.\n-\t */\n-\tpfc_priv_num = hclge_get_pfc_priv_num(hdev);\n-\n-\t/* let the last to be cleared first */\n+\t **/\n+\tpfc_priv_num = hclge_get_pfc_priv_num(hdev, buf_alloc);\n+\t/* Let the last to be cleared first */\n \tfor (i = HCLGE_MAX_TC_NUM - 1; i >= 0; i--) {\n-\t\tpriv = &hdev->priv_buf[i];\n-\n+\t\tpriv = &buf_alloc->rx_buf[i];\n \t\tif (hdev->hw_tc_map & BIT(i) &&\n \t\t hdev->tm_info.hw_pfc_map & BIT(i)) {\n \t\t\t/* Reduce the number of pfc TC with private buffer */\n \t\t\tpriv->wl.low = 0;\n-\t\t\tpriv->enable = 0;\n \t\t\tpriv->wl.high = 0;\n \t\t\tpriv->buf_size = 0;\n \t\t\tpfc_priv_num--;\n \t\t}\n-\n-\t\tif (hclge_is_rx_buf_ok(hdev, rx_all) ||\n+\t\tif (hclge_is_rx_buf_ok(hdev, buf_alloc, rx_all) ||\n \t\t pfc_priv_num == 0)\n \t\t\tbreak;\n \t}\n-\tif (hclge_is_rx_buf_ok(hdev, rx_all))\n+\tif (hclge_is_rx_buf_ok(hdev, buf_alloc, rx_all))\n 
\t\treturn 0;\n \n \treturn -ENOMEM;\n }\n \n-static int hclge_rx_priv_buf_alloc(struct hclge_dev *hdev)\n+static int hclge_rx_buf_alloc(struct hclge_dev *hdev,\n+\t\t\t struct hclge_pkt_buf_alloc *buf_alloc)\n {\n-\tstruct hclge_rx_priv_buff *req;\n \tstruct hclge_desc desc;\n+\tstruct hclge_rx_buf_alloc *req =\n+\t\t\t(struct hclge_rx_buf_alloc *)desc.data;\n+\tstruct hclge_rx_shared_buf *s_buf = &buf_alloc->s_buf;\n \tint ret;\n \tint i;\n \n-\thclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_RX_PRIV_BUFF_ALLOC, false);\n-\treq = (struct hclge_rx_priv_buff *)desc.data;\n+\thclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_RX_BUFF_ALLOC, false);\n \n \t/* Alloc private buffer TCs */\n \tfor (i = 0; i < HCLGE_MAX_TC_NUM; i++) {\n-\t\tstruct hclge_priv_buf *priv = &hdev->priv_buf[i];\n+\t\tstruct hclge_rx_priv_buf *priv = &buf_alloc->rx_buf[i];\n \n-\t\treq->buf_num[i] =\n+\t\treq->priv_buf[i] =\n \t\t\tcpu_to_le16(priv->buf_size >> HCLGE_BUF_UNIT_S);\n-\t\treq->buf_num[i] |=\n-\t\t\tcpu_to_le16(true << HCLGE_TC0_PRI_BUF_EN_B);\n+\t\treq->priv_buf[i] |=\n+\t\t\tcpu_to_le16(1 << HCLGE_TC0_PRI_BUF_EN_B);\n \t}\n \n+\treq->shared_buf = cpu_to_le16(s_buf->buf_size >> HCLGE_BUF_UNIT_S);\n+\treq->shared_buf |= cpu_to_le16(1 << HCLGE_TC0_PRI_BUF_EN_B);\n+\n \tret = hclge_cmd_send(&hdev->hw, &desc, 1);\n-\tif (ret) {\n+\tif (ret)\n \t\tdev_err(&hdev->pdev->dev,\n-\t\t\t\"rx private buffer alloc cmd failed %d\\n\", ret);\n-\t\treturn ret;\n-\t}\n+\t\t\t\"Set rx private buffer fail, status = %d\\n\", ret);\n \n-\treturn 0;\n+\treturn ret;\n }\n \n #define HCLGE_PRIV_ENABLE(a) ((a) > 0 ? 
1 : 0)\n-\n-static int hclge_rx_priv_wl_config(struct hclge_dev *hdev)\n+static int hclge_rx_priv_wl_config(struct hclge_dev *hdev,\n+\t\t\t\t struct hclge_pkt_buf_alloc *buf_alloc)\n {\n \tstruct hclge_rx_priv_wl_buf *req;\n-\tstruct hclge_priv_buf *priv;\n+\tstruct hclge_rx_priv_buf *priv;\n \tstruct hclge_desc desc[2];\n \tint i, j;\n \tint ret;\n@@ -1658,7 +1697,9 @@ static int hclge_rx_priv_wl_config(struct hclge_dev *hdev)\n \t\t\tdesc[i].flag &= ~cpu_to_le16(HCLGE_CMD_FLAG_NEXT);\n \n \t\tfor (j = 0; j < HCLGE_TC_NUM_ONE_DESC; j++) {\n-\t\t\tpriv = &hdev->priv_buf[i * HCLGE_TC_NUM_ONE_DESC + j];\n+\t\t\tu32 idx = i * HCLGE_TC_NUM_ONE_DESC + j;\n+\n+\t\t\tpriv = &buf_alloc->rx_buf[idx];\n \t\t\treq->tc_wl[j].high =\n \t\t\t\tcpu_to_le16(priv->wl.high >> HCLGE_BUF_UNIT_S);\n \t\t\treq->tc_wl[j].high |=\n@@ -1674,18 +1715,17 @@ static int hclge_rx_priv_wl_config(struct hclge_dev *hdev)\n \n \t/* Send 2 descriptor at one time */\n \tret = hclge_cmd_send(&hdev->hw, desc, 2);\n-\tif (ret) {\n+\tif (ret)\n \t\tdev_err(&hdev->pdev->dev,\n-\t\t\t\"rx private waterline config cmd failed %d\\n\",\n-\t\t\tret);\n-\t\treturn ret;\n-\t}\n-\treturn 0;\n+\t\t\t\"Set rx private waterline fail, status %d\\n\", ret);\n+\n+\treturn ret;\n }\n \n-static int hclge_common_thrd_config(struct hclge_dev *hdev)\n+static int hclge_common_thrd_config(struct hclge_dev *hdev,\n+\t\t\t\t struct hclge_pkt_buf_alloc *buf_alloc)\n {\n-\tstruct hclge_shared_buf *s_buf = &hdev->s_buf;\n+\tstruct hclge_rx_shared_buf *s_buf = &buf_alloc->s_buf;\n \tstruct hclge_rx_com_thrd *req;\n \tstruct hclge_desc desc[2];\n \tstruct hclge_tc_thrd *tc;\n@@ -1721,104 +1761,100 @@ static int hclge_common_thrd_config(struct hclge_dev *hdev)\n \n \t/* Send 2 descriptors at one time */\n \tret = hclge_cmd_send(&hdev->hw, desc, 2);\n-\tif (ret) {\n+\tif (ret)\n \t\tdev_err(&hdev->pdev->dev,\n-\t\t\t\"common threshold config cmd failed %d\\n\", ret);\n-\t\treturn ret;\n-\t}\n-\treturn 0;\n+\t\t\t\"Set rx private 
waterline fail, status %d\\n\", ret);\n+\n+\treturn ret;\n }\n \n-static int hclge_common_wl_config(struct hclge_dev *hdev)\n+static int hclge_common_wl_config(struct hclge_dev *hdev,\n+\t\t\t\t struct hclge_pkt_buf_alloc *buf_alloc)\n {\n-\tstruct hclge_shared_buf *buf = &hdev->s_buf;\n-\tstruct hclge_rx_com_wl *req;\n \tstruct hclge_desc desc;\n+\tstruct hclge_rx_com_wl *req = (struct hclge_rx_com_wl *)desc.data;\n+\tstruct hclge_rx_shared_buf *s_buf = &buf_alloc->s_buf;\n \tint ret;\n \n \thclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_RX_COM_WL_ALLOC, false);\n \n-\treq = (struct hclge_rx_com_wl *)desc.data;\n-\treq->com_wl.high = cpu_to_le16(buf->self.high >> HCLGE_BUF_UNIT_S);\n+\treq->com_wl.high = cpu_to_le16(s_buf->self.high >> HCLGE_BUF_UNIT_S);\n \treq->com_wl.high |=\n-\t\tcpu_to_le16(HCLGE_PRIV_ENABLE(buf->self.high) <<\n+\t\tcpu_to_le16(HCLGE_PRIV_ENABLE(s_buf->self.high) <<\n \t\t\t HCLGE_RX_PRIV_EN_B);\n \n-\treq->com_wl.low = cpu_to_le16(buf->self.low >> HCLGE_BUF_UNIT_S);\n+\treq->com_wl.low = cpu_to_le16(s_buf->self.low >> HCLGE_BUF_UNIT_S);\n \treq->com_wl.low |=\n-\t\tcpu_to_le16(HCLGE_PRIV_ENABLE(buf->self.low) <<\n+\t\tcpu_to_le16(HCLGE_PRIV_ENABLE(s_buf->self.low) <<\n \t\t\t HCLGE_RX_PRIV_EN_B);\n \n \tret = hclge_cmd_send(&hdev->hw, &desc, 1);\n-\tif (ret) {\n+\tif (ret)\n \t\tdev_err(&hdev->pdev->dev,\n-\t\t\t\"common waterline config cmd failed %d\\n\", ret);\n-\t\treturn ret;\n-\t}\n+\t\t\t\"Set rx private waterline fail, status %d\\n\", ret);\n \n-\treturn 0;\n+\treturn ret;\n }\n \n int hclge_buffer_alloc(struct hclge_dev *hdev)\n {\n+\tstruct hclge_pkt_buf_alloc *pkt_buf;\n \tu32 tx_buf_size = HCLGE_DEFAULT_TX_BUF;\n \tint ret;\n \n-\thdev->priv_buf = devm_kmalloc_array(&hdev->pdev->dev, HCLGE_MAX_TC_NUM,\n-\t\t\t\t\t sizeof(struct hclge_priv_buf),\n-\t\t\t\t\t GFP_KERNEL | __GFP_ZERO);\n-\tif (!hdev->priv_buf)\n+\tpkt_buf = kzalloc(sizeof(*pkt_buf), GFP_KERNEL);\n+\tif (!pkt_buf)\n \t\treturn -ENOMEM;\n \n-\tret = 
hclge_tx_buffer_alloc(hdev, tx_buf_size);\n+\tret = hclge_buffer_calc(hdev, pkt_buf, tx_buf_size);\n \tif (ret) {\n \t\tdev_err(&hdev->pdev->dev,\n-\t\t\t\"could not alloc tx buffers %d\\n\", ret);\n-\t\treturn ret;\n+\t\t\t\"Calculate Rx buffer error ret =%d.\\n\", ret);\n+\t\tgoto err;\n \t}\n \n-\tret = hclge_rx_buffer_calc(hdev, tx_buf_size);\n+\tret = hclge_tx_buffer_alloc(hdev, pkt_buf);\n \tif (ret) {\n \t\tdev_err(&hdev->pdev->dev,\n-\t\t\t\"could not calc rx priv buffer size for all TCs %d\\n\",\n-\t\t\tret);\n-\t\treturn ret;\n+\t\t\t\"Allocate Tx buffer fail, ret =%d\\n\", ret);\n+\t\tgoto err;\n \t}\n \n-\tret = hclge_rx_priv_buf_alloc(hdev);\n+\tret = hclge_rx_buf_alloc(hdev, pkt_buf);\n \tif (ret) {\n-\t\tdev_err(&hdev->pdev->dev, \"could not alloc rx priv buffer %d\\n\",\n-\t\t\tret);\n-\t\treturn ret;\n+\t\tdev_err(&hdev->pdev->dev,\n+\t\t\t\"Private buffer config fail, ret = %d\\n\", ret);\n+\t\tgoto err;\n \t}\n \n \tif (hnae_get_bit(hdev->ae_dev->flag,\n \t\t\t HNAE_DEV_SUPPORT_DCB_B)) {\n-\t\tret = hclge_rx_priv_wl_config(hdev);\n+\t\tret = hclge_rx_priv_wl_config(hdev, pkt_buf);\n \t\tif (ret) {\n \t\t\tdev_err(&hdev->pdev->dev,\n-\t\t\t\t\"could not configure rx private waterline %d\\n\",\n+\t\t\t\t\"Private waterline config fail, ret = %d\\n\",\n \t\t\t\tret);\n-\t\t\treturn ret;\n+\t\t\tgoto err;\n \t\t}\n \n-\t\tret = hclge_common_thrd_config(hdev);\n+\t\tret = hclge_common_thrd_config(hdev, pkt_buf);\n \t\tif (ret) {\n \t\t\tdev_err(&hdev->pdev->dev,\n-\t\t\t\t\"could not configure common threshold %d\\n\",\n+\t\t\t\t\"Common threshold config fail, ret = %d\\n\",\n \t\t\t\tret);\n-\t\t\treturn ret;\n+\t\t\tgoto err;\n \t\t}\n \t}\n \n-\tret = hclge_common_wl_config(hdev);\n+\tret = hclge_common_wl_config(hdev, pkt_buf);\n \tif (ret) {\n \t\tdev_err(&hdev->pdev->dev,\n-\t\t\t\"could not configure common waterline %d\\n\", ret);\n-\t\treturn ret;\n+\t\t\t\"Common waterline config fail, ret = %d\\n\", ret);\n \t}\n \n-\treturn 
0;\n+err:\n+\tkfree(pkt_buf);\n+\treturn ret;\n }\n \n static int hclge_init_roce_base_info(struct hclge_vport *vport)\ndiff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h\nindex 0905ae5..4bdec1f 100644\n--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h\n+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h\n@@ -430,6 +430,9 @@ struct hclge_dev {\n #define HCLGE_FLAG_TC_BASE_SCH_MODE\t\t1\n #define HCLGE_FLAG_VNET_BASE_SCH_MODE\t\t2\n \tu8 tx_sch_mode;\n+\tu8 pg_cap;\n+\tu8 tc_cap;\n+\tu8 pfc_cap;\n \n \tu8 default_up;\n \tstruct hclge_tm_info tm_info;\n@@ -472,8 +475,6 @@ struct hclge_dev {\n \n \tu32 pkt_buf_size; /* Total pf buf size for tx/rx */\n \tu32 mps; /* Max packet size */\n-\tstruct hclge_priv_buf *priv_buf;\n-\tstruct hclge_shared_buf s_buf;\n \n \tenum hclge_mta_dmac_sel_type mta_mac_sel_type;\n \tbool enable_mta; /* Mutilcast filter enable */\ndiff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c\nindex 1c577d2..59b0cfb 100644\n--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c\n+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c\n@@ -364,7 +364,8 @@ static int hclge_tm_qs_schd_mode_cfg(struct hclge_dev *hdev, u16 qs_id)\n \treturn hclge_cmd_send(&hdev->hw, &desc, 1);\n }\n \n-static int hclge_tm_qs_bp_cfg(struct hclge_dev *hdev, u8 tc)\n+static int hclge_tm_qs_bp_cfg(struct hclge_dev *hdev,\n+\t\t\t u8 tc, u8 grp_id, u32 bit_map)\n {\n \tstruct hclge_bp_to_qs_map_cmd *bp_to_qs_map_cmd;\n \tstruct hclge_desc desc;\n@@ -375,9 +376,8 @@ static int hclge_tm_qs_bp_cfg(struct hclge_dev *hdev, u8 tc)\n \tbp_to_qs_map_cmd = (struct hclge_bp_to_qs_map_cmd *)desc.data;\n \n \tbp_to_qs_map_cmd->tc_id = tc;\n-\n-\t/* Qset and tc is one by one mapping */\n-\tbp_to_qs_map_cmd->qs_bit_map = cpu_to_le32(1 << tc);\n+\tbp_to_qs_map_cmd->qs_group_id = 
grp_id;\n+\tbp_to_qs_map_cmd->qs_bit_map = cpu_to_le32(bit_map);\n \n \treturn hclge_cmd_send(&hdev->hw, &desc, 1);\n }\n@@ -836,6 +836,10 @@ static int hclge_tm_map_cfg(struct hclge_dev *hdev)\n {\n \tint ret;\n \n+\tret = hclge_up_to_tc_map(hdev);\n+\tif (ret)\n+\t\treturn ret;\n+\n \tret = hclge_tm_pg_to_pri_map(hdev);\n \tif (ret)\n \t\treturn ret;\n@@ -966,23 +970,85 @@ static int hclge_tm_schd_setup_hw(struct hclge_dev *hdev)\n \treturn hclge_tm_schd_mode_hw(hdev);\n }\n \n+/* Each Tc has a 1024 queue sets to backpress, it divides to\n+ * 32 group, each group contains 32 queue sets, which can be\n+ * represented by u32 bitmap.\n+ */\n+static int hclge_bp_setup_hw(struct hclge_dev *hdev, u8 tc)\n+{\n+\tstruct hclge_vport *vport = hdev->vport;\n+\tu32 i, k, qs_bitmap;\n+\tint ret;\n+\n+\tfor (i = 0; i < HCLGE_BP_GRP_NUM; i++) {\n+\t\tqs_bitmap = 0;\n+\n+\t\tfor (k = 0; k < hdev->num_alloc_vport; k++) {\n+\t\t\tu16 qs_id = vport->qs_offset + tc;\n+\t\t\tu8 grp, sub_grp;\n+\n+\t\t\tgrp = hnae_get_field(qs_id, HCLGE_BP_GRP_ID_M,\n+\t\t\t\t\t HCLGE_BP_GRP_ID_S);\n+\t\t\tsub_grp = hnae_get_field(qs_id, HCLGE_BP_SUB_GRP_ID_M,\n+\t\t\t\t\t\t HCLGE_BP_SUB_GRP_ID_S);\n+\t\t\tif (i == grp)\n+\t\t\t\tqs_bitmap |= (1 << sub_grp);\n+\n+\t\t\tvport++;\n+\t\t}\n+\n+\t\tret = hclge_tm_qs_bp_cfg(hdev, tc, i, qs_bitmap);\n+\t\tif (ret)\n+\t\t\treturn ret;\n+\t}\n+\n+\treturn 0;\n+}\n+\n int hclge_pause_setup_hw(struct hclge_dev *hdev)\n {\n-\tbool en = hdev->tm_info.fc_mode != HCLGE_FC_PFC;\n \tint ret;\n \tu8 i;\n \n-\tret = hclge_mac_pause_en_cfg(hdev, en, en);\n-\tif (ret)\n+\tif (hdev->tm_info.fc_mode != HCLGE_FC_PFC) {\n+\t\tbool tx_en, rx_en;\n+\n+\t\tswitch (hdev->tm_info.fc_mode) {\n+\t\tcase HCLGE_FC_NONE:\n+\t\t\ttx_en = false;\n+\t\t\trx_en = false;\n+\t\t\tbreak;\n+\t\tcase HCLGE_FC_RX_PAUSE:\n+\t\t\ttx_en = false;\n+\t\t\trx_en = true;\n+\t\t\tbreak;\n+\t\tcase HCLGE_FC_TX_PAUSE:\n+\t\t\ttx_en = true;\n+\t\t\trx_en = false;\n+\t\t\tbreak;\n+\t\tcase 
HCLGE_FC_FULL:\n+\t\t\ttx_en = true;\n+\t\t\trx_en = true;\n+\t\t\tbreak;\n+\t\tdefault:\n+\t\t\ttx_en = true;\n+\t\t\trx_en = true;\n+\t\t}\n+\t\tret = hclge_mac_pause_en_cfg(hdev, tx_en, rx_en);\n \t\treturn ret;\n+\t}\n+\n+\t/* Only DCB-supported port supports qset back pressure setting */\n+\tif (!hnae_get_bit(hdev->ae_dev->flag, HNAE_DEV_SUPPORT_DCB_B))\n+\t\treturn 0;\n \n \tfor (i = 0; i < hdev->tm_info.num_tc; i++) {\n-\t\tret = hclge_tm_qs_bp_cfg(hdev, i);\n+\t\tret = hclge_bp_setup_hw(hdev, i);\n \t\tif (ret)\n \t\t\treturn ret;\n \t}\n \n-\treturn hclge_up_to_tc_map(hdev);\n+\treturn 0;\n }\n \n int hclge_tm_init_hw(struct hclge_dev *hdev)\ndiff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h\nindex 7e67337..dbaa3b5 100644\n--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h\n+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h\n@@ -86,6 +86,15 @@ struct hclge_pg_shapping_cmd {\n \t__le32 pg_shapping_para;\n };\n \n+struct hclge_port_shapping_cmd {\n+\t__le32 port_shapping_para;\n+};\n+\n+#define HCLGE_BP_GRP_NUM\t\t32\n+#define HCLGE_BP_SUB_GRP_ID_S\t\t0\n+#define HCLGE_BP_SUB_GRP_ID_M\t\tGENMASK(4, 0)\n+#define HCLGE_BP_GRP_ID_S\t\t5\n+#define HCLGE_BP_GRP_ID_M\t\tGENMASK(9, 5)\n struct hclge_bp_to_qs_map_cmd {\n \tu8 tc_id;\n \tu8 rsvd[2];\n", "prefixes": [ "net-next", "8/8" ] }