Patch Detail

Supported methods on this endpoint:

  GET    Show a patch.
  PATCH  Update a patch.
  PUT    Update a patch.

GET /api/patches/585407/?format=api
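A minimal sketch of issuing the request above from C with libcurl follows. It is an illustration only, not part of Patchwork, and libcurl is an assumption; any HTTP client will do. The "?format=api" query string selects the browsable HTML renderer, so the sketch drops it and expects the API to serve JSON to a plain HTTP client.

/*
 * Hypothetical example: fetch patch 585407 from the Patchwork REST API
 * and print the response body (libcurl's default write callback sends
 * it to stdout).  Build with: cc fetch_patch.c -lcurl
 */
#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
	CURL *curl;
	CURLcode res;

	curl_global_init(CURL_GLOBAL_DEFAULT);
	curl = curl_easy_init();
	if (!curl) {
		curl_global_cleanup();
		return 1;
	}

	curl_easy_setopt(curl, CURLOPT_URL,
			 "http://patchwork.ozlabs.org/api/patches/585407/");
	curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);

	res = curl_easy_perform(curl);
	if (res != CURLE_OK)
		fprintf(stderr, "request failed: %s\n",
			curl_easy_strerror(res));

	curl_easy_cleanup(curl);
	curl_global_cleanup();
	return res == CURLE_OK ? 0 : 1;
}

The JSON body returned for this patch is reproduced below.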
{ "id": 585407, "url": "http://patchwork.ozlabs.org/api/patches/585407/?format=api", "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20160219201130.16927.24329.stgit@localhost.localdomain/", "project": { "id": 46, "url": "http://patchwork.ozlabs.org/api/projects/46/?format=api", "name": "Intel Wired Ethernet development", "link_name": "intel-wired-lan", "list_id": "intel-wired-lan.osuosl.org", "list_email": "intel-wired-lan@osuosl.org", "web_url": "", "scm_url": "", "webscm_url": "", "list_archive_url": "", "list_archive_url_format": "", "commit_url_format": "" }, "msgid": "<20160219201130.16927.24329.stgit@localhost.localdomain>", "list_archive_url": null, "date": "2016-02-19T20:17:08", "name": "[next,v2] i40e/i40evf: Allow up to 12K bytes of data per Tx descriptor instead of 8K", "commit_ref": null, "pull_url": null, "state": "accepted", "archived": false, "hash": "647cbbb4b345acdf173136eaf320312048ec387c", "submitter": { "id": 67293, "url": "http://patchwork.ozlabs.org/api/people/67293/?format=api", "name": "Alexander Duyck", "email": "aduyck@mirantis.com" }, "delegate": { "id": 68, "url": "http://patchwork.ozlabs.org/api/users/68/?format=api", "username": "jtkirshe", "first_name": "Jeff", "last_name": "Kirsher", "email": "jeffrey.t.kirsher@intel.com" }, "mbox": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20160219201130.16927.24329.stgit@localhost.localdomain/mbox/", "series": [], "comments": "http://patchwork.ozlabs.org/api/patches/585407/comments/", "check": "pending", "checks": "http://patchwork.ozlabs.org/api/patches/585407/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<intel-wired-lan-bounces@lists.osuosl.org>", "X-Original-To": [ "incoming@patchwork.ozlabs.org", "intel-wired-lan@lists.osuosl.org" ], "Delivered-To": [ "patchwork-incoming@bilbo.ozlabs.org", "intel-wired-lan@lists.osuosl.org" ], "Received": [ "from whitealder.osuosl.org (smtp1.osuosl.org [140.211.166.138])\n\tby ozlabs.org (Postfix) with ESMTP id 36C5B140783\n\tfor <incoming@patchwork.ozlabs.org>;\n\tSat, 20 Feb 2016 07:17:16 +1100 (AEDT)", "from localhost (localhost [127.0.0.1])\n\tby whitealder.osuosl.org (Postfix) with ESMTP id 3F228925D9;\n\tFri, 19 Feb 2016 20:17:15 +0000 (UTC)", "from whitealder.osuosl.org ([127.0.0.1])\n\tby localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id gixSpXEoVEUJ; Fri, 19 Feb 2016 20:17:12 +0000 (UTC)", "from ash.osuosl.org (ash.osuosl.org [140.211.166.34])\n\tby whitealder.osuosl.org (Postfix) with ESMTP id BF4D8925EA;\n\tFri, 19 Feb 2016 20:17:12 +0000 (UTC)", "from hemlock.osuosl.org (smtp2.osuosl.org [140.211.166.133])\n\tby ash.osuosl.org (Postfix) with ESMTP id 77C6E1BFDFF\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tFri, 19 Feb 2016 20:17:11 +0000 (UTC)", "from localhost (localhost [127.0.0.1])\n\tby hemlock.osuosl.org (Postfix) with ESMTP id 725A195B12\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tFri, 19 Feb 2016 20:17:11 +0000 (UTC)", "from hemlock.osuosl.org ([127.0.0.1])\n\tby localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id 0DJwTLs-utqL for <intel-wired-lan@lists.osuosl.org>;\n\tFri, 19 Feb 2016 20:17:10 +0000 (UTC)", "from mail-pf0-f170.google.com (mail-pf0-f170.google.com\n\t[209.85.192.170])\n\tby hemlock.osuosl.org (Postfix) with ESMTPS id 7149D95AFE\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tFri, 19 Feb 2016 20:17:10 +0000 (UTC)", "by mail-pf0-f170.google.com with SMTP id q63so56524231pfb.0\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tFri, 19 Feb 
2016 12:17:10 -0800 (PST)", "from localhost.localdomain\n\t(static-50-53-29-36.bvtn.or.frontiernet.net. [50.53.29.36])\n\tby smtp.gmail.com with ESMTPSA id\n\tn68sm19673626pfj.46.2016.02.19.12.17.09\n\tfor <intel-wired-lan@lists.osuosl.org>\n\t(version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);\n\tFri, 19 Feb 2016 12:17:09 -0800 (PST)" ], "Authentication-Results": "ozlabs.org;\n\tdkim=fail reason=\"signature verification failed\" (1024-bit key;\n\tunprotected) header.d=mirantis.com header.i=@mirantis.com\n\theader.b=KPcqXQve; dkim-atps=neutral", "X-Virus-Scanned": [ "amavisd-new at osuosl.org", "amavisd-new at osuosl.org" ], "X-Greylist": "from auto-whitelisted by SQLgrey-1.7.6", "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed; d=mirantis.com;\n\ts=google; \n\th=subject:from:to:date:message-id:user-agent:mime-version\n\t:content-type:content-transfer-encoding;\n\tbh=NpDcEV33jvgiV3B1s8puH+OpGljbFp6ddOc2g+NapC8=;\n\tb=KPcqXQveAYYH4AWihOULXd6RFtEvmyY0WP1imen+XdsplYXbzSJqDzR9ZCn+Sls55q\n\tu1rNI+BegkuatHYLQwtwrIXu3MxVZr3Wryoe3LuEWTH3RxVTh2tFlSWIgKLHoyHYJ1t3\n\tad/1tW+CTHr0zzLYvn5LPzfolFYI/B7aXsPE8=", "X-Google-DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=1e100.net; s=20130820;\n\th=x-gm-message-state:subject:from:to:date:message-id:user-agent\n\t:mime-version:content-type:content-transfer-encoding;\n\tbh=NpDcEV33jvgiV3B1s8puH+OpGljbFp6ddOc2g+NapC8=;\n\tb=YlsMwjyNd2s6t9z5l633jbS0WOONnTnWo9jTHNwwD1hF/nOYPZ6BU69Uuc2rul3nov\n\t1aZ0/hpYukalQBzwBimuT9iRyRKBOVgUfZsXPAeKNvsGVRKYiFF3fQhMbD436VRNPQL5\n\tAY/oi3uw01bJyoMujKGIaMHttGcKen6h+iXiqFNOuuNnanApDcQpqVAclp5Wq2mVDWvx\n\tLu2TLjDMfiZdVDSBfoWV4Uqye12Xd/uSqiJX8O0H/FqkMA3smruIs7mioaw7biA4x0M3\n\tzpy98BIZa6DczJ6OovAxOQ6oIoVUyMHtOnhsTslmHBXlcnZ2mfWLrFqZ/MYgYXMPIMRD\n\tzWlg==", "X-Gm-Message-State": "AG10YOQe90PwoQBzt4cezdhoDrjyLMD8R6eag3Tt8GVVav6xDkmELZZ7k7xn/BsCCUtIidpt", "X-Received": "by 10.98.17.129 with SMTP id 1mr20781016pfr.30.1455913030019;\n\tFri, 19 Feb 2016 12:17:10 -0800 (PST)", "From": "Alexander Duyck <aduyck@mirantis.com>", "To": "intel-wired-lan@lists.osuosl.org", "Date": "Fri, 19 Feb 2016 12:17:08 -0800", "Message-ID": "<20160219201130.16927.24329.stgit@localhost.localdomain>", "User-Agent": "StGit/0.17.1-dirty", "MIME-Version": "1.0", "Subject": "[Intel-wired-lan] [next PATCH v2] i40e/i40evf: Allow up to 12K\n\tbytes of data per Tx descriptor instead of 8K", "X-BeenThere": "intel-wired-lan@lists.osuosl.org", "X-Mailman-Version": "2.1.18-1", "Precedence": "list", "List-Id": "Intel Wired Ethernet Linux Kernel Driver Development\n\t<intel-wired-lan.lists.osuosl.org>", "List-Unsubscribe": "<http://lists.osuosl.org/mailman/options/intel-wired-lan>, \n\t<mailto:intel-wired-lan-request@lists.osuosl.org?subject=unsubscribe>", "List-Archive": "<http://lists.osuosl.org/pipermail/intel-wired-lan/>", "List-Post": "<mailto:intel-wired-lan@lists.osuosl.org>", "List-Help": "<mailto:intel-wired-lan-request@lists.osuosl.org?subject=help>", "List-Subscribe": "<http://lists.osuosl.org/mailman/listinfo/intel-wired-lan>, \n\t<mailto:intel-wired-lan-request@lists.osuosl.org?subject=subscribe>", "Content-Type": "text/plain; charset=\"us-ascii\"", "Content-Transfer-Encoding": "7bit", "Errors-To": "intel-wired-lan-bounces@lists.osuosl.org", "Sender": "\"Intel-wired-lan\" <intel-wired-lan-bounces@lists.osuosl.org>" }, "content": ">From what I can tell the practical limitation on the size of the Tx data\nbuffer is the fact that the Tx descriptor is limited to 14 bits. 
As such\nwe cannot use 16K as is typically used on the other Intel drivers. However\nartificially limiting ourselves to 8K can be expensive as this means that\nwe will consume up to 10 descriptors (1 context, 1 for header, and 9 for\npayload, non-8K aligned) in a single send.\n\nI propose that we can reduce this by increasing the maximum data for a 4K\naligned block to 12K. We can reduce the descriptors used for a 32K aligned\nblock by 1 by increasing the size like this. In addition we still have the\n4K - 1 of space that is still unused. We can use this as a bit of extra\npadding when dealing with data that is not aligned to 4K.\n\nBy aligning the descriptors after the first to 4K we can improve the\nefficiency of PCIe accesses as we can avoid using byte enables and can fetch\nfull TLP transactions after the first fetch of the buffer. This helps to\nimprove PCIe efficiency. Below is the results of testing before and after\nwith this patch:\n\nRecv Send Send Utilization Service Demand\nSocket Socket Message Elapsed Send Recv Send Recv\nSize Size Size Time Throughput local remote local remote\nbytes bytes bytes secs. 10^6bits/s % S % U us/KB us/KB\nBefore:\n87380 16384 16384 10.00 33682.24 20.27 -1.00 0.592 -1.00\nAfter:\n87380 16384 16384 10.00 34204.08 20.54 -1.00 0.590 -1.00\n\nSo the net result of this patch is that we have a small gain in throughput\ndue to a reduction in overhead for putting together the frame.\n\nSigned-off-by: Alexander Duyck <aduyck@mirantis.com>\n---\n\nv2: Fixed build issue when FCoE was enabled for i40e.\n\nTesting-hints:\nThis primarily requires just basic throughput testing for regression.\n\nThe performance improvement can be difficult to see. In order for me to\nbe able to reproduce the results above it was necessary to manually adjust\nthe ITR value down to something on the order of about 25us as otherwise\nother items such as the socket buffer would limit the throughput.\n\n drivers/net/ethernet/intel/i40e/i40e_fcoe.c | 2 +\n drivers/net/ethernet/intel/i40e/i40e_txrx.c | 13 ++++++---\n drivers/net/ethernet/intel/i40e/i40e_txrx.h | 35 +++++++++++++++++++++++--\n drivers/net/ethernet/intel/i40evf/i40e_txrx.c | 13 ++++++---\n drivers/net/ethernet/intel/i40evf/i40e_txrx.h | 35 +++++++++++++++++++++++--\n 5 files changed, 83 insertions(+), 15 deletions(-)", "diff": "diff --git a/drivers/net/ethernet/intel/i40e/i40e_fcoe.c b/drivers/net/ethernet/intel/i40e/i40e_fcoe.c\nindex 8ad162c16f61..92d2208d13c7 100644\n--- a/drivers/net/ethernet/intel/i40e/i40e_fcoe.c\n+++ b/drivers/net/ethernet/intel/i40e/i40e_fcoe.c\n@@ -1371,7 +1371,7 @@ static netdev_tx_t i40e_fcoe_xmit_frame(struct sk_buff *skb,\n \tif (i40e_chk_linearize(skb, count)) {\n \t\tif (__skb_linearize(skb))\n \t\t\tgoto out_drop;\n-\t\tcount = TXD_USE_COUNT(skb->len);\n+\t\tcount = i40e_txd_use_count(skb->len);\n \t\ttx_ring->tx_stats.tx_linearize++;\n \t}\n \ndiff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c\nindex cb52f39d514a..f870b8da4551 100644\n--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c\n+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c\n@@ -2716,6 +2716,8 @@ static inline void i40e_tx_map(struct i40e_ring *tx_ring, struct sk_buff *skb,\n \ttx_bi = first;\n \n \tfor (frag = &skb_shinfo(skb)->frags[0];; frag++) {\n+\t\tunsigned int max_data = I40E_MAX_DATA_PER_TXD_ALIGNED;\n+\n \t\tif (dma_mapping_error(tx_ring->dev, dma))\n \t\t\tgoto dma_error;\n \n@@ -2723,12 +2725,14 @@ static inline void i40e_tx_map(struct i40e_ring *tx_ring, struct 
sk_buff *skb,\n \t\tdma_unmap_len_set(tx_bi, len, size);\n \t\tdma_unmap_addr_set(tx_bi, dma, dma);\n \n+\t\t/* align size to end of page */\n+\t\tmax_data += -dma & (I40E_MAX_READ_REQ_SIZE - 1);\n \t\ttx_desc->buffer_addr = cpu_to_le64(dma);\n \n \t\twhile (unlikely(size > I40E_MAX_DATA_PER_TXD)) {\n \t\t\ttx_desc->cmd_type_offset_bsz =\n \t\t\t\tbuild_ctob(td_cmd, td_offset,\n-\t\t\t\t\t I40E_MAX_DATA_PER_TXD, td_tag);\n+\t\t\t\t\t max_data, td_tag);\n \n \t\t\ttx_desc++;\n \t\t\ti++;\n@@ -2739,9 +2743,10 @@ static inline void i40e_tx_map(struct i40e_ring *tx_ring, struct sk_buff *skb,\n \t\t\t\ti = 0;\n \t\t\t}\n \n-\t\t\tdma += I40E_MAX_DATA_PER_TXD;\n-\t\t\tsize -= I40E_MAX_DATA_PER_TXD;\n+\t\t\tdma += max_data;\n+\t\t\tsize -= max_data;\n \n+\t\t\tmax_data = I40E_MAX_DATA_PER_TXD_ALIGNED;\n \t\t\ttx_desc->buffer_addr = cpu_to_le64(dma);\n \t\t}\n \n@@ -2891,7 +2896,7 @@ static netdev_tx_t i40e_xmit_frame_ring(struct sk_buff *skb,\n \tif (i40e_chk_linearize(skb, count)) {\n \t\tif (__skb_linearize(skb))\n \t\t\tgoto out_drop;\n-\t\tcount = TXD_USE_COUNT(skb->len);\n+\t\tcount = i40e_txd_use_count(skb->len);\n \t\ttx_ring->tx_stats.tx_linearize++;\n \t}\n \ndiff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h\nindex 8a3a163cc475..8b049cd77064 100644\n--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h\n+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h\n@@ -146,10 +146,39 @@ enum i40e_dyn_idx_t {\n \n #define I40E_MAX_BUFFER_TXD\t8\n #define I40E_MIN_TX_LEN\t\t17\n-#define I40E_MAX_DATA_PER_TXD\t8192\n+\n+/* The size limit for a transmit buffer in a descriptor is (16K - 1).\n+ * In order to align with the read requests we will align the value to\n+ * the nearest 4K which represents our maximum read request size.\n+ */\n+#define I40E_MAX_READ_REQ_SIZE\t\t4096\n+#define I40E_MAX_DATA_PER_TXD\t\t(16 * 1024 - 1)\n+#define I40E_MAX_DATA_PER_TXD_ALIGNED \\\n+\t(I40E_MAX_DATA_PER_TXD & ~(I40E_MAX_READ_REQ_SIZE - 1))\n+\n+/* This ugly bit of math is equivilent to DIV_ROUNDUP(size, X) where X is\n+ * the value I40E_MAX_DATA_PER_TXD_ALIGNED. It is needed due to the fact\n+ * that 12K is not a power of 2 and division is expensive. It is used to\n+ * approximate the number of descriptors used per linear buffer. 
Note\n+ * that this will overestimate in some cases as it doesn't account for the\n+ * fact that we will add up to 4K - 1 in aligning the 12K buffer, however\n+ * the error should not impact things much as large buffers usually mean\n+ * we will use fewer descriptors then there are frags in an skb.\n+ */\n+static inline unsigned int i40e_txd_use_count(unsigned int size)\n+{\n+\tconst unsigned int max = I40E_MAX_DATA_PER_TXD_ALIGNED;\n+\tconst unsigned int reciprocal = ((1ull << 32) - 1 + (max / 2)) / max;\n+\tunsigned int adjust = ~(u32)0;\n+\n+\t/* if we rounded up on the reciprprocal pull down the adjustment */\n+\tif ((max * reciprocal) > adjust)\n+\t\tadjust = ~(u32)(reciprocal - 1);\n+\n+\treturn (u32)((((u64)size * reciprocal) + adjust) >> 32);\n+}\n \n /* Tx Descriptors needed, worst case */\n-#define TXD_USE_COUNT(S) DIV_ROUND_UP((S), I40E_MAX_DATA_PER_TXD)\n #define DESC_NEEDED (MAX_SKB_FRAGS + 4)\n #define I40E_MIN_DESC_PENDING\t4\n \n@@ -369,7 +398,7 @@ static inline int i40e_xmit_descriptor_count(struct sk_buff *skb)\n \tint count = 0, size = skb_headlen(skb);\n \n \tfor (;;) {\n-\t\tcount += TXD_USE_COUNT(size);\n+\t\tcount += i40e_txd_use_count(size);\n \n \t\tif (!nr_frags--)\n \t\t\tbreak;\ndiff --git a/drivers/net/ethernet/intel/i40evf/i40e_txrx.c b/drivers/net/ethernet/intel/i40evf/i40e_txrx.c\nindex ebcc25c05796..5f9c1bbab1fa 100644\n--- a/drivers/net/ethernet/intel/i40evf/i40e_txrx.c\n+++ b/drivers/net/ethernet/intel/i40evf/i40e_txrx.c\n@@ -1936,6 +1936,8 @@ static inline void i40evf_tx_map(struct i40e_ring *tx_ring, struct sk_buff *skb,\n \ttx_bi = first;\n \n \tfor (frag = &skb_shinfo(skb)->frags[0];; frag++) {\n+\t\tunsigned int max_data = I40E_MAX_DATA_PER_TXD_ALIGNED;\n+\n \t\tif (dma_mapping_error(tx_ring->dev, dma))\n \t\t\tgoto dma_error;\n \n@@ -1943,12 +1945,14 @@ static inline void i40evf_tx_map(struct i40e_ring *tx_ring, struct sk_buff *skb,\n \t\tdma_unmap_len_set(tx_bi, len, size);\n \t\tdma_unmap_addr_set(tx_bi, dma, dma);\n \n+\t\t/* align size to end of page */\n+\t\tmax_data += -dma & (I40E_MAX_READ_REQ_SIZE - 1);\n \t\ttx_desc->buffer_addr = cpu_to_le64(dma);\n \n \t\twhile (unlikely(size > I40E_MAX_DATA_PER_TXD)) {\n \t\t\ttx_desc->cmd_type_offset_bsz =\n \t\t\t\tbuild_ctob(td_cmd, td_offset,\n-\t\t\t\t\t I40E_MAX_DATA_PER_TXD, td_tag);\n+\t\t\t\t\t max_data, td_tag);\n \n \t\t\ttx_desc++;\n \t\t\ti++;\n@@ -1959,9 +1963,10 @@ static inline void i40evf_tx_map(struct i40e_ring *tx_ring, struct sk_buff *skb,\n \t\t\t\ti = 0;\n \t\t\t}\n \n-\t\t\tdma += I40E_MAX_DATA_PER_TXD;\n-\t\t\tsize -= I40E_MAX_DATA_PER_TXD;\n+\t\t\tdma += max_data;\n+\t\t\tsize -= max_data;\n \n+\t\t\tmax_data = I40E_MAX_DATA_PER_TXD_ALIGNED;\n \t\t\ttx_desc->buffer_addr = cpu_to_le64(dma);\n \t\t}\n \n@@ -2110,7 +2115,7 @@ static netdev_tx_t i40e_xmit_frame_ring(struct sk_buff *skb,\n \tif (i40e_chk_linearize(skb, count)) {\n \t\tif (__skb_linearize(skb))\n \t\t\tgoto out_drop;\n-\t\tcount = TXD_USE_COUNT(skb->len);\n+\t\tcount = i40e_txd_use_count(skb->len);\n \t\ttx_ring->tx_stats.tx_linearize++;\n \t}\n \ndiff --git a/drivers/net/ethernet/intel/i40evf/i40e_txrx.h b/drivers/net/ethernet/intel/i40evf/i40e_txrx.h\nindex 8c5da4f89fd0..34096b1e4782 100644\n--- a/drivers/net/ethernet/intel/i40evf/i40e_txrx.h\n+++ b/drivers/net/ethernet/intel/i40evf/i40e_txrx.h\n@@ -146,10 +146,39 @@ enum i40e_dyn_idx_t {\n \n #define I40E_MAX_BUFFER_TXD\t8\n #define I40E_MIN_TX_LEN\t\t17\n-#define I40E_MAX_DATA_PER_TXD\t8192\n+\n+/* The size limit for a transmit buffer in a descriptor is (16K - 
1).\n+ * In order to align with the read requests we will align the value to\n+ * the nearest 4K which represents our maximum read request size.\n+ */\n+#define I40E_MAX_READ_REQ_SIZE\t\t4096\n+#define I40E_MAX_DATA_PER_TXD\t\t(16 * 1024 - 1)\n+#define I40E_MAX_DATA_PER_TXD_ALIGNED \\\n+\t(I40E_MAX_DATA_PER_TXD & ~(I40E_MAX_READ_REQ_SIZE - 1))\n+\n+/* This ugly bit of math is equivilent to DIV_ROUNDUP(size, X) where X is\n+ * the value I40E_MAX_DATA_PER_TXD_ALIGNED. It is needed due to the fact\n+ * that 12K is not a power of 2 and division is expensive. It is used to\n+ * approximate the number of descriptors used per linear buffer. Note\n+ * that this will overestimate in some cases as it doesn't account for the\n+ * fact that we will add up to 4K - 1 in aligning the 12K buffer, however\n+ * the error should not impact things much as large buffers usually mean\n+ * we will use fewer descriptors then there are frags in an skb.\n+ */\n+static inline unsigned int i40e_txd_use_count(unsigned int size)\n+{\n+\tconst unsigned int max = I40E_MAX_DATA_PER_TXD_ALIGNED;\n+\tconst unsigned int reciprocal = ((1ull << 32) - 1 + (max / 2)) / max;\n+\tunsigned int adjust = ~(u32)0;\n+\n+\t/* if we rounded up on the reciprprocal pull down the adjustment */\n+\tif ((max * reciprocal) > adjust)\n+\t\tadjust = ~(u32)(reciprocal - 1);\n+\n+\treturn (u32)((((u64)size * reciprocal) + adjust) >> 32);\n+}\n \n /* Tx Descriptors needed, worst case */\n-#define TXD_USE_COUNT(S) DIV_ROUND_UP((S), I40E_MAX_DATA_PER_TXD)\n #define DESC_NEEDED (MAX_SKB_FRAGS + 4)\n #define I40E_MIN_DESC_PENDING\t4\n \n@@ -359,7 +388,7 @@ static inline int i40e_xmit_descriptor_count(struct sk_buff *skb)\n \tint count = 0, size = skb_headlen(skb);\n \n \tfor (;;) {\n-\t\tcount += TXD_USE_COUNT(size);\n+\t\tcount += i40e_txd_use_count(size);\n \n \t\tif (!nr_frags--)\n \t\t\tbreak;\n", "prefixes": [ "next", "v2" ] }
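The core of the change is how i40e_tx_map()/i40evf_tx_map() now carve a mapped buffer: the first chunk is padded out to the next 4K boundary (up to 12K plus 4K - 1 bytes, still within the 14-bit descriptor limit), and every later chunk is a 4K-aligned 12K block, so PCIe reads after the first can be full, aligned TLPs. The user-space sketch below mirrors that loop; count_descriptors() and the example address are hypothetical stand-ins, only the arithmetic is taken from the hunks above.

/*
 * User-space illustration of the descriptor carving done by the patch.
 * Build with: cc carve.c
 */
#include <stdint.h>
#include <stdio.h>

#define I40E_MAX_READ_REQ_SIZE		4096u
#define I40E_MAX_DATA_PER_TXD		(16u * 1024 - 1)
#define I40E_MAX_DATA_PER_TXD_ALIGNED \
	(I40E_MAX_DATA_PER_TXD & ~(I40E_MAX_READ_REQ_SIZE - 1))	/* 12288 */

static unsigned int count_descriptors(uint64_t dma, unsigned int size)
{
	unsigned int max_data = I40E_MAX_DATA_PER_TXD_ALIGNED;
	unsigned int used = 0;

	/* pad the first chunk so the next one starts on a 4K boundary */
	max_data += -dma & (I40E_MAX_READ_REQ_SIZE - 1);

	while (size > I40E_MAX_DATA_PER_TXD) {
		printf("  desc %u: %u bytes at 0x%llx\n",
		       used++, max_data, (unsigned long long)dma);
		dma += max_data;
		size -= max_data;
		max_data = I40E_MAX_DATA_PER_TXD_ALIGNED;
	}
	printf("  desc %u: %u bytes at 0x%llx\n",
	       used++, size, (unsigned long long)dma);
	return used;
}

int main(void)
{
	unsigned int used;

	/* 32K payload that starts 100 bytes into a 4K page */
	used = count_descriptors(0x1000064ull, 32768);
	printf("descriptors used: %u\n", used);
	return 0;
}

For a 32K payload starting 100 bytes into a page this prints chunks of 16284, 12288 and 4196 bytes: three descriptors where the old 8K cap needed four, matching the saving described in the commit message.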
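The new i40e_txd_use_count() helper in the diff replaces DIV_ROUND_UP(size, 12K) with a reciprocal multiply, since 12K is not a power of two and a divide per linear buffer is comparatively expensive. Below is a hedged user-space copy with the kernel's u32/u64 swapped for stdint types; main() simply checks that, for frag-sized inputs, the approximation agrees with an exact round-up division.

/*
 * User-space check of the reciprocal-divide helper added by the patch.
 * Kernel u32/u64 are replaced with stdint types; everything else is
 * copied from the hunks above.  Build with: cc txd_use_count.c
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define I40E_MAX_READ_REQ_SIZE		4096u
#define I40E_MAX_DATA_PER_TXD		(16u * 1024 - 1)
#define I40E_MAX_DATA_PER_TXD_ALIGNED \
	(I40E_MAX_DATA_PER_TXD & ~(I40E_MAX_READ_REQ_SIZE - 1))	/* 12288 */

static inline unsigned int i40e_txd_use_count(unsigned int size)
{
	const unsigned int max = I40E_MAX_DATA_PER_TXD_ALIGNED;
	const unsigned int reciprocal = ((1ull << 32) - 1 + (max / 2)) / max;
	unsigned int adjust = ~(uint32_t)0;

	/* if we rounded up on the reciprocal pull down the adjustment */
	if ((max * reciprocal) > adjust)
		adjust = ~(uint32_t)(reciprocal - 1);

	return (uint32_t)((((uint64_t)size * reciprocal) + adjust) >> 32);
}

int main(void)
{
	unsigned int size;

	/* frag-sized inputs: the approximation matches exact round-up */
	for (size = 0; size <= 65536; size++)
		assert(i40e_txd_use_count(size) ==
		       (size + I40E_MAX_DATA_PER_TXD_ALIGNED - 1) /
		       I40E_MAX_DATA_PER_TXD_ALIGNED);

	printf("12K frag -> %u descriptor(s)\n", i40e_txd_use_count(12288));
	printf("32K frag -> %u descriptor(s)\n", i40e_txd_use_count(32768));
	return 0;
}

The overestimation the in-tree comment warns about is relative to the descriptors actually consumed (the end-of-page padding can make the real split cheaper than this worst case), rather than an error in the arithmetic for the sizes the driver passes in.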