Patch Detail

GET: Show a patch.
PATCH: Partially update a patch (only the supplied fields change).
PUT: Update a patch.
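The operations above map onto plain HTTP verbs against the patch-detail URL. A minimal stdlib-only sketch of driving them (the URL shape is taken from the example request below; the `Token` authorization header and the use of JSON bodies for updates are assumptions based on typical Patchwork deployments — check your instance's API docs):

```python
# Sketch of a tiny Patchwork patch-detail client using only the stdlib.
# BASE_URL and the "Token" auth scheme are assumptions for illustration.
import json
import urllib.request

BASE_URL = "http://patchwork.ozlabs.org/api"


def patch_url(patch_id: int) -> str:
    """Build the detail URL for a single patch."""
    return f"{BASE_URL}/patches/{patch_id}/"


def show_patch(patch_id: int) -> dict:
    """GET: fetch one patch as JSON (read-only, no auth required)."""
    with urllib.request.urlopen(patch_url(patch_id)) as resp:
        return json.load(resp)


def update_patch(patch_id: int, token: str, **fields) -> dict:
    """PATCH: change only the supplied fields (requires maintainer auth)."""
    req = urllib.request.Request(
        patch_url(patch_id),
        data=json.dumps(fields).encode(),
        method="PATCH",
        headers={
            "Content-Type": "application/json",
            # Token auth is an assumption; some instances differ.
            "Authorization": f"Token {token}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

A `PUT` request would look the same but must carry every writable field, not just the changed ones.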
Example:

GET /api/patches/584321/?format=api
{ "id": 584321, "url": "http://patchwork.ozlabs.org/api/patches/584321/?format=api", "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20160217190243.10339.65965.stgit@localhost.localdomain/", "project": { "id": 46, "url": "http://patchwork.ozlabs.org/api/projects/46/?format=api", "name": "Intel Wired Ethernet development", "link_name": "intel-wired-lan", "list_id": "intel-wired-lan.osuosl.org", "list_email": "intel-wired-lan@osuosl.org", "web_url": "", "scm_url": "", "webscm_url": "", "list_archive_url": "", "list_archive_url_format": "", "commit_url_format": "" }, "msgid": "<20160217190243.10339.65965.stgit@localhost.localdomain>", "list_archive_url": null, "date": "2016-02-17T19:02:43", "name": "[next,1/4] i40e/i40evf: Break up xmit_descriptor_count from maybe_stop_tx", "commit_ref": null, "pull_url": null, "state": "accepted", "archived": false, "hash": "2ae6da1f6653607cb6a4acd4fd11899b34294f34", "submitter": { "id": 67293, "url": "http://patchwork.ozlabs.org/api/people/67293/?format=api", "name": "Alexander Duyck", "email": "aduyck@mirantis.com" }, "delegate": { "id": 68, "url": "http://patchwork.ozlabs.org/api/users/68/?format=api", "username": "jtkirshe", "first_name": "Jeff", "last_name": "Kirsher", "email": "jeffrey.t.kirsher@intel.com" }, "mbox": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20160217190243.10339.65965.stgit@localhost.localdomain/mbox/", "series": [], "comments": "http://patchwork.ozlabs.org/api/patches/584321/comments/", "check": "pending", "checks": "http://patchwork.ozlabs.org/api/patches/584321/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<intel-wired-lan-bounces@lists.osuosl.org>", "X-Original-To": [ "incoming@patchwork.ozlabs.org", "intel-wired-lan@lists.osuosl.org" ], "Delivered-To": [ "patchwork-incoming@bilbo.ozlabs.org", "intel-wired-lan@lists.osuosl.org" ], "Received": [ "from fraxinus.osuosl.org (smtp4.osuosl.org [140.211.166.137])\n\tby ozlabs.org (Postfix) with ESMTP id 
788581401CA\n\tfor <incoming@patchwork.ozlabs.org>;\n\tThu, 18 Feb 2016 06:02:50 +1100 (AEDT)", "from localhost (localhost [127.0.0.1])\n\tby fraxinus.osuosl.org (Postfix) with ESMTP id C567DA5E42;\n\tWed, 17 Feb 2016 19:02:49 +0000 (UTC)", "from fraxinus.osuosl.org ([127.0.0.1])\n\tby localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id 66S-wb-RiIqI; Wed, 17 Feb 2016 19:02:48 +0000 (UTC)", "from ash.osuosl.org (ash.osuosl.org [140.211.166.34])\n\tby fraxinus.osuosl.org (Postfix) with ESMTP id B3908A5DFF;\n\tWed, 17 Feb 2016 19:02:48 +0000 (UTC)", "from whitealder.osuosl.org (smtp1.osuosl.org [140.211.166.138])\n\tby ash.osuosl.org (Postfix) with ESMTP id D29CE1C0BC2\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tWed, 17 Feb 2016 19:02:46 +0000 (UTC)", "from localhost (localhost [127.0.0.1])\n\tby whitealder.osuosl.org (Postfix) with ESMTP id CF1929219D\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tWed, 17 Feb 2016 19:02:46 +0000 (UTC)", "from whitealder.osuosl.org ([127.0.0.1])\n\tby localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id 7q11CMvCiXP1 for <intel-wired-lan@lists.osuosl.org>;\n\tWed, 17 Feb 2016 19:02:45 +0000 (UTC)", "from mail-pa0-f42.google.com (mail-pa0-f42.google.com\n\t[209.85.220.42])\n\tby whitealder.osuosl.org (Postfix) with ESMTPS id 3A00B92180\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tWed, 17 Feb 2016 19:02:45 +0000 (UTC)", "by mail-pa0-f42.google.com with SMTP id fy10so15878369pac.1\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tWed, 17 Feb 2016 11:02:45 -0800 (PST)", "from localhost.localdomain\n\t(static-50-53-29-36.bvtn.or.frontiernet.net. 
[50.53.29.36])\n\tby smtp.gmail.com with ESMTPSA id\n\ta21sm4441660pfj.40.2016.02.17.11.02.44\n\tfor <intel-wired-lan@lists.osuosl.org>\n\t(version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);\n\tWed, 17 Feb 2016 11:02:44 -0800 (PST)" ], "Authentication-Results": "ozlabs.org;\n\tdkim=fail reason=\"signature verification failed\" (1024-bit key;\n\tunprotected) header.d=mirantis.com header.i=@mirantis.com\n\theader.b=QV22xx1U; dkim-atps=neutral", "X-Virus-Scanned": [ "amavisd-new at osuosl.org", "amavisd-new at osuosl.org" ], "X-Greylist": "from auto-whitelisted by SQLgrey-1.7.6", "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed; d=mirantis.com;\n\ts=google; \n\th=subject:from:to:date:message-id:in-reply-to:references:user-agent\n\t:mime-version:content-type:content-transfer-encoding;\n\tbh=9QptQ8DIreSQYCOF+M64LPWqbvIl5tWdbwvuFHzotjs=;\n\tb=QV22xx1URqdBVx+4rLRJz+oVPxMsRwFm6sys/uVL+KuovGR14gNVKVlRivE9yZuQp0\n\txfDun77NC+mtoGHOSP+B37x/hgdyAxngLbVzpUD0vyyNH5Sd1AQf6entoiLU1KMmpPVR\n\tuXE5bU37Q5b7NlUXJ07Cojgtn8FparG4sEi3w=", "X-Google-DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=1e100.net; s=20130820;\n\th=x-gm-message-state:subject:from:to:date:message-id:in-reply-to\n\t:references:user-agent:mime-version:content-type\n\t:content-transfer-encoding;\n\tbh=9QptQ8DIreSQYCOF+M64LPWqbvIl5tWdbwvuFHzotjs=;\n\tb=ZRbRmwRrWt9JKbhaNwOLab4nRqMw3M/qNj5H+oI8IlyqZUDww3U1fi/SNh/daUegwB\n\tFoZhfQsut1Thhkz39ScRK3zv60ru6PfWI9HBwmyhPNID3nPKMIu3uzJvVOAe2GGYDHOS\n\t1Jg3h1UgaIoeqzqMTMRFYga3H6D/6xWv/9sIAzdp0y1grid+bqPV/jhbBu5vRXJnVQ0t\n\tY+Y7RsXNiQY16OTy50Nyo4QTob65CefXyT6/c2JiJj8EScbEjCm+ILK56ghci0yBAtce\n\tNI8ZqhA9+2iQ4Urm4C50tQmGuD6a+zSYYqxkyduJn0tpuESaCWBBdeeRFuztstTKv1vr\n\toI5A==", "X-Gm-Message-State": "AG10YOSWnMIbcikZTNwhzVv5CZVyMNNzMn2/nEedKLUcCiFjwDfCSUOPJr4n6wzluXQLePtt", "X-Received": "by 10.66.100.228 with SMTP id fb4mr4336648pab.84.1455735764920; \n\tWed, 17 Feb 2016 11:02:44 -0800 (PST)", "From": "Alexander Duyck <aduyck@mirantis.com>", "To": 
"intel-wired-lan@lists.osuosl.org", "Date": "Wed, 17 Feb 2016 11:02:43 -0800", "Message-ID": "<20160217190243.10339.65965.stgit@localhost.localdomain>", "In-Reply-To": "<20160217185838.10339.68543.stgit@localhost.localdomain>", "References": "<20160217185838.10339.68543.stgit@localhost.localdomain>", "User-Agent": "StGit/0.17.1-dirty", "MIME-Version": "1.0", "Subject": "[Intel-wired-lan] [next PATCH 1/4] i40e/i40evf: Break up\n\txmit_descriptor_count from maybe_stop_tx", "X-BeenThere": "intel-wired-lan@lists.osuosl.org", "X-Mailman-Version": "2.1.18-1", "Precedence": "list", "List-Id": "Intel Wired Ethernet Linux Kernel Driver Development\n\t<intel-wired-lan.lists.osuosl.org>", "List-Unsubscribe": "<http://lists.osuosl.org/mailman/options/intel-wired-lan>, \n\t<mailto:intel-wired-lan-request@lists.osuosl.org?subject=unsubscribe>", "List-Archive": "<http://lists.osuosl.org/pipermail/intel-wired-lan/>", "List-Post": "<mailto:intel-wired-lan@lists.osuosl.org>", "List-Help": "<mailto:intel-wired-lan-request@lists.osuosl.org?subject=help>", "List-Subscribe": "<http://lists.osuosl.org/mailman/listinfo/intel-wired-lan>, \n\t<mailto:intel-wired-lan-request@lists.osuosl.org?subject=subscribe>", "Content-Type": "text/plain; charset=\"us-ascii\"", "Content-Transfer-Encoding": "7bit", "Errors-To": "intel-wired-lan-bounces@lists.osuosl.org", "Sender": "\"Intel-wired-lan\" <intel-wired-lan-bounces@lists.osuosl.org>" }, "content": "In an upcoming patch I would like to have access to the descriptor count\nused for the data portion of the frame. 
For this reason I am splitting up\nthe descriptor count function from the function that stops the ring.\n\nAlso in order to try and reduce unnecessary duplication of code I am moving\nthe slow-path portions of the code out of being inline calls so that we can\njust jump to them and process them instead of having to build them into\neach function that calls them.\n\nSigned-off-by: Alexander Duyck <aduyck@mirantis.com>\n---\n drivers/net/ethernet/intel/i40e/i40e_fcoe.c | 14 ++++-\n drivers/net/ethernet/intel/i40e/i40e_txrx.c | 71 +++++--------------------\n drivers/net/ethernet/intel/i40e/i40e_txrx.h | 44 +++++++++++++++\n drivers/net/ethernet/intel/i40evf/i40e_txrx.c | 64 +++++------------------\n drivers/net/ethernet/intel/i40evf/i40e_txrx.h | 42 +++++++++++++++\n 5 files changed, 123 insertions(+), 112 deletions(-)", "diff": "diff --git a/drivers/net/ethernet/intel/i40e/i40e_fcoe.c b/drivers/net/ethernet/intel/i40e/i40e_fcoe.c\nindex 7c66ce416ec7..518d72ea1059 100644\n--- a/drivers/net/ethernet/intel/i40e/i40e_fcoe.c\n+++ b/drivers/net/ethernet/intel/i40e/i40e_fcoe.c\n@@ -1359,16 +1359,26 @@ static netdev_tx_t i40e_fcoe_xmit_frame(struct sk_buff *skb,\n \tstruct i40e_ring *tx_ring = vsi->tx_rings[skb->queue_mapping];\n \tstruct i40e_tx_buffer *first;\n \tu32 tx_flags = 0;\n+\tint fso, count;\n \tu8 hdr_len = 0;\n \tu8 sof = 0;\n \tu8 eof = 0;\n-\tint fso;\n \n \tif (i40e_fcoe_set_skb_header(skb))\n \t\tgoto out_drop;\n \n-\tif (!i40e_xmit_descriptor_count(skb, tx_ring))\n+\tcount = i40e_xmit_descriptor_count(skb);\n+\n+\t/* need: 1 descriptor per page * PAGE_SIZE/I40E_MAX_DATA_PER_TXD,\n+\t * + 1 desc for skb_head_len/I40E_MAX_DATA_PER_TXD,\n+\t * + 4 desc gap to avoid the cache line where head is,\n+\t * + 1 desc for context descriptor,\n+\t * otherwise try next time\n+\t */\n+\tif (i40e_maybe_stop_tx(tx_ring, count + 4 + 1)) {\n+\t\ttx_ring->tx_stats.tx_busy++;\n \t\treturn NETDEV_TX_BUSY;\n+\t}\n \n \t/* prepare the xmit flags */\n \tif 
(i40e_tx_prepare_vlan_flags(skb, tx_ring, &tx_flags))\ndiff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c\nindex 1d3afa7dda18..f03657022b0f 100644\n--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c\n+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c\n@@ -2576,7 +2576,7 @@ static void i40e_create_tx_ctx(struct i40e_ring *tx_ring,\n *\n * Returns -EBUSY if a stop is needed, else 0\n **/\n-static inline int __i40e_maybe_stop_tx(struct i40e_ring *tx_ring, int size)\n+int __i40e_maybe_stop_tx(struct i40e_ring *tx_ring, int size)\n {\n \tnetif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index);\n \t/* Memory barrier before checking head and tail */\n@@ -2593,24 +2593,6 @@ static inline int __i40e_maybe_stop_tx(struct i40e_ring *tx_ring, int size)\n }\n \n /**\n- * i40e_maybe_stop_tx - 1st level check for tx stop conditions\n- * @tx_ring: the ring to be checked\n- * @size: the size buffer we want to assure is available\n- *\n- * Returns 0 if stop is not needed\n- **/\n-#ifdef I40E_FCOE\n-inline int i40e_maybe_stop_tx(struct i40e_ring *tx_ring, int size)\n-#else\n-static inline int i40e_maybe_stop_tx(struct i40e_ring *tx_ring, int size)\n-#endif\n-{\n-\tif (likely(I40E_DESC_UNUSED(tx_ring) >= size))\n-\t\treturn 0;\n-\treturn __i40e_maybe_stop_tx(tx_ring, size);\n-}\n-\n-/**\n * i40e_chk_linearize - Check if there are more than 8 fragments per packet\n * @skb: send buffer\n * @tx_flags: collected send information\n@@ -2870,43 +2852,6 @@ dma_error:\n }\n \n /**\n- * i40e_xmit_descriptor_count - calculate number of tx descriptors needed\n- * @skb: send buffer\n- * @tx_ring: ring to send buffer on\n- *\n- * Returns number of data descriptors needed for this skb. 
Returns 0 to indicate\n- * there is not enough descriptors available in this ring since we need at least\n- * one descriptor.\n- **/\n-#ifdef I40E_FCOE\n-inline int i40e_xmit_descriptor_count(struct sk_buff *skb,\n-\t\t\t\t struct i40e_ring *tx_ring)\n-#else\n-static inline int i40e_xmit_descriptor_count(struct sk_buff *skb,\n-\t\t\t\t\t struct i40e_ring *tx_ring)\n-#endif\n-{\n-\tunsigned int f;\n-\tint count = 0;\n-\n-\t/* need: 1 descriptor per page * PAGE_SIZE/I40E_MAX_DATA_PER_TXD,\n-\t * + 1 desc for skb_head_len/I40E_MAX_DATA_PER_TXD,\n-\t * + 4 desc gap to avoid the cache line where head is,\n-\t * + 1 desc for context descriptor,\n-\t * otherwise try next time\n-\t */\n-\tfor (f = 0; f < skb_shinfo(skb)->nr_frags; f++)\n-\t\tcount += TXD_USE_COUNT(skb_shinfo(skb)->frags[f].size);\n-\n-\tcount += TXD_USE_COUNT(skb_headlen(skb));\n-\tif (i40e_maybe_stop_tx(tx_ring, count + 4 + 1)) {\n-\t\ttx_ring->tx_stats.tx_busy++;\n-\t\treturn 0;\n-\t}\n-\treturn count;\n-}\n-\n-/**\n * i40e_xmit_frame_ring - Sends buffer on Tx ring\n * @skb: send buffer\n * @tx_ring: ring to send buffer on\n@@ -2924,14 +2869,24 @@ static netdev_tx_t i40e_xmit_frame_ring(struct sk_buff *skb,\n \t__be16 protocol;\n \tu32 td_cmd = 0;\n \tu8 hdr_len = 0;\n+\tint tso, count;\n \tint tsyn;\n-\tint tso;\n \n \t/* prefetch the data, we'll need it later */\n \tprefetch(skb->data);\n \n-\tif (0 == i40e_xmit_descriptor_count(skb, tx_ring))\n+\tcount = i40e_xmit_descriptor_count(skb);\n+\n+\t/* need: 1 descriptor per page * PAGE_SIZE/I40E_MAX_DATA_PER_TXD,\n+\t * + 1 desc for skb_head_len/I40E_MAX_DATA_PER_TXD,\n+\t * + 4 desc gap to avoid the cache line where head is,\n+\t * + 1 desc for context descriptor,\n+\t * otherwise try next time\n+\t */\n+\tif (i40e_maybe_stop_tx(tx_ring, count + 4 + 1)) {\n+\t\ttx_ring->tx_stats.tx_busy++;\n \t\treturn NETDEV_TX_BUSY;\n+\t}\n \n \t/* prepare the xmit flags */\n \tif (i40e_tx_prepare_vlan_flags(skb, tx_ring, &tx_flags))\ndiff --git 
a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h\nindex fde5f42524fb..48a2ab8a8ec7 100644\n--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h\n+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h\n@@ -331,13 +331,12 @@ int i40e_napi_poll(struct napi_struct *napi, int budget);\n void i40e_tx_map(struct i40e_ring *tx_ring, struct sk_buff *skb,\n \t\t struct i40e_tx_buffer *first, u32 tx_flags,\n \t\t const u8 hdr_len, u32 td_cmd, u32 td_offset);\n-int i40e_maybe_stop_tx(struct i40e_ring *tx_ring, int size);\n-int i40e_xmit_descriptor_count(struct sk_buff *skb, struct i40e_ring *tx_ring);\n int i40e_tx_prepare_vlan_flags(struct sk_buff *skb,\n \t\t\t struct i40e_ring *tx_ring, u32 *flags);\n #endif\n void i40e_force_wb(struct i40e_vsi *vsi, struct i40e_q_vector *q_vector);\n u32 i40e_get_tx_pending(struct i40e_ring *ring, bool in_sw);\n+int __i40e_maybe_stop_tx(struct i40e_ring *tx_ring, int size);\n \n /**\n * i40e_get_head - Retrieve head from head writeback\n@@ -352,4 +351,45 @@ static inline u32 i40e_get_head(struct i40e_ring *tx_ring)\n \n \treturn le32_to_cpu(*(volatile __le32 *)head);\n }\n+\n+/**\n+ * i40e_xmit_descriptor_count - calculate number of tx descriptors needed\n+ * @skb: send buffer\n+ * @tx_ring: ring to send buffer on\n+ *\n+ * Returns number of data descriptors needed for this skb. 
Returns 0 to indicate\n+ * there is not enough descriptors available in this ring since we need at least\n+ * one descriptor.\n+ **/\n+static inline int i40e_xmit_descriptor_count(struct sk_buff *skb)\n+{\n+\tconst struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[0];\n+\tunsigned int nr_frags = skb_shinfo(skb)->nr_frags;\n+\tint count = 0, size = skb_headlen(skb);\n+\n+\tfor (;;) {\n+\t\tcount += TXD_USE_COUNT(size);\n+\n+\t\tif (!nr_frags--)\n+\t\t\tbreak;\n+\n+\t\tsize = skb_frag_size(frag++);\n+\t}\n+\n+\treturn count;\n+}\n+\n+/**\n+ * i40e_maybe_stop_tx - 1st level check for tx stop conditions\n+ * @tx_ring: the ring to be checked\n+ * @size: the size buffer we want to assure is available\n+ *\n+ * Returns 0 if stop is not needed\n+ **/\n+static inline int i40e_maybe_stop_tx(struct i40e_ring *tx_ring, int size)\n+{\n+\tif (likely(I40E_DESC_UNUSED(tx_ring) >= size))\n+\t\treturn 0;\n+\treturn __i40e_maybe_stop_tx(tx_ring, size);\n+}\n #endif /* _I40E_TXRX_H_ */\ndiff --git a/drivers/net/ethernet/intel/i40evf/i40e_txrx.c b/drivers/net/ethernet/intel/i40evf/i40e_txrx.c\nindex 16589c0b79a3..78d9ce4693c6 100644\n--- a/drivers/net/ethernet/intel/i40evf/i40e_txrx.c\n+++ b/drivers/net/ethernet/intel/i40evf/i40e_txrx.c\n@@ -1856,7 +1856,7 @@ linearize_chk_done:\n *\n * Returns -EBUSY if a stop is needed, else 0\n **/\n-static inline int __i40evf_maybe_stop_tx(struct i40e_ring *tx_ring, int size)\n+int __i40evf_maybe_stop_tx(struct i40e_ring *tx_ring, int size)\n {\n \tnetif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index);\n \t/* Memory barrier before checking head and tail */\n@@ -1873,20 +1873,6 @@ static inline int __i40evf_maybe_stop_tx(struct i40e_ring *tx_ring, int size)\n }\n \n /**\n- * i40evf_maybe_stop_tx - 1st level check for tx stop conditions\n- * @tx_ring: the ring to be checked\n- * @size: the size buffer we want to assure is available\n- *\n- * Returns 0 if stop is not needed\n- **/\n-static inline int i40evf_maybe_stop_tx(struct i40e_ring 
*tx_ring, int size)\n-{\n-\tif (likely(I40E_DESC_UNUSED(tx_ring) >= size))\n-\t\treturn 0;\n-\treturn __i40evf_maybe_stop_tx(tx_ring, size);\n-}\n-\n-/**\n * i40evf_tx_map - Build the Tx descriptor\n * @tx_ring: ring to send buffer on\n * @skb: send buffer\n@@ -2001,7 +1987,7 @@ static inline void i40evf_tx_map(struct i40e_ring *tx_ring, struct sk_buff *skb,\n \tnetdev_tx_sent_queue(netdev_get_tx_queue(tx_ring->netdev,\n \t\t\t\t\t\t tx_ring->queue_index),\n \t\t\t\t\t\t first->bytecount);\n-\ti40evf_maybe_stop_tx(tx_ring, DESC_NEEDED);\n+\ti40e_maybe_stop_tx(tx_ring, DESC_NEEDED);\n \n \t/* Algorithm to optimize tail and RS bit setting:\n \t * if xmit_more is supported\n@@ -2084,38 +2070,6 @@ dma_error:\n }\n \n /**\n- * i40evf_xmit_descriptor_count - calculate number of tx descriptors needed\n- * @skb: send buffer\n- * @tx_ring: ring to send buffer on\n- *\n- * Returns number of data descriptors needed for this skb. Returns 0 to indicate\n- * there is not enough descriptors available in this ring since we need at least\n- * one descriptor.\n- **/\n-static inline int i40evf_xmit_descriptor_count(struct sk_buff *skb,\n-\t\t\t\t\t struct i40e_ring *tx_ring)\n-{\n-\tunsigned int f;\n-\tint count = 0;\n-\n-\t/* need: 1 descriptor per page * PAGE_SIZE/I40E_MAX_DATA_PER_TXD,\n-\t * + 1 desc for skb_head_len/I40E_MAX_DATA_PER_TXD,\n-\t * + 4 desc gap to avoid the cache line where head is,\n-\t * + 1 desc for context descriptor,\n-\t * otherwise try next time\n-\t */\n-\tfor (f = 0; f < skb_shinfo(skb)->nr_frags; f++)\n-\t\tcount += TXD_USE_COUNT(skb_shinfo(skb)->frags[f].size);\n-\n-\tcount += TXD_USE_COUNT(skb_headlen(skb));\n-\tif (i40evf_maybe_stop_tx(tx_ring, count + 4 + 1)) {\n-\t\ttx_ring->tx_stats.tx_busy++;\n-\t\treturn 0;\n-\t}\n-\treturn count;\n-}\n-\n-/**\n * i40e_xmit_frame_ring - Sends buffer on Tx ring\n * @skb: send buffer\n * @tx_ring: ring to send buffer on\n@@ -2133,13 +2087,23 @@ static netdev_tx_t i40e_xmit_frame_ring(struct sk_buff *skb,\n \t__be16 
protocol;\n \tu32 td_cmd = 0;\n \tu8 hdr_len = 0;\n-\tint tso;\n+\tint tso, count;\n \n \t/* prefetch the data, we'll need it later */\n \tprefetch(skb->data);\n \n-\tif (0 == i40evf_xmit_descriptor_count(skb, tx_ring))\n+\tcount = i40e_xmit_descriptor_count(skb);\n+\n+\t/* need: 1 descriptor per page * PAGE_SIZE/I40E_MAX_DATA_PER_TXD,\n+\t * + 1 desc for skb_head_len/I40E_MAX_DATA_PER_TXD,\n+\t * + 4 desc gap to avoid the cache line where head is,\n+\t * + 1 desc for context descriptor,\n+\t * otherwise try next time\n+\t */\n+\tif (i40e_maybe_stop_tx(tx_ring, count + 4 + 1)) {\n+\t\ttx_ring->tx_stats.tx_busy++;\n \t\treturn NETDEV_TX_BUSY;\n+\t}\n \n \t/* prepare the xmit flags */\n \tif (i40evf_tx_prepare_vlan_flags(skb, tx_ring, &tx_flags))\ndiff --git a/drivers/net/ethernet/intel/i40evf/i40e_txrx.h b/drivers/net/ethernet/intel/i40evf/i40e_txrx.h\nindex 6ea8701cf066..228cc76be6f6 100644\n--- a/drivers/net/ethernet/intel/i40evf/i40e_txrx.h\n+++ b/drivers/net/ethernet/intel/i40evf/i40e_txrx.h\n@@ -326,6 +326,7 @@ void i40evf_free_rx_resources(struct i40e_ring *rx_ring);\n int i40evf_napi_poll(struct napi_struct *napi, int budget);\n void i40evf_force_wb(struct i40e_vsi *vsi, struct i40e_q_vector *q_vector);\n u32 i40evf_get_tx_pending(struct i40e_ring *ring, bool in_sw);\n+int __i40evf_maybe_stop_tx(struct i40e_ring *tx_ring, int size);\n \n /**\n * i40e_get_head - Retrieve head from head writeback\n@@ -340,4 +341,45 @@ static inline u32 i40e_get_head(struct i40e_ring *tx_ring)\n \n \treturn le32_to_cpu(*(volatile __le32 *)head);\n }\n+\n+/**\n+ * i40e_xmit_descriptor_count - calculate number of tx descriptors needed\n+ * @skb: send buffer\n+ * @tx_ring: ring to send buffer on\n+ *\n+ * Returns number of data descriptors needed for this skb. 
Returns 0 to indicate\n+ * there is not enough descriptors available in this ring since we need at least\n+ * one descriptor.\n+ **/\n+static inline int i40e_xmit_descriptor_count(struct sk_buff *skb)\n+{\n+\tconst struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[0];\n+\tunsigned int nr_frags = skb_shinfo(skb)->nr_frags;\n+\tint count = 0, size = skb_headlen(skb);\n+\n+\tfor (;;) {\n+\t\tcount += TXD_USE_COUNT(size);\n+\n+\t\tif (!nr_frags--)\n+\t\t\tbreak;\n+\n+\t\tsize = skb_frag_size(frag++);\n+\t}\n+\n+\treturn count;\n+}\n+\n+/**\n+ * i40e_maybe_stop_tx - 1st level check for tx stop conditions\n+ * @tx_ring: the ring to be checked\n+ * @size: the size buffer we want to assure is available\n+ *\n+ * Returns 0 if stop is not needed\n+ **/\n+static inline int i40e_maybe_stop_tx(struct i40e_ring *tx_ring, int size)\n+{\n+\tif (likely(I40E_DESC_UNUSED(tx_ring) >= size))\n+\t\treturn 0;\n+\treturn __i40evf_maybe_stop_tx(tx_ring, size);\n+}\n #endif /* _I40E_TXRX_H_ */\n", "prefixes": [ "next", "1/4" ] }
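Most of the bulk in a response like the one above is the raw mail (headers, commit message, diff); the fields a client typically acts on are few. A sketch of picking them out — the dict literal below is a trimmed copy of the example response, and the set of "resolved" states in `needs_attention` is an assumption (Patchwork instances can define their own state names):

```python
# Trimmed copy of the example response above; only the fields this
# sketch touches are kept.
patch = {
    "id": 584321,
    "name": "[next,1/4] i40e/i40evf: Break up xmit_descriptor_count "
            "from maybe_stop_tx",
    "state": "accepted",
    "archived": False,
    "check": "pending",
    "submitter": {"name": "Alexander Duyck", "email": "aduyck@mirantis.com"},
    "delegate": {"username": "jtkirshe"},
    "mbox": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/"
            "20160217190243.10339.65965.stgit@localhost.localdomain/mbox/",
}


def needs_attention(p: dict) -> bool:
    """A patch still needs review if it is neither resolved nor archived.
    The 'resolved' state set is an assumption; instances may differ."""
    resolved = {"accepted", "rejected", "superseded", "not-applicable"}
    return not p["archived"] and p["state"] not in resolved


def one_line_summary(p: dict) -> str:
    """Compact 'who: what [state]' line, e.g. for a review dashboard."""
    return f'{p["submitter"]["name"]}: {p["name"]} [{p["state"]}]'


print(one_line_summary(patch))
print("mbox:", patch["mbox"])  # download this URL, then apply with `git am`
print("needs attention:", needs_attention(patch))
```

The `mbox` URL is usually the most useful field: it serves the patch in a form `git am` can apply directly, without parsing the `diff` field out of the JSON yourself.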