Patch Detail

Supported methods:
get:
Show a patch.
patch:
Partially update a patch.
put:
Update a patch.
GET /api/patches/680963/?format=api
{ "id": 680963, "url": "http://patchwork.ozlabs.org/api/patches/680963/?format=api", "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/1476224818-16844-3-git-send-email-bimmy.pujari@intel.com/", "project": { "id": 46, "url": "http://patchwork.ozlabs.org/api/projects/46/?format=api", "name": "Intel Wired Ethernet development", "link_name": "intel-wired-lan", "list_id": "intel-wired-lan.osuosl.org", "list_email": "intel-wired-lan@osuosl.org", "web_url": "", "scm_url": "", "webscm_url": "", "list_archive_url": "", "list_archive_url_format": "", "commit_url_format": "" }, "msgid": "<1476224818-16844-3-git-send-email-bimmy.pujari@intel.com>", "list_archive_url": null, "date": "2016-10-11T22:26:54", "name": "[next,S50,2/6] i40e: Reorder logic for coalescing RS bits", "commit_ref": null, "pull_url": null, "state": "accepted", "archived": false, "hash": "891334c317534edc3933b23781c0dba9395368c4", "submitter": { "id": 68919, "url": "http://patchwork.ozlabs.org/api/people/68919/?format=api", "name": "Pujari, Bimmy", "email": "bimmy.pujari@intel.com" }, "delegate": { "id": 68, "url": "http://patchwork.ozlabs.org/api/users/68/?format=api", "username": "jtkirshe", "first_name": "Jeff", "last_name": "Kirsher", "email": "jeffrey.t.kirsher@intel.com" }, "mbox": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/1476224818-16844-3-git-send-email-bimmy.pujari@intel.com/mbox/", "series": [], "comments": "http://patchwork.ozlabs.org/api/patches/680963/comments/", "check": "pending", "checks": "http://patchwork.ozlabs.org/api/patches/680963/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<intel-wired-lan-bounces@lists.osuosl.org>", "X-Original-To": [ "incoming@patchwork.ozlabs.org", "intel-wired-lan@lists.osuosl.org" ], "Delivered-To": [ "patchwork-incoming@bilbo.ozlabs.org", "intel-wired-lan@lists.osuosl.org" ], "Received": [ "from hemlock.osuosl.org (smtp2.osuosl.org [140.211.166.133])\n\t(using TLSv1.2 with cipher AECDH-AES256-SHA 
(256/256 bits))\n\t(No client certificate requested)\n\tby ozlabs.org (Postfix) with ESMTPS id 3sts8j302jz9s9N\n\tfor <incoming@patchwork.ozlabs.org>;\n\tWed, 12 Oct 2016 09:28:37 +1100 (AEDT)", "from localhost (localhost [127.0.0.1])\n\tby hemlock.osuosl.org (Postfix) with ESMTP id 09D85898D0;\n\tTue, 11 Oct 2016 22:28:36 +0000 (UTC)", "from hemlock.osuosl.org ([127.0.0.1])\n\tby localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id iwVE+HTQRO73; Tue, 11 Oct 2016 22:28:32 +0000 (UTC)", "from ash.osuosl.org (ash.osuosl.org [140.211.166.34])\n\tby hemlock.osuosl.org (Postfix) with ESMTP id 007E48A4D9;\n\tTue, 11 Oct 2016 22:28:32 +0000 (UTC)", "from fraxinus.osuosl.org (smtp4.osuosl.org [140.211.166.137])\n\tby ash.osuosl.org (Postfix) with ESMTP id 3EB201CEB01\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tTue, 11 Oct 2016 22:28:29 +0000 (UTC)", "from localhost (localhost [127.0.0.1])\n\tby fraxinus.osuosl.org (Postfix) with ESMTP id 3A0E0C150F\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tTue, 11 Oct 2016 22:28:29 +0000 (UTC)", "from fraxinus.osuosl.org ([127.0.0.1])\n\tby localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id GVPd88XpyxEy for <intel-wired-lan@lists.osuosl.org>;\n\tTue, 11 Oct 2016 22:28:26 +0000 (UTC)", "from mga05.intel.com (mga05.intel.com [192.55.52.43])\n\tby fraxinus.osuosl.org (Postfix) with ESMTPS id 40729C1512\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tTue, 11 Oct 2016 22:28:26 +0000 (UTC)", "from orsmga001.jf.intel.com ([10.7.209.18])\n\tby fmsmga105.fm.intel.com with ESMTP; 11 Oct 2016 15:28:25 -0700", "from bimmy.jf.intel.com (HELO bimmy.linux1.jf.intel.com)\n\t([134.134.2.167])\n\tby orsmga001.jf.intel.com with ESMTP; 11 Oct 2016 15:28:25 -0700" ], "X-Virus-Scanned": [ "amavisd-new at osuosl.org", "amavisd-new at osuosl.org" ], "X-Greylist": "domain auto-whitelisted by SQLgrey-1.7.6", "X-ExtLoop1": "1", "X-IronPort-AV": "E=Sophos; i=\"5.31,479,1473145200\"; 
d=\"scan'208\";\n\ta=\"1043428129\"", "From": "Bimmy Pujari <bimmy.pujari@intel.com>", "To": "intel-wired-lan@lists.osuosl.org", "Date": "Tue, 11 Oct 2016 15:26:54 -0700", "Message-Id": "<1476224818-16844-3-git-send-email-bimmy.pujari@intel.com>", "X-Mailer": "git-send-email 2.4.11", "In-Reply-To": "<1476224818-16844-1-git-send-email-bimmy.pujari@intel.com>", "References": "<1476224818-16844-1-git-send-email-bimmy.pujari@intel.com>", "Subject": "[Intel-wired-lan] [next PATCH S50 2/6] i40e: Reorder logic for\n\tcoalescing RS bits", "X-BeenThere": "intel-wired-lan@lists.osuosl.org", "X-Mailman-Version": "2.1.18-1", "Precedence": "list", "List-Id": "Intel Wired Ethernet Linux Kernel Driver Development\n\t<intel-wired-lan.lists.osuosl.org>", "List-Unsubscribe": "<http://lists.osuosl.org/mailman/options/intel-wired-lan>, \n\t<mailto:intel-wired-lan-request@lists.osuosl.org?subject=unsubscribe>", "List-Archive": "<http://lists.osuosl.org/pipermail/intel-wired-lan/>", "List-Post": "<mailto:intel-wired-lan@lists.osuosl.org>", "List-Help": "<mailto:intel-wired-lan-request@lists.osuosl.org?subject=help>", "List-Subscribe": "<http://lists.osuosl.org/mailman/listinfo/intel-wired-lan>, \n\t<mailto:intel-wired-lan-request@lists.osuosl.org?subject=subscribe>", "MIME-Version": "1.0", "Content-Type": "text/plain; charset=\"us-ascii\"", "Content-Transfer-Encoding": "7bit", "Errors-To": "intel-wired-lan-bounces@lists.osuosl.org", "Sender": "\"Intel-wired-lan\" <intel-wired-lan-bounces@lists.osuosl.org>" }, "content": "From: Alexander Duyck <alexander.h.duyck@intel.com>\n\nThis patch reorders the logic at the end of i40e_tx_map to address the\nfact that the logic was rather convoluted and much larger than it needed\nto be.\n\nIn order to try and coalesce the code paths I have updated some of the\ncomments and repurposed some of the variables in order to reduce\nunnecessary overhead.\n\nThis patch does the following:\n1. 
Quit tracking skb->xmit_more with a flag, just max out packet_stride\n2. Drop tail_bump and do_rs and instead just use desc_count and td_cmd\n3. Pull comments from ixgbe that make need for wmb() more explicit.\n\nSigned-off-by: Alexander Duyck <alexander.h.duyck@intel.com>\nChange-ID: Ic7da85ec75043c634e87fef958109789bcc6317c\n---\nTesting Hints:\n The big area to test for this is performance. This should perform\n as well as the original code if not better. If any signficant\n performance regressions are seen or hangs are seen then this patch\n has failed.\n\n drivers/net/ethernet/intel/i40e/i40e_txrx.c | 105 +++++++++++++-------------\n drivers/net/ethernet/intel/i40e/i40e_txrx.h | 1 -\n drivers/net/ethernet/intel/i40evf/i40e_txrx.c | 105 +++++++++++++-------------\n drivers/net/ethernet/intel/i40evf/i40e_txrx.h | 1 -\n 4 files changed, 108 insertions(+), 104 deletions(-)", "diff": "diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c\nindex 75b8f5b..5544b50 100644\n--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c\n+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c\n@@ -616,7 +616,7 @@ u32 i40e_get_tx_pending(struct i40e_ring *ring, bool in_sw)\n \treturn 0;\n }\n \n-#define WB_STRIDE 0x3\n+#define WB_STRIDE 4\n \n /**\n * i40e_clean_tx_irq - Reclaim resources after transmit completes\n@@ -732,7 +732,7 @@ static bool i40e_clean_tx_irq(struct i40e_vsi *vsi,\n \t\tunsigned int j = i40e_get_tx_pending(tx_ring, false);\n \n \t\tif (budget &&\n-\t\t ((j / (WB_STRIDE + 1)) == 0) && (j != 0) &&\n+\t\t ((j / WB_STRIDE) == 0) && (j > 0) &&\n \t\t !test_bit(__I40E_DOWN, &vsi->state) &&\n \t\t (I40E_DESC_UNUSED(tx_ring) != tx_ring->count))\n \t\t\ttx_ring->arm_wb = true;\n@@ -2700,9 +2700,7 @@ static inline void i40e_tx_map(struct i40e_ring *tx_ring, struct sk_buff *skb,\n \tu32 td_tag = 0;\n \tdma_addr_t dma;\n \tu16 gso_segs;\n-\tu16 desc_count = 0;\n-\tbool tail_bump = true;\n-\tbool do_rs = false;\n+\tu16 desc_count = 
1;\n \n \tif (tx_flags & I40E_TX_FLAGS_HW_VLAN) {\n \t\ttd_cmd |= I40E_TX_DESC_CMD_IL2TAG1;\n@@ -2785,8 +2783,7 @@ static inline void i40e_tx_map(struct i40e_ring *tx_ring, struct sk_buff *skb,\n \t\ttx_bi = &tx_ring->tx_bi[i];\n \t}\n \n-\t/* set next_to_watch value indicating a packet is present */\n-\tfirst->next_to_watch = tx_desc;\n+\tnetdev_tx_sent_queue(txring_txq(tx_ring), first->bytecount);\n \n \ti++;\n \tif (i == tx_ring->count)\n@@ -2794,66 +2791,72 @@ static inline void i40e_tx_map(struct i40e_ring *tx_ring, struct sk_buff *skb,\n \n \ttx_ring->next_to_use = i;\n \n-\tnetdev_tx_sent_queue(txring_txq(tx_ring), first->bytecount);\n \ti40e_maybe_stop_tx(tx_ring, DESC_NEEDED);\n \n+\t/* write last descriptor with EOP bit */\n+\ttd_cmd |= I40E_TX_DESC_CMD_EOP;\n+\n+\t/* We can OR these values together as they both are checked against\n+\t * 4 below and at this point desc_count will be used as a boolean value\n+\t * after this if/else block.\n+\t */\n+\tdesc_count |= ++tx_ring->packet_stride;\n+\n \t/* Algorithm to optimize tail and RS bit setting:\n-\t * if xmit_more is supported\n-\t *\tif xmit_more is true\n-\t *\t\tdo not update tail and do not mark RS bit.\n-\t *\tif xmit_more is false and last xmit_more was false\n-\t *\t\tif every packet spanned less than 4 desc\n-\t *\t\t\tthen set RS bit on 4th packet and update tail\n-\t *\t\t\ton every packet\n-\t *\t\telse\n-\t *\t\t\tupdate tail and set RS bit on every packet.\n-\t *\tif xmit_more is false and last_xmit_more was true\n-\t *\t\tupdate tail and set RS bit.\n+\t * if queue is stopped\n+\t *\tmark RS bit\n+\t *\treset packet counter\n+\t * else if xmit_more is supported and is true\n+\t *\tadvance packet counter to 4\n+\t *\treset desc_count to 0\n \t *\n-\t * Optimization: wmb to be issued only in case of tail update.\n-\t * Also optimize the Descriptor WB path for RS bit with the same\n-\t * algorithm.\n+\t * if desc_count >= 4\n+\t *\tmark RS bit\n+\t *\treset packet counter\n+\t * if desc_count 
> 0\n+\t *\tupdate tail\n \t *\n-\t * Note: If there are less than 4 packets\n+\t * Note: If there are less than 4 descriptors\n \t * pending and interrupts were disabled the service task will\n \t * trigger a force WB.\n \t */\n-\tif (skb->xmit_more &&\n-\t !netif_xmit_stopped(txring_txq(tx_ring))) {\n-\t\ttx_ring->flags |= I40E_TXR_FLAGS_LAST_XMIT_MORE_SET;\n-\t\ttail_bump = false;\n-\t} else if (!skb->xmit_more &&\n-\t\t !netif_xmit_stopped(txring_txq(tx_ring)) &&\n-\t\t (!(tx_ring->flags & I40E_TXR_FLAGS_LAST_XMIT_MORE_SET)) &&\n-\t\t (tx_ring->packet_stride < WB_STRIDE) &&\n-\t\t (desc_count < WB_STRIDE)) {\n-\t\ttx_ring->packet_stride++;\n-\t} else {\n+\tif (netif_xmit_stopped(txring_txq(tx_ring))) {\n+\t\tgoto do_rs;\n+\t} else if (skb->xmit_more) {\n+\t\t/* set stride to arm on next packet and reset desc_count */\n+\t\ttx_ring->packet_stride = WB_STRIDE;\n+\t\tdesc_count = 0;\n+\t} else if (desc_count >= WB_STRIDE) {\n+do_rs:\n+\t\t/* write last descriptor with RS bit set */\n+\t\ttd_cmd |= I40E_TX_DESC_CMD_RS;\n \t\ttx_ring->packet_stride = 0;\n-\t\ttx_ring->flags &= ~I40E_TXR_FLAGS_LAST_XMIT_MORE_SET;\n-\t\tdo_rs = true;\n \t}\n-\tif (do_rs)\n-\t\ttx_ring->packet_stride = 0;\n \n \ttx_desc->cmd_type_offset_bsz =\n-\t\t\tbuild_ctob(td_cmd, td_offset, size, td_tag) |\n-\t\t\tcpu_to_le64((u64)(do_rs ? 
I40E_TXD_CMD :\n-\t\t\t\t\t\t I40E_TX_DESC_CMD_EOP) <<\n-\t\t\t\t\t\t I40E_TXD_QW1_CMD_SHIFT);\n+\t\t\tbuild_ctob(td_cmd, td_offset, size, td_tag);\n+\n+\t/* Force memory writes to complete before letting h/w know there\n+\t * are new descriptors to fetch.\n+\t *\n+\t * We also use this memory barrier to make certain all of the\n+\t * status bits have been updated before next_to_watch is written.\n+\t */\n+\twmb();\n+\n+\t/* set next_to_watch value indicating a packet is present */\n+\tfirst->next_to_watch = tx_desc;\n \n \t/* notify HW of packet */\n-\tif (!tail_bump) {\n-\t\tprefetchw(tx_desc + 1);\n-\t} else {\n-\t\t/* Force memory writes to complete before letting h/w\n-\t\t * know there are new descriptors to fetch. (Only\n-\t\t * applicable for weak-ordered memory model archs,\n-\t\t * such as IA-64).\n-\t\t */\n-\t\twmb();\n+\tif (desc_count) {\n \t\twritel(i, tx_ring->tail);\n+\n+\t\t/* we need this if more than one processor can write to our tail\n+\t\t * at a time, it synchronizes IO on IA64/Altix systems\n+\t\t */\n+\t\tmmiowb();\n \t}\n+\n \treturn;\n \n dma_error:\ndiff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h\nindex 42f04d6..de8550f 100644\n--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h\n+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h\n@@ -313,7 +313,6 @@ struct i40e_ring {\n \n \tu16 flags;\n #define I40E_TXR_FLAGS_WB_ON_ITR\tBIT(0)\n-#define I40E_TXR_FLAGS_LAST_XMIT_MORE_SET BIT(2)\n \n \t/* stats structs */\n \tstruct i40e_queue_stats\tstats;\ndiff --git a/drivers/net/ethernet/intel/i40evf/i40e_txrx.c b/drivers/net/ethernet/intel/i40evf/i40e_txrx.c\nindex e2d3622..c4b174a 100644\n--- a/drivers/net/ethernet/intel/i40evf/i40e_txrx.c\n+++ b/drivers/net/ethernet/intel/i40evf/i40e_txrx.c\n@@ -150,7 +150,7 @@ u32 i40evf_get_tx_pending(struct i40e_ring *ring, bool in_sw)\n \treturn 0;\n }\n \n-#define WB_STRIDE 0x3\n+#define WB_STRIDE 4\n \n /**\n * i40e_clean_tx_irq - Reclaim resources after 
transmit completes\n@@ -266,7 +266,7 @@ static bool i40e_clean_tx_irq(struct i40e_vsi *vsi,\n \t\tunsigned int j = i40evf_get_tx_pending(tx_ring, false);\n \n \t\tif (budget &&\n-\t\t ((j / (WB_STRIDE + 1)) == 0) && (j > 0) &&\n+\t\t ((j / WB_STRIDE) == 0) && (j > 0) &&\n \t\t !test_bit(__I40E_DOWN, &vsi->state) &&\n \t\t (I40E_DESC_UNUSED(tx_ring) != tx_ring->count))\n \t\t\ttx_ring->arm_wb = true;\n@@ -1950,9 +1950,7 @@ static inline void i40evf_tx_map(struct i40e_ring *tx_ring, struct sk_buff *skb,\n \tu32 td_tag = 0;\n \tdma_addr_t dma;\n \tu16 gso_segs;\n-\tu16 desc_count = 0;\n-\tbool tail_bump = true;\n-\tbool do_rs = false;\n+\tu16 desc_count = 1;\n \n \tif (tx_flags & I40E_TX_FLAGS_HW_VLAN) {\n \t\ttd_cmd |= I40E_TX_DESC_CMD_IL2TAG1;\n@@ -2035,8 +2033,7 @@ static inline void i40evf_tx_map(struct i40e_ring *tx_ring, struct sk_buff *skb,\n \t\ttx_bi = &tx_ring->tx_bi[i];\n \t}\n \n-\t/* set next_to_watch value indicating a packet is present */\n-\tfirst->next_to_watch = tx_desc;\n+\tnetdev_tx_sent_queue(txring_txq(tx_ring), first->bytecount);\n \n \ti++;\n \tif (i == tx_ring->count)\n@@ -2044,66 +2041,72 @@ static inline void i40evf_tx_map(struct i40e_ring *tx_ring, struct sk_buff *skb,\n \n \ttx_ring->next_to_use = i;\n \n-\tnetdev_tx_sent_queue(txring_txq(tx_ring), first->bytecount);\n \ti40e_maybe_stop_tx(tx_ring, DESC_NEEDED);\n \n+\t/* write last descriptor with EOP bit */\n+\ttd_cmd |= I40E_TX_DESC_CMD_EOP;\n+\n+\t/* We can OR these values together as they both are checked against\n+\t * 4 below and at this point desc_count will be used as a boolean value\n+\t * after this if/else block.\n+\t */\n+\tdesc_count |= ++tx_ring->packet_stride;\n+\n \t/* Algorithm to optimize tail and RS bit setting:\n-\t * if xmit_more is supported\n-\t *\tif xmit_more is true\n-\t *\t\tdo not update tail and do not mark RS bit.\n-\t *\tif xmit_more is false and last xmit_more was false\n-\t *\t\tif every packet spanned less than 4 desc\n-\t *\t\t\tthen set RS bit on 4th 
packet and update tail\n-\t *\t\t\ton every packet\n-\t *\t\telse\n-\t *\t\t\tupdate tail and set RS bit on every packet.\n-\t *\tif xmit_more is false and last_xmit_more was true\n-\t *\t\tupdate tail and set RS bit.\n+\t * if queue is stopped\n+\t *\tmark RS bit\n+\t *\treset packet counter\n+\t * else if xmit_more is supported and is true\n+\t *\tadvance packet counter to 4\n+\t *\treset desc_count to 0\n \t *\n-\t * Optimization: wmb to be issued only in case of tail update.\n-\t * Also optimize the Descriptor WB path for RS bit with the same\n-\t * algorithm.\n+\t * if desc_count >= 4\n+\t *\tmark RS bit\n+\t *\treset packet counter\n+\t * if desc_count > 0\n+\t *\tupdate tail\n \t *\n-\t * Note: If there are less than 4 packets\n+\t * Note: If there are less than 4 descriptors\n \t * pending and interrupts were disabled the service task will\n \t * trigger a force WB.\n \t */\n-\tif (skb->xmit_more &&\n-\t !netif_xmit_stopped(txring_txq(tx_ring))) {\n-\t\ttx_ring->flags |= I40E_TXR_FLAGS_LAST_XMIT_MORE_SET;\n-\t\ttail_bump = false;\n-\t} else if (!skb->xmit_more &&\n-\t\t !netif_xmit_stopped(txring_txq(tx_ring)) &&\n-\t\t (!(tx_ring->flags & I40E_TXR_FLAGS_LAST_XMIT_MORE_SET)) &&\n-\t\t (tx_ring->packet_stride < WB_STRIDE) &&\n-\t\t (desc_count < WB_STRIDE)) {\n-\t\ttx_ring->packet_stride++;\n-\t} else {\n+\tif (netif_xmit_stopped(txring_txq(tx_ring))) {\n+\t\tgoto do_rs;\n+\t} else if (skb->xmit_more) {\n+\t\t/* set stride to arm on next packet and reset desc_count */\n+\t\ttx_ring->packet_stride = WB_STRIDE;\n+\t\tdesc_count = 0;\n+\t} else if (desc_count >= WB_STRIDE) {\n+do_rs:\n+\t\t/* write last descriptor with RS bit set */\n+\t\ttd_cmd |= I40E_TX_DESC_CMD_RS;\n \t\ttx_ring->packet_stride = 0;\n-\t\ttx_ring->flags &= ~I40E_TXR_FLAGS_LAST_XMIT_MORE_SET;\n-\t\tdo_rs = true;\n \t}\n-\tif (do_rs)\n-\t\ttx_ring->packet_stride = 0;\n \n \ttx_desc->cmd_type_offset_bsz =\n-\t\t\tbuild_ctob(td_cmd, td_offset, size, td_tag) |\n-\t\t\tcpu_to_le64((u64)(do_rs ? 
I40E_TXD_CMD :\n-\t\t\t\t\t\t I40E_TX_DESC_CMD_EOP) <<\n-\t\t\t\t\t\t I40E_TXD_QW1_CMD_SHIFT);\n+\t\t\tbuild_ctob(td_cmd, td_offset, size, td_tag);\n+\n+\t/* Force memory writes to complete before letting h/w know there\n+\t * are new descriptors to fetch.\n+\t *\n+\t * We also use this memory barrier to make certain all of the\n+\t * status bits have been updated before next_to_watch is written.\n+\t */\n+\twmb();\n+\n+\t/* set next_to_watch value indicating a packet is present */\n+\tfirst->next_to_watch = tx_desc;\n \n \t/* notify HW of packet */\n-\tif (!tail_bump) {\n-\t\tprefetchw(tx_desc + 1);\n-\t} else {\n-\t\t/* Force memory writes to complete before letting h/w\n-\t\t * know there are new descriptors to fetch. (Only\n-\t\t * applicable for weak-ordered memory model archs,\n-\t\t * such as IA-64).\n-\t\t */\n-\t\twmb();\n+\tif (desc_count) {\n \t\twritel(i, tx_ring->tail);\n+\n+\t\t/* we need this if more than one processor can write to our tail\n+\t\t * at a time, it synchronizes IO on IA64/Altix systems\n+\t\t */\n+\t\tmmiowb();\n \t}\n+\n \treturn;\n \n dma_error:\ndiff --git a/drivers/net/ethernet/intel/i40evf/i40e_txrx.h b/drivers/net/ethernet/intel/i40evf/i40e_txrx.h\nindex abcdeca..a586e19 100644\n--- a/drivers/net/ethernet/intel/i40evf/i40e_txrx.h\n+++ b/drivers/net/ethernet/intel/i40evf/i40e_txrx.h\n@@ -309,7 +309,6 @@ struct i40e_ring {\n \tbool ring_active;\t\t/* is ring online or not */\n \tbool arm_wb;\t\t/* do something to arm write back */\n \tu8 packet_stride;\n-#define I40E_TXR_FLAGS_LAST_XMIT_MORE_SET BIT(2)\n \n \tu16 flags;\n #define I40E_TXR_FLAGS_WB_ON_ITR\tBIT(0)\n", "prefixes": [ "next", "S50", "2/6" ] }