Patch Detail
get:
Show a patch.
patch:
Partially update a patch.
put:
Fully update (replace) a patch.
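A client would typically issue the GET request shown below and decode the JSON body. A minimal offline sketch using Python's standard json module against a trimmed copy of the response (the real endpoint returns many more fields, including the full mail headers and diff):

```python
import json

# A trimmed sample of the patch-detail response captured below; field
# names and values match the real payload, but most keys are omitted.
sample = """
{
  "id": 738798,
  "state": "accepted",
  "archived": false,
  "submitter": {"id": 68919, "name": "Pujari, Bimmy"},
  "delegate": {"username": "jtkirshe"},
  "check": "pending"
}
"""

patch = json.loads(sample)

# Pull out the fields a dashboard or CI script would usually care about.
print(patch["id"], patch["state"], patch["submitter"]["name"])
```

In a live script the `sample` string would instead come from fetching the URL below; the parsing is identical.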
GET /api/patches/738798/?format=api
{ "id": 738798, "url": "http://patchwork.ozlabs.org/api/patches/738798/?format=api", "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/1489511727-10959-4-git-send-email-bimmy.pujari@intel.com/", "project": { "id": 46, "url": "http://patchwork.ozlabs.org/api/projects/46/?format=api", "name": "Intel Wired Ethernet development", "link_name": "intel-wired-lan", "list_id": "intel-wired-lan.osuosl.org", "list_email": "intel-wired-lan@osuosl.org", "web_url": "", "scm_url": "", "webscm_url": "", "list_archive_url": "", "list_archive_url_format": "", "commit_url_format": "" }, "msgid": "<1489511727-10959-4-git-send-email-bimmy.pujari@intel.com>", "list_archive_url": null, "date": "2017-03-14T17:15:25", "name": "[next,S63,4/6] i40e/i40evf: Break i40e_fetch_rx_buffer up to allow for reuse of frag code", "commit_ref": null, "pull_url": null, "state": "accepted", "archived": false, "hash": "cb8ed8303374544081335e78c4bbd8cd279d66cd", "submitter": { "id": 68919, "url": "http://patchwork.ozlabs.org/api/people/68919/?format=api", "name": "Pujari, Bimmy", "email": "bimmy.pujari@intel.com" }, "delegate": { "id": 68, "url": "http://patchwork.ozlabs.org/api/users/68/?format=api", "username": "jtkirshe", "first_name": "Jeff", "last_name": "Kirsher", "email": "jeffrey.t.kirsher@intel.com" }, "mbox": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/1489511727-10959-4-git-send-email-bimmy.pujari@intel.com/mbox/", "series": [], "comments": "http://patchwork.ozlabs.org/api/patches/738798/comments/", "check": "pending", "checks": "http://patchwork.ozlabs.org/api/patches/738798/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<intel-wired-lan-bounces@lists.osuosl.org>", "X-Original-To": [ "incoming@patchwork.ozlabs.org", "intel-wired-lan@lists.osuosl.org" ], "Delivered-To": [ "patchwork-incoming@bilbo.ozlabs.org", "intel-wired-lan@lists.osuosl.org" ], "Received": [ "from whitealder.osuosl.org (smtp1.osuosl.org [140.211.166.138])\n\t(using TLSv1.2 
with cipher AECDH-AES256-SHA (256/256 bits))\n\t(No client certificate requested)\n\tby ozlabs.org (Postfix) with ESMTPS id 3vjKdn0Xcmz9rxm\n\tfor <incoming@patchwork.ozlabs.org>;\n\tWed, 15 Mar 2017 03:17:49 +1100 (AEDT)", "from localhost (localhost [127.0.0.1])\n\tby whitealder.osuosl.org (Postfix) with ESMTP id 085CA8972B;\n\tTue, 14 Mar 2017 16:17:47 +0000 (UTC)", "from whitealder.osuosl.org ([127.0.0.1])\n\tby localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id 7e0iI6PKtnbB; Tue, 14 Mar 2017 16:17:41 +0000 (UTC)", "from ash.osuosl.org (ash.osuosl.org [140.211.166.34])\n\tby whitealder.osuosl.org (Postfix) with ESMTP id 7D1978972D;\n\tTue, 14 Mar 2017 16:17:40 +0000 (UTC)", "from hemlock.osuosl.org (smtp2.osuosl.org [140.211.166.133])\n\tby ash.osuosl.org (Postfix) with ESMTP id F0F881BFEBB\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tTue, 14 Mar 2017 16:17:37 +0000 (UTC)", "from localhost (localhost [127.0.0.1])\n\tby hemlock.osuosl.org (Postfix) with ESMTP id ECB4F8A1F3\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tTue, 14 Mar 2017 16:17:37 +0000 (UTC)", "from hemlock.osuosl.org ([127.0.0.1])\n\tby localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id 6eeNmxqhG8QT for <intel-wired-lan@lists.osuosl.org>;\n\tTue, 14 Mar 2017 16:17:36 +0000 (UTC)", "from mga09.intel.com (mga09.intel.com [134.134.136.24])\n\tby hemlock.osuosl.org (Postfix) with ESMTPS id 9FA8282AE1\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tTue, 14 Mar 2017 16:17:36 +0000 (UTC)", "from orsmga001.jf.intel.com ([10.7.209.18])\n\tby orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;\n\t14 Mar 2017 09:17:35 -0700", "from bimmy.jf.intel.com (HELO bimmy.linux1.jf.intel.com)\n\t([10.166.35.87])\n\tby orsmga001.jf.intel.com with ESMTP; 14 Mar 2017 09:17:35 -0700" ], "Authentication-Results": "ozlabs.org;\n\tdkim=fail reason=\"key not found in DNS\" (0-bit key;\n\tunprotected) header.d=intel.com 
header.i=@intel.com\n\theader.b=\"rUuRARTg\"; dkim-atps=neutral", "X-Virus-Scanned": [ "amavisd-new at osuosl.org", "amavisd-new at osuosl.org" ], "X-Greylist": "domain auto-whitelisted by SQLgrey-1.7.6", "DKIM-Signature": "v=1; a=rsa-sha256; c=simple/simple;\n\td=intel.com; i=@intel.com; q=dns/txt; s=intel;\n\tt=1489508256; x=1521044256;\n\th=from:to:cc:subject:date:message-id:in-reply-to: references;\n\tbh=KG2DcX7nytVz7ssurq9lxhu87XvHBVJlwfR4mwuc4ZY=;\n\tb=rUuRARTgqa9z13mRNwGQf9I/HEx3rDfaHOTaCnzTNPn4098h/ffjgkWb\n\tk1B3sr0O7mFPqkBJ/KE0PcDsIY98dQ==;", "X-ExtLoop1": "1", "X-IronPort-AV": "E=Sophos; i=\"5.36,164,1486454400\"; d=\"scan'208\";\n\ta=\"1108362401\"", "From": "Bimmy Pujari <bimmy.pujari@intel.com>", "To": "intel-wired-lan@lists.osuosl.org", "Date": "Tue, 14 Mar 2017 10:15:25 -0700", "Message-Id": "<1489511727-10959-4-git-send-email-bimmy.pujari@intel.com>", "X-Mailer": "git-send-email 2.4.11", "In-Reply-To": "<1489511727-10959-1-git-send-email-bimmy.pujari@intel.com>", "References": "<1489511727-10959-1-git-send-email-bimmy.pujari@intel.com>", "Subject": "[Intel-wired-lan] [next PATCH S63 4/6] i40e/i40evf: Break\n\ti40e_fetch_rx_buffer up to allow for reuse of frag code", "X-BeenThere": "intel-wired-lan@lists.osuosl.org", "X-Mailman-Version": "2.1.18-1", "Precedence": "list", "List-Id": "Intel Wired Ethernet Linux Kernel Driver Development\n\t<intel-wired-lan.lists.osuosl.org>", "List-Unsubscribe": "<http://lists.osuosl.org/mailman/options/intel-wired-lan>, \n\t<mailto:intel-wired-lan-request@lists.osuosl.org?subject=unsubscribe>", "List-Archive": "<http://lists.osuosl.org/pipermail/intel-wired-lan/>", "List-Post": "<mailto:intel-wired-lan@lists.osuosl.org>", "List-Help": "<mailto:intel-wired-lan-request@lists.osuosl.org?subject=help>", "List-Subscribe": "<http://lists.osuosl.org/mailman/listinfo/intel-wired-lan>, \n\t<mailto:intel-wired-lan-request@lists.osuosl.org?subject=subscribe>", "MIME-Version": "1.0", "Content-Type": "text/plain; 
charset=\"us-ascii\"", "Content-Transfer-Encoding": "7bit", "Errors-To": "intel-wired-lan-bounces@lists.osuosl.org", "Sender": "\"Intel-wired-lan\" <intel-wired-lan-bounces@lists.osuosl.org>" }, "content": "From: Alexander Duyck <alexander.h.duyck@intel.com>\n\nThis patch is meant to clean up the code in preparation for us adding\nsupport for build_skb. Specifically we deconstruct i40e_fetch_buffer into\nseveral functions so that those functions can later be reused when we add a\npath for build_skb.\n\nSpecifically with this change we split out the code for adding a page to an\nexiting skb.\n\nSigned-off-by: Alexander Duyck <alexander.h.duyck@intel.com>\nChange-ID: Iab1efbab6b8b97cb60ab9fdd0be1d37a056a154d\n---\nTesting Hints:\n The greatest risk with this patch is a memory leak of some sort.\n I already caught one spot where I hadn't fully thought things out\n in regards to the path where we don't support bulk page updates.\n My advice would be to test on a RHEL 6.X kernel as well as a RHEL\n 7.X kernel as the 6.X won't support bulk page count updates while\n the 7.3 and later kernels do.\n\n drivers/net/ethernet/intel/i40e/i40e_txrx.c | 138 ++++++++++++--------------\n drivers/net/ethernet/intel/i40evf/i40e_txrx.c | 138 ++++++++++++--------------\n 2 files changed, 130 insertions(+), 146 deletions(-)", "diff": "diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c\nindex d7c4e1e..433309f 100644\n--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c\n+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c\n@@ -1687,61 +1687,23 @@ static bool i40e_can_reuse_rx_page(struct i40e_rx_buffer *rx_buffer)\n * @size: packet length from rx_desc\n *\n * This function will add the data contained in rx_buffer->page to the skb.\n- * This is done either through a direct copy if the data in the buffer is\n- * less than the skb header size, otherwise it will just attach the page as\n- * a frag to the skb.\n+ * It will just attach the page as 
a frag to the skb.\n *\n- * The function will then update the page offset if necessary and return\n- * true if the buffer can be reused by the adapter.\n+ * The function will then update the page offset.\n **/\n static void i40e_add_rx_frag(struct i40e_ring *rx_ring,\n \t\t\t struct i40e_rx_buffer *rx_buffer,\n \t\t\t struct sk_buff *skb,\n \t\t\t unsigned int size)\n {\n-\tstruct page *page = rx_buffer->page;\n-\tunsigned char *va = page_address(page) + rx_buffer->page_offset;\n #if (PAGE_SIZE < 8192)\n \tunsigned int truesize = I40E_RXBUFFER_2048;\n #else\n-\tunsigned int truesize = ALIGN(size, L1_CACHE_BYTES);\n+\tunsigned int truesize = SKB_DATA_ALIGN(size);\n #endif\n-\tunsigned int pull_len;\n-\n-\tif (unlikely(skb_is_nonlinear(skb)))\n-\t\tgoto add_tail_frag;\n-\n-\t/* will the data fit in the skb we allocated? if so, just\n-\t * copy it as it is pretty small anyway\n-\t */\n-\tif (size <= I40E_RX_HDR_SIZE) {\n-\t\tmemcpy(__skb_put(skb, size), va, ALIGN(size, sizeof(long)));\n-\n-\t\t/* page is to be freed, increase pagecnt_bias instead of\n-\t\t * decreasing page count.\n-\t\t */\n-\t\trx_buffer->pagecnt_bias++;\n-\t\treturn;\n-\t}\n-\n-\t/* we need the header to contain the greater of either\n-\t * ETH_HLEN or 60 bytes if the skb->len is less than\n-\t * 60 for skb_pad.\n-\t */\n-\tpull_len = eth_get_headlen(va, I40E_RX_HDR_SIZE);\n-\n-\t/* align pull length to size of long to optimize\n-\t * memcpy performance\n-\t */\n-\tmemcpy(__skb_put(skb, pull_len), va, ALIGN(pull_len, sizeof(long)));\n-\n-\t/* update all of the pointers */\n-\tva += pull_len;\n-\tsize -= pull_len;\n \n-add_tail_frag:\n-\tskb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,\n-\t\t\t(unsigned long)va & ~PAGE_MASK, size, truesize);\n+\tskb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->page,\n+\t\t\trx_buffer->page_offset, size, truesize);\n \n \t/* page is being used so we must update the page offset */\n #if (PAGE_SIZE < 8192)\n@@ -1781,45 +1743,66 @@ static struct 
i40e_rx_buffer *i40e_get_rx_buffer(struct i40e_ring *rx_ring,\n }\n \n /**\n- * i40e_fetch_rx_buffer - Allocate skb and populate it\n+ * i40e_construct_skb - Allocate skb and populate it\n * @rx_ring: rx descriptor ring to transact packets on\n * @rx_buffer: rx buffer to pull data from\n * @size: size of buffer to add to skb\n *\n- * This function allocates an skb on the fly, and populates it with the page\n- * data from the current receive descriptor, taking care to set up the skb\n- * correctly, as well as handling calling the page recycle function if\n- * necessary.\n+ * This function allocates an skb. It then populates it with the page\n+ * data from the current receive descriptor, taking care to set up the\n+ * skb correctly.\n */\n-static inline\n-struct sk_buff *i40e_fetch_rx_buffer(struct i40e_ring *rx_ring,\n-\t\t\t\t struct i40e_rx_buffer *rx_buffer,\n-\t\t\t\t struct sk_buff *skb,\n-\t\t\t\t unsigned int size)\n+static struct sk_buff *i40e_construct_skb(struct i40e_ring *rx_ring,\n+\t\t\t\t\t struct i40e_rx_buffer *rx_buffer,\n+\t\t\t\t\t unsigned int size)\n {\n-\tif (likely(!skb)) {\n-\t\tvoid *page_addr = page_address(rx_buffer->page) +\n-\t\t\t\t rx_buffer->page_offset;\n+\tvoid *va = page_address(rx_buffer->page) + rx_buffer->page_offset;\n+#if (PAGE_SIZE < 8192)\n+\tunsigned int truesize = I40E_RXBUFFER_2048;\n+#else\n+\tunsigned int truesize = SKB_DATA_ALIGN(size);\n+#endif\n+\tunsigned int headlen;\n+\tstruct sk_buff *skb;\n \n-\t\t/* prefetch first cache line of first page */\n-\t\tprefetch(page_addr);\n+\t/* prefetch first cache line of first page */\n+\tprefetch(va);\n #if L1_CACHE_BYTES < 128\n-\t\tprefetch(page_addr + L1_CACHE_BYTES);\n+\tprefetch(va + L1_CACHE_BYTES);\n #endif\n \n-\t\t/* allocate a skb to store the frags */\n-\t\tskb = __napi_alloc_skb(&rx_ring->q_vector->napi,\n-\t\t\t\t I40E_RX_HDR_SIZE,\n-\t\t\t\t GFP_ATOMIC | __GFP_NOWARN);\n-\t\tif (unlikely(!skb)) 
{\n-\t\t\trx_ring->rx_stats.alloc_buff_failed++;\n-\t\t\trx_buffer->pagecnt_bias++;\n-\t\t\treturn NULL;\n-\t\t}\n-\t}\n+\t/* allocate a skb to store the frags */\n+\tskb = __napi_alloc_skb(&rx_ring->q_vector->napi,\n+\t\t\t I40E_RX_HDR_SIZE,\n+\t\t\t GFP_ATOMIC | __GFP_NOWARN);\n+\tif (unlikely(!skb))\n+\t\treturn NULL;\n+\n+\t/* Determine available headroom for copy */\n+\theadlen = size;\n+\tif (headlen > I40E_RX_HDR_SIZE)\n+\t\theadlen = eth_get_headlen(va, I40E_RX_HDR_SIZE);\n \n-\t/* pull page into skb */\n-\ti40e_add_rx_frag(rx_ring, rx_buffer, skb, size);\n+\t/* align pull length to size of long to optimize memcpy performance */\n+\tmemcpy(__skb_put(skb, headlen), va, ALIGN(headlen, sizeof(long)));\n+\n+\t/* update all of the pointers */\n+\tsize -= headlen;\n+\tif (size) {\n+\t\tskb_add_rx_frag(skb, 0, rx_buffer->page,\n+\t\t\t\trx_buffer->page_offset + headlen,\n+\t\t\t\tsize, truesize);\n+\n+\t\t/* buffer is used by skb, update page_offset */\n+#if (PAGE_SIZE < 8192)\n+\t\trx_buffer->page_offset ^= truesize;\n+#else\n+\t\trx_buffer->page_offset += truesize;\n+#endif\n+\t} else {\n+\t\t/* buffer is unused, reset bias back to rx_buffer */\n+\t\trx_buffer->pagecnt_bias++;\n+\t}\n \n \treturn skb;\n }\n@@ -1944,9 +1927,18 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)\n \n \t\trx_buffer = i40e_get_rx_buffer(rx_ring, size);\n \n-\t\tskb = i40e_fetch_rx_buffer(rx_ring, rx_buffer, skb, size);\n-\t\tif (!skb)\n+\t\t/* retrieve a buffer from the ring */\n+\t\tif (skb)\n+\t\t\ti40e_add_rx_frag(rx_ring, rx_buffer, skb, size);\n+\t\telse\n+\t\t\tskb = i40e_construct_skb(rx_ring, rx_buffer, size);\n+\n+\t\t/* exit if we failed to retrieve a buffer */\n+\t\tif (!skb) {\n+\t\t\trx_ring->rx_stats.alloc_buff_failed++;\n+\t\t\trx_buffer->pagecnt_bias++;\n \t\t\tbreak;\n+\t\t}\n \n \t\ti40e_put_rx_buffer(rx_ring, rx_buffer);\n \t\tcleaned_count++;\ndiff --git a/drivers/net/ethernet/intel/i40evf/i40e_txrx.c 
b/drivers/net/ethernet/intel/i40evf/i40e_txrx.c\nindex 06b3779..95e383a 100644\n--- a/drivers/net/ethernet/intel/i40evf/i40e_txrx.c\n+++ b/drivers/net/ethernet/intel/i40evf/i40e_txrx.c\n@@ -1045,61 +1045,23 @@ static bool i40e_can_reuse_rx_page(struct i40e_rx_buffer *rx_buffer)\n * @size: packet length from rx_desc\n *\n * This function will add the data contained in rx_buffer->page to the skb.\n- * This is done either through a direct copy if the data in the buffer is\n- * less than the skb header size, otherwise it will just attach the page as\n- * a frag to the skb.\n+ * It will just attach the page as a frag to the skb.\n *\n- * The function will then update the page offset if necessary and return\n- * true if the buffer can be reused by the adapter.\n+ * The function will then update the page offset.\n **/\n static void i40e_add_rx_frag(struct i40e_ring *rx_ring,\n \t\t\t struct i40e_rx_buffer *rx_buffer,\n \t\t\t struct sk_buff *skb,\n \t\t\t unsigned int size)\n {\n-\tstruct page *page = rx_buffer->page;\n-\tunsigned char *va = page_address(page) + rx_buffer->page_offset;\n #if (PAGE_SIZE < 8192)\n \tunsigned int truesize = I40E_RXBUFFER_2048;\n #else\n-\tunsigned int truesize = ALIGN(size, L1_CACHE_BYTES);\n+\tunsigned int truesize = SKB_DATA_ALIGN(size);\n #endif\n-\tunsigned int pull_len;\n-\n-\tif (unlikely(skb_is_nonlinear(skb)))\n-\t\tgoto add_tail_frag;\n-\n-\t/* will the data fit in the skb we allocated? 
if so, just\n-\t * copy it as it is pretty small anyway\n-\t */\n-\tif (size <= I40E_RX_HDR_SIZE) {\n-\t\tmemcpy(__skb_put(skb, size), va, ALIGN(size, sizeof(long)));\n-\n-\t\t/* page is to be freed, increase pagecnt_bias instead of\n-\t\t * decreasing page count.\n-\t\t */\n-\t\trx_buffer->pagecnt_bias++;\n-\t\treturn;\n-\t}\n-\n-\t/* we need the header to contain the greater of either\n-\t * ETH_HLEN or 60 bytes if the skb->len is less than\n-\t * 60 for skb_pad.\n-\t */\n-\tpull_len = eth_get_headlen(va, I40E_RX_HDR_SIZE);\n-\n-\t/* align pull length to size of long to optimize\n-\t * memcpy performance\n-\t */\n-\tmemcpy(__skb_put(skb, pull_len), va, ALIGN(pull_len, sizeof(long)));\n-\n-\t/* update all of the pointers */\n-\tva += pull_len;\n-\tsize -= pull_len;\n \n-add_tail_frag:\n-\tskb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,\n-\t\t\t(unsigned long)va & ~PAGE_MASK, size, truesize);\n+\tskb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->page,\n+\t\t\trx_buffer->page_offset, size, truesize);\n \n \t/* page is being used so we must update the page offset */\n #if (PAGE_SIZE < 8192)\n@@ -1139,45 +1101,66 @@ static struct i40e_rx_buffer *i40e_get_rx_buffer(struct i40e_ring *rx_ring,\n }\n \n /**\n- * i40evf_fetch_rx_buffer - Allocate skb and populate it\n+ * i40e_construct_skb - Allocate skb and populate it\n * @rx_ring: rx descriptor ring to transact packets on\n * @rx_buffer: rx buffer to pull data from\n * @size: size of buffer to add to skb\n *\n- * This function allocates an skb on the fly, and populates it with the page\n- * data from the current receive descriptor, taking care to set up the skb\n- * correctly, as well as handling calling the page recycle function if\n- * necessary.\n+ * This function allocates an skb. 
It then populates it with the page\n+ * data from the current receive descriptor, taking care to set up the\n+ * skb correctly.\n */\n-static inline\n-struct sk_buff *i40evf_fetch_rx_buffer(struct i40e_ring *rx_ring,\n-\t\t\t\t struct i40e_rx_buffer *rx_buffer,\n-\t\t\t\t struct sk_buff *skb,\n-\t\t\t\t unsigned int size)\n+static struct sk_buff *i40e_construct_skb(struct i40e_ring *rx_ring,\n+\t\t\t\t\t struct i40e_rx_buffer *rx_buffer,\n+\t\t\t\t\t unsigned int size)\n {\n-\tif (likely(!skb)) {\n-\t\tvoid *page_addr = page_address(rx_buffer->page) +\n-\t\t\t\t rx_buffer->page_offset;\n+\tvoid *va = page_address(rx_buffer->page) + rx_buffer->page_offset;\n+#if (PAGE_SIZE < 8192)\n+\tunsigned int truesize = I40E_RXBUFFER_2048;\n+#else\n+\tunsigned int truesize = SKB_DATA_ALIGN(size);\n+#endif\n+\tunsigned int headlen;\n+\tstruct sk_buff *skb;\n \n-\t\t/* prefetch first cache line of first page */\n-\t\tprefetch(page_addr);\n+\t/* prefetch first cache line of first page */\n+\tprefetch(va);\n #if L1_CACHE_BYTES < 128\n-\t\tprefetch(page_addr + L1_CACHE_BYTES);\n+\tprefetch(va + L1_CACHE_BYTES);\n #endif\n \n-\t\t/* allocate a skb to store the frags */\n-\t\tskb = __napi_alloc_skb(&rx_ring->q_vector->napi,\n-\t\t\t\t I40E_RX_HDR_SIZE,\n-\t\t\t\t GFP_ATOMIC | __GFP_NOWARN);\n-\t\tif (unlikely(!skb)) {\n-\t\t\trx_ring->rx_stats.alloc_buff_failed++;\n-\t\t\trx_buffer->pagecnt_bias++;\n-\t\t\treturn NULL;\n-\t\t}\n-\t}\n+\t/* allocate a skb to store the frags */\n+\tskb = __napi_alloc_skb(&rx_ring->q_vector->napi,\n+\t\t\t I40E_RX_HDR_SIZE,\n+\t\t\t GFP_ATOMIC | __GFP_NOWARN);\n+\tif (unlikely(!skb))\n+\t\treturn NULL;\n+\n+\t/* Determine available headroom for copy */\n+\theadlen = size;\n+\tif (headlen > I40E_RX_HDR_SIZE)\n+\t\theadlen = eth_get_headlen(va, I40E_RX_HDR_SIZE);\n \n-\t/* pull page into skb */\n-\ti40e_add_rx_frag(rx_ring, rx_buffer, skb, size);\n+\t/* align pull length to size of long to optimize memcpy performance */\n+\tmemcpy(__skb_put(skb, headlen), 
va, ALIGN(headlen, sizeof(long)));\n+\n+\t/* update all of the pointers */\n+\tsize -= headlen;\n+\tif (size) {\n+\t\tskb_add_rx_frag(skb, 0, rx_buffer->page,\n+\t\t\t\trx_buffer->page_offset + headlen,\n+\t\t\t\tsize, truesize);\n+\n+\t\t/* buffer is used by skb, update page_offset */\n+#if (PAGE_SIZE < 8192)\n+\t\trx_buffer->page_offset ^= truesize;\n+#else\n+\t\trx_buffer->page_offset += truesize;\n+#endif\n+\t} else {\n+\t\t/* buffer is unused, reset bias back to rx_buffer */\n+\t\trx_buffer->pagecnt_bias++;\n+\t}\n \n \treturn skb;\n }\n@@ -1297,9 +1280,18 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)\n \n \t\trx_buffer = i40e_get_rx_buffer(rx_ring, size);\n \n-\t\tskb = i40evf_fetch_rx_buffer(rx_ring, rx_buffer, skb, size);\n-\t\tif (!skb)\n+\t\t/* retrieve a buffer from the ring */\n+\t\tif (skb)\n+\t\t\ti40e_add_rx_frag(rx_ring, rx_buffer, skb, size);\n+\t\telse\n+\t\t\tskb = i40e_construct_skb(rx_ring, rx_buffer, size);\n+\n+\t\t/* exit if we failed to retrieve a buffer */\n+\t\tif (!skb) {\n+\t\t\trx_ring->rx_stats.alloc_buff_failed++;\n+\t\t\trx_buffer->pagecnt_bias++;\n \t\t\tbreak;\n+\t\t}\n \n \t\ti40e_put_rx_buffer(rx_ring, rx_buffer);\n \t\tcleaned_count++;\n", "prefixes": [ "next", "S63", "4/6" ] }