Patch Detail
GET:
Show a patch.
PATCH:
Update a patch (partial update).
PUT:
Update a patch (full update).
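
These endpoints can also be consumed programmatically by requesting plain JSON instead of the browsable ?format=api page used in the example request/response that follows. A minimal read-only sketch in Python, assuming the requests library and the public patchwork.ozlabs.org instance; the field names referenced are the ones present in the JSON response shown below:

import requests

# Fetch the same patch detail as JSON; read-only access on the public
# instance typically requires no authentication.
url = "https://patchwork.ozlabs.org/api/patches/669040/"
resp = requests.get(url, timeout=30)
resp.raise_for_status()

patch = resp.json()
print(patch["name"])   # "[net-next,v3,2/3] e1000: add initial XDP support"
print(patch["state"])  # e.g. "changes-requested"
print(patch["mbox"])   # mbox URL suitable for piping into git am
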
GET /api/patches/669040/?format=api
{ "id": 669040, "url": "http://patchwork.ozlabs.org/api/patches/669040/?format=api", "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20160912221351.5610.29043.stgit@john-Precision-Tower-5810/", "project": { "id": 46, "url": "http://patchwork.ozlabs.org/api/projects/46/?format=api", "name": "Intel Wired Ethernet development", "link_name": "intel-wired-lan", "list_id": "intel-wired-lan.osuosl.org", "list_email": "intel-wired-lan@osuosl.org", "web_url": "", "scm_url": "", "webscm_url": "", "list_archive_url": "", "list_archive_url_format": "", "commit_url_format": "" }, "msgid": "<20160912221351.5610.29043.stgit@john-Precision-Tower-5810>", "list_archive_url": null, "date": "2016-09-12T22:13:51", "name": "[net-next,v3,2/3] e1000: add initial XDP support", "commit_ref": null, "pull_url": null, "state": "changes-requested", "archived": false, "hash": "523c05de81a9e19e249cb82a32d2fd42d579aa0b", "submitter": { "id": 20028, "url": "http://patchwork.ozlabs.org/api/people/20028/?format=api", "name": "John Fastabend", "email": "john.fastabend@gmail.com" }, "delegate": { "id": 68, "url": "http://patchwork.ozlabs.org/api/users/68/?format=api", "username": "jtkirshe", "first_name": "Jeff", "last_name": "Kirsher", "email": "jeffrey.t.kirsher@intel.com" }, "mbox": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20160912221351.5610.29043.stgit@john-Precision-Tower-5810/mbox/", "series": [], "comments": "http://patchwork.ozlabs.org/api/patches/669040/comments/", "check": "pending", "checks": "http://patchwork.ozlabs.org/api/patches/669040/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<intel-wired-lan-bounces@lists.osuosl.org>", "X-Original-To": [ "incoming@patchwork.ozlabs.org", "intel-wired-lan@lists.osuosl.org" ], "Delivered-To": [ "patchwork-incoming@bilbo.ozlabs.org", "intel-wired-lan@lists.osuosl.org" ], "Received": [ "from hemlock.osuosl.org (smtp2.osuosl.org [140.211.166.133])\n\t(using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))\n\t(No client certificate requested)\n\tby ozlabs.org (Postfix) with ESMTPS id 3sY2Cc09Qcz9s9c\n\tfor <incoming@patchwork.ozlabs.org>;\n\tTue, 13 Sep 2016 08:14:19 +1000 (AEST)", "from localhost (localhost [127.0.0.1])\n\tby hemlock.osuosl.org (Postfix) with ESMTP id 57F0889A02;\n\tMon, 12 Sep 2016 22:14:18 +0000 (UTC)", "from hemlock.osuosl.org ([127.0.0.1])\n\tby localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id bywuJuiYnKp9; Mon, 12 Sep 2016 22:14:16 +0000 (UTC)", "from ash.osuosl.org (ash.osuosl.org [140.211.166.34])\n\tby hemlock.osuosl.org (Postfix) with ESMTP id CC25289936;\n\tMon, 12 Sep 2016 22:14:16 +0000 (UTC)", "from silver.osuosl.org (smtp3.osuosl.org [140.211.166.136])\n\tby ash.osuosl.org (Postfix) with ESMTP id 055971C2D29\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tMon, 12 Sep 2016 22:14:16 +0000 (UTC)", "from localhost (localhost [127.0.0.1])\n\tby silver.osuosl.org (Postfix) with ESMTP id F054D31597\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tMon, 12 Sep 2016 22:14:15 +0000 (UTC)", "from silver.osuosl.org ([127.0.0.1])\n\tby localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id FhXtTicbLz6y for <intel-wired-lan@lists.osuosl.org>;\n\tMon, 12 Sep 2016 22:14:14 +0000 (UTC)", "from mail-oi0-f67.google.com (mail-oi0-f67.google.com\n\t[209.85.218.67])\n\tby silver.osuosl.org (Postfix) with ESMTPS id D453426878\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tMon, 12 Sep 2016 22:14:13 +0000 (UTC)", "by mail-oi0-f67.google.com with SMTP id 
o7so5654120oif.3\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tMon, 12 Sep 2016 15:14:13 -0700 (PDT)", "from [127.0.1.1] ([72.168.145.204])\n\tby smtp.gmail.com with ESMTPSA id\n\tf101sm6630874otf.12.2016.09.12.15.13.59\n\t(version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);\n\tMon, 12 Sep 2016 15:14:12 -0700 (PDT)" ], "Authentication-Results": "ozlabs.org;\n\tdkim=fail reason=\"signature verification failed\" (2048-bit key;\n\tunprotected) header.d=gmail.com header.i=@gmail.com header.b=fKwA/gLl;\n\tdkim-atps=neutral", "X-Virus-Scanned": [ "amavisd-new at osuosl.org", "amavisd-new at osuosl.org" ], "X-Greylist": "domain auto-whitelisted by SQLgrey-1.7.6", "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;\n\th=from:subject:to:cc:date:message-id:in-reply-to:references\n\t:user-agent:mime-version:content-transfer-encoding;\n\tbh=86bKT/DVsQZ1lUONYvBc5dqV0BYv6AiTu3Q0QoNkcu4=;\n\tb=fKwA/gLliz5W3ldZYBMmME9+oQVXcc2Mjlvb/PlisryLtsN+cV5iJIKjqR5TLvDYxP\n\thn0phJi7y0XAR5fW7GoiL56UNSAxSbmnCHvyegb/IWhSTarlUMZKaoEhKKpnpCZTzYLa\n\t6pwNqz7C2HKOrpB5YY9mcZyy7kKjtTHACKQlcXrk71okeDaVQtpF/vFKAmGUHAGgCxCJ\n\tNq0sfduw2OCzTD5QZxSMBPHJQ2MNgMrDynQ17925w3ag4z/8LJ0BJ72Lqb7kDo6uUnbg\n\thFwqaMg1p7eYPokVvp4A09f6pp3Y8jVdpCvtQee8XOsSAVqp0bNsE6fZqQMbikIFcJLQ\n\tPZBA==", "X-Google-DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=1e100.net; s=20130820;\n\th=x-gm-message-state:from:subject:to:cc:date:message-id:in-reply-to\n\t:references:user-agent:mime-version:content-transfer-encoding;\n\tbh=86bKT/DVsQZ1lUONYvBc5dqV0BYv6AiTu3Q0QoNkcu4=;\n\tb=cyTl9yuMDn5c+hi9sDtRjAywNDV7BKWEyR1hDx/z3aj3CJkzPxCcfDC1kWYJ4zVozx\n\t59JVo5Ehb1DIbwgVPw88PSyQZfCOTVsZhLZ/tNtTYwhUH2OMzvOIPjMNX/mYKYMw0Z/8\n\ttsRcgW7yjTgSOrMYrRHldPTNvf1wCDMlhQDDUKHq34bVLRkIOapv5aJ9cvOzvQ5emEep\n\tjiTK/xeb4dyUdb7aDqKxWaSBGScWqHM1acXQXRm7+L22BVWGQNAx+LCNq0UzUsjfavFR\n\tnkhAmd0yZRN2NLDKUe6uTLHY3LWgXMcaXyba+GXISwdEHi3GqHX2a/l/FnpegfamhSwT\n\t6OTg==", "X-Gm-Message-State": "AE9vXwMglSIC1PD+M5Li37m/Zd4kSjp5j3nHlA0vc4MPaTwrnsGll7aMY4BJDHnDRZoARA==", "X-Received": "by 10.157.2.71 with SMTP id 65mr29961074otb.120.1473718453140;\n\tMon, 12 Sep 2016 15:14:13 -0700 (PDT)", "From": "John Fastabend <john.fastabend@gmail.com>", "X-Google-Original-From": "John Fastabend <john.r.fastabend@intel.com>", "To": "bblanco@plumgrid.com, john.fastabend@gmail.com,\n\talexei.starovoitov@gmail.com, jeffrey.t.kirsher@intel.com,\n\tbrouer@redhat.com, davem@davemloft.net", "Date": "Mon, 12 Sep 2016 15:13:51 -0700", "Message-ID": "<20160912221351.5610.29043.stgit@john-Precision-Tower-5810>", "In-Reply-To": "<20160912220312.5610.77528.stgit@john-Precision-Tower-5810>", "References": "<20160912220312.5610.77528.stgit@john-Precision-Tower-5810>", "User-Agent": "StGit/0.17.1-dirty", "MIME-Version": "1.0", "Cc": "xiyou.wangcong@gmail.com, intel-wired-lan@lists.osuosl.org,\n\tu9012063@gmail.com, netdev@vger.kernel.org", "Subject": "[Intel-wired-lan] [net-next PATCH v3 2/3] e1000: add initial XDP\n\tsupport", "X-BeenThere": "intel-wired-lan@lists.osuosl.org", "X-Mailman-Version": "2.1.18-1", "Precedence": "list", "List-Id": "Intel Wired Ethernet Linux Kernel Driver Development\n\t<intel-wired-lan.lists.osuosl.org>", "List-Unsubscribe": "<http://lists.osuosl.org/mailman/options/intel-wired-lan>, \n\t<mailto:intel-wired-lan-request@lists.osuosl.org?subject=unsubscribe>", "List-Archive": "<http://lists.osuosl.org/pipermail/intel-wired-lan/>", "List-Post": "<mailto:intel-wired-lan@lists.osuosl.org>", "List-Help": 
"<mailto:intel-wired-lan-request@lists.osuosl.org?subject=help>", "List-Subscribe": "<http://lists.osuosl.org/mailman/listinfo/intel-wired-lan>, \n\t<mailto:intel-wired-lan-request@lists.osuosl.org?subject=subscribe>", "Content-Type": "text/plain; charset=\"us-ascii\"", "Content-Transfer-Encoding": "7bit", "Errors-To": "intel-wired-lan-bounces@lists.osuosl.org", "Sender": "\"Intel-wired-lan\" <intel-wired-lan-bounces@lists.osuosl.org>" }, "content": "From: Alexei Starovoitov <ast@fb.com>\n\nThis patch adds initial support for XDP on e1000 driver. Note e1000\ndriver does not support page recycling in general which could be\nadded as a further improvement. However XDP_DROP case will recycle.\nXDP_TX and XDP_PASS do not support recycling.\n\ne1000 only supports a single tx queue at this time so the queue\nis shared between xdp program and Linux stack. It is possible for\nan XDP program to starve the stack in this model.\n\nThe XDP program will drop packets on XDP_TX errors. This can occur\nwhen the tx descriptors are exhausted. This behavior is the same\nfor both shared queue models like e1000 and dedicated tx queue\nmodels used in multiqueue devices. However if both the stack and\nXDP are transmitting packets it is perhaps more likely to occur in\nthe shared queue model. Further refinement to the XDP model may be\npossible in the future.\n\nI tested this patch running e1000 in a VM using KVM over a tap\ndevice.\n\nCC: William Tu <u9012063@gmail.com>\nSigned-off-by: Alexei Starovoitov <ast@kernel.org>\nSigned-off-by: John Fastabend <john.r.fastabend@intel.com>\n---\n drivers/net/ethernet/intel/e1000/e1000.h | 2 \n drivers/net/ethernet/intel/e1000/e1000_main.c | 176 +++++++++++++++++++++++++\n 2 files changed, 175 insertions(+), 3 deletions(-)", "diff": "diff --git a/drivers/net/ethernet/intel/e1000/e1000.h b/drivers/net/ethernet/intel/e1000/e1000.h\nindex d7bdea7..5cf8a0a 100644\n--- a/drivers/net/ethernet/intel/e1000/e1000.h\n+++ b/drivers/net/ethernet/intel/e1000/e1000.h\n@@ -150,6 +150,7 @@ struct e1000_adapter;\n */\n struct e1000_tx_buffer {\n \tstruct sk_buff *skb;\n+\tstruct page *page;\n \tdma_addr_t dma;\n \tunsigned long time_stamp;\n \tu16 length;\n@@ -279,6 +280,7 @@ struct e1000_adapter {\n \t\t\t struct e1000_rx_ring *rx_ring,\n \t\t\t int cleaned_count);\n \tstruct e1000_rx_ring *rx_ring; /* One per active queue */\n+\tstruct bpf_prog *prog;\n \tstruct napi_struct napi;\n \n \tint num_tx_queues;\ndiff --git a/drivers/net/ethernet/intel/e1000/e1000_main.c b/drivers/net/ethernet/intel/e1000/e1000_main.c\nindex 62a7f8d..232b927 100644\n--- a/drivers/net/ethernet/intel/e1000/e1000_main.c\n+++ b/drivers/net/ethernet/intel/e1000/e1000_main.c\n@@ -32,6 +32,7 @@\n #include <linux/prefetch.h>\n #include <linux/bitops.h>\n #include <linux/if_vlan.h>\n+#include <linux/bpf.h>\n \n char e1000_driver_name[] = \"e1000\";\n static char e1000_driver_string[] = \"Intel(R) PRO/1000 Network Driver\";\n@@ -842,6 +843,44 @@ static int e1000_set_features(struct net_device *netdev,\n \treturn 0;\n }\n \n+static int e1000_xdp_set(struct net_device *netdev, struct bpf_prog *prog)\n+{\n+\tstruct e1000_adapter *adapter = netdev_priv(netdev);\n+\tstruct bpf_prog *old_prog;\n+\n+\told_prog = xchg(&adapter->prog, prog);\n+\tif (old_prog) {\n+\t\tsynchronize_net();\n+\t\tbpf_prog_put(old_prog);\n+\t}\n+\n+\tif (netif_running(netdev))\n+\t\te1000_reinit_locked(adapter);\n+\telse\n+\t\te1000_reset(adapter);\n+\treturn 0;\n+}\n+\n+static bool e1000_xdp_attached(struct net_device *dev)\n+{\n+\tstruct 
e1000_adapter *priv = netdev_priv(dev);\n+\n+\treturn !!priv->prog;\n+}\n+\n+static int e1000_xdp(struct net_device *dev, struct netdev_xdp *xdp)\n+{\n+\tswitch (xdp->command) {\n+\tcase XDP_SETUP_PROG:\n+\t\treturn e1000_xdp_set(dev, xdp->prog);\n+\tcase XDP_QUERY_PROG:\n+\t\txdp->prog_attached = e1000_xdp_attached(dev);\n+\t\treturn 0;\n+\tdefault:\n+\t\treturn -EINVAL;\n+\t}\n+}\n+\n static const struct net_device_ops e1000_netdev_ops = {\n \t.ndo_open\t\t= e1000_open,\n \t.ndo_stop\t\t= e1000_close,\n@@ -860,6 +899,7 @@ static const struct net_device_ops e1000_netdev_ops = {\n #endif\n \t.ndo_fix_features\t= e1000_fix_features,\n \t.ndo_set_features\t= e1000_set_features,\n+\t.ndo_xdp\t\t= e1000_xdp,\n };\n \n /**\n@@ -1276,6 +1316,9 @@ static void e1000_remove(struct pci_dev *pdev)\n \te1000_down_and_stop(adapter);\n \te1000_release_manageability(adapter);\n \n+\tif (adapter->prog)\n+\t\tbpf_prog_put(adapter->prog);\n+\n \tunregister_netdev(netdev);\n \n \te1000_phy_hw_reset(hw);\n@@ -1859,7 +1902,7 @@ static void e1000_configure_rx(struct e1000_adapter *adapter)\n \tstruct e1000_hw *hw = &adapter->hw;\n \tu32 rdlen, rctl, rxcsum;\n \n-\tif (adapter->netdev->mtu > ETH_DATA_LEN) {\n+\tif (adapter->netdev->mtu > ETH_DATA_LEN || adapter->prog) {\n \t\trdlen = adapter->rx_ring[0].count *\n \t\t\tsizeof(struct e1000_rx_desc);\n \t\tadapter->clean_rx = e1000_clean_jumbo_rx_irq;\n@@ -1973,6 +2016,11 @@ e1000_unmap_and_free_tx_resource(struct e1000_adapter *adapter,\n \t\tdev_kfree_skb_any(buffer_info->skb);\n \t\tbuffer_info->skb = NULL;\n \t}\n+\tif (buffer_info->page) {\n+\t\tput_page(buffer_info->page);\n+\t\tbuffer_info->page = NULL;\n+\t}\n+\n \tbuffer_info->time_stamp = 0;\n \t/* buffer_info must be completely set up in the transmit path */\n }\n@@ -3298,6 +3346,69 @@ static netdev_tx_t e1000_xmit_frame(struct sk_buff *skb,\n \treturn NETDEV_TX_OK;\n }\n \n+static void e1000_tx_map_rxpage(struct e1000_tx_ring *tx_ring,\n+\t\t\t\tstruct e1000_rx_buffer *rx_buffer_info,\n+\t\t\t\tunsigned int len)\n+{\n+\tstruct e1000_tx_buffer *buffer_info;\n+\tunsigned int i = tx_ring->next_to_use;\n+\n+\tbuffer_info = &tx_ring->buffer_info[i];\n+\n+\tbuffer_info->length = len;\n+\tbuffer_info->time_stamp = jiffies;\n+\tbuffer_info->mapped_as_page = false;\n+\tbuffer_info->dma = rx_buffer_info->dma;\n+\tbuffer_info->next_to_watch = i;\n+\tbuffer_info->page = rx_buffer_info->rxbuf.page;\n+\n+\ttx_ring->buffer_info[i].skb = NULL;\n+\ttx_ring->buffer_info[i].segs = 1;\n+\ttx_ring->buffer_info[i].bytecount = len;\n+\ttx_ring->buffer_info[i].next_to_watch = i;\n+\n+\trx_buffer_info->rxbuf.page = NULL;\n+}\n+\n+static void e1000_xmit_raw_frame(struct e1000_rx_buffer *rx_buffer_info,\n+\t\t\t\t u32 len,\n+\t\t\t\t struct net_device *netdev,\n+\t\t\t\t struct e1000_adapter *adapter)\n+{\n+\tstruct netdev_queue *txq = netdev_get_tx_queue(netdev, 0);\n+\tstruct e1000_hw *hw = &adapter->hw;\n+\tstruct e1000_tx_ring *tx_ring;\n+\n+\tif (len > E1000_MAX_DATA_PER_TXD)\n+\t\treturn;\n+\n+\t/* e1000 only support a single txq at the moment so the queue is being\n+\t * shared with stack. To support this requires locking to ensure the\n+\t * stack and XDP are not running at the same time. 
Devices with\n+\t * multiple queues should allocate a separate queue space.\n+\t */\n+\tHARD_TX_LOCK(netdev, txq, smp_processor_id());\n+\n+\ttx_ring = adapter->tx_ring;\n+\n+\tif (E1000_DESC_UNUSED(tx_ring) < 2) {\n+\t\tHARD_TX_UNLOCK(netdev, txq);\n+\t\treturn;\n+\t}\n+\n+\tif (netif_xmit_frozen_or_stopped(txq))\n+\t\treturn;\n+\n+\te1000_tx_map_rxpage(tx_ring, rx_buffer_info, len);\n+\tnetdev_sent_queue(netdev, len);\n+\te1000_tx_queue(adapter, tx_ring, 0/*tx_flags*/, 1);\n+\n+\twritel(tx_ring->next_to_use, hw->hw_addr + tx_ring->tdt);\n+\tmmiowb();\n+\n+\tHARD_TX_UNLOCK(netdev, txq);\n+}\n+\n #define NUM_REGS 38 /* 1 based count */\n static void e1000_regdump(struct e1000_adapter *adapter)\n {\n@@ -4139,6 +4250,19 @@ static struct sk_buff *e1000_alloc_rx_skb(struct e1000_adapter *adapter,\n \treturn skb;\n }\n \n+static inline int e1000_call_bpf(struct bpf_prog *prog, void *data,\n+\t\t\t\t unsigned int length)\n+{\n+\tstruct xdp_buff xdp;\n+\tint ret;\n+\n+\txdp.data = data;\n+\txdp.data_end = data + length;\n+\tret = BPF_PROG_RUN(prog, (void *)&xdp);\n+\n+\treturn ret;\n+}\n+\n /**\n * e1000_clean_jumbo_rx_irq - Send received data up the network stack; legacy\n * @adapter: board private structure\n@@ -4157,12 +4281,15 @@ static bool e1000_clean_jumbo_rx_irq(struct e1000_adapter *adapter,\n \tstruct pci_dev *pdev = adapter->pdev;\n \tstruct e1000_rx_desc *rx_desc, *next_rxd;\n \tstruct e1000_rx_buffer *buffer_info, *next_buffer;\n+\tstruct bpf_prog *prog;\n \tu32 length;\n \tunsigned int i;\n \tint cleaned_count = 0;\n \tbool cleaned = false;\n \tunsigned int total_rx_bytes = 0, total_rx_packets = 0;\n \n+\trcu_read_lock(); /* rcu lock needed here to protect xdp programs */\n+\tprog = READ_ONCE(adapter->prog);\n \ti = rx_ring->next_to_clean;\n \trx_desc = E1000_RX_DESC(*rx_ring, i);\n \tbuffer_info = &rx_ring->buffer_info[i];\n@@ -4188,12 +4315,54 @@ static bool e1000_clean_jumbo_rx_irq(struct e1000_adapter *adapter,\n \n \t\tcleaned = true;\n \t\tcleaned_count++;\n+\t\tlength = le16_to_cpu(rx_desc->length);\n+\n+\t\tif (prog) {\n+\t\t\tstruct page *p = buffer_info->rxbuf.page;\n+\t\t\tdma_addr_t dma = buffer_info->dma;\n+\t\t\tint act;\n+\n+\t\t\tif (unlikely(!(status & E1000_RXD_STAT_EOP))) {\n+\t\t\t\t/* attached bpf disallows larger than page\n+\t\t\t\t * packets, so this is hw error or corruption\n+\t\t\t\t */\n+\t\t\t\tpr_info_once(\"%s buggy !eop\\n\", netdev->name);\n+\t\t\t\tbreak;\n+\t\t\t}\n+\t\t\tif (unlikely(rx_ring->rx_skb_top)) {\n+\t\t\t\tpr_info_once(\"%s ring resizing bug\\n\",\n+\t\t\t\t\t netdev->name);\n+\t\t\t\tbreak;\n+\t\t\t}\n+\t\t\tdma_sync_single_for_cpu(&pdev->dev, dma,\n+\t\t\t\t\t\tlength, DMA_FROM_DEVICE);\n+\t\t\tact = e1000_call_bpf(prog, page_address(p), length);\n+\t\t\tswitch (act) {\n+\t\t\tcase XDP_PASS:\n+\t\t\t\tbreak;\n+\t\t\tcase XDP_TX:\n+\t\t\t\tdma_sync_single_for_device(&pdev->dev,\n+\t\t\t\t\t\t\t dma,\n+\t\t\t\t\t\t\t length,\n+\t\t\t\t\t\t\t DMA_TO_DEVICE);\n+\t\t\t\te1000_xmit_raw_frame(buffer_info, length,\n+\t\t\t\t\t\t netdev, adapter);\n+\t\t\tcase XDP_DROP:\n+\t\t\tdefault:\n+\t\t\t\t/* re-use mapped page. 
keep buffer_info->dma\n+\t\t\t\t * as-is, so that e1000_alloc_jumbo_rx_buffers\n+\t\t\t\t * only needs to put it back into rx ring\n+\t\t\t\t */\n+\t\t\t\ttotal_rx_bytes += length;\n+\t\t\t\ttotal_rx_packets++;\n+\t\t\t\tgoto next_desc;\n+\t\t\t}\n+\t\t}\n+\n \t\tdma_unmap_page(&pdev->dev, buffer_info->dma,\n \t\t\t adapter->rx_buffer_len, DMA_FROM_DEVICE);\n \t\tbuffer_info->dma = 0;\n \n-\t\tlength = le16_to_cpu(rx_desc->length);\n-\n \t\t/* errors is only valid for DD + EOP descriptors */\n \t\tif (unlikely((status & E1000_RXD_STAT_EOP) &&\n \t\t (rx_desc->errors & E1000_RXD_ERR_FRAME_ERR_MASK))) {\n@@ -4327,6 +4496,7 @@ next_desc:\n \t\trx_desc = next_rxd;\n \t\tbuffer_info = next_buffer;\n \t}\n+\trcu_read_unlock();\n \trx_ring->next_to_clean = i;\n \n \tcleaned_count = E1000_DESC_UNUSED(rx_ring);\n", "prefixes": [ "net-next", "v3", "2/3" ] }
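
As a usage note for the PATCH and PUT methods listed at the top, a hedged sketch of a partial update follows. It assumes Patchwork's token authentication and an account with sufficient permissions (submitter, maintainer, or delegate); API_TOKEN and the target state name "accepted" are placeholders, while the writable fields used ("state", "archived") are taken from the response above:

import requests

API_TOKEN = "0123456789abcdef"  # hypothetical token, replace with your own
url = "https://patchwork.ozlabs.org/api/patches/669040/"

# Partially update the patch: set a new state and archive it.
resp = requests.patch(
    url,
    headers={"Authorization": f"Token {API_TOKEN}"},
    json={"state": "accepted", "archived": True},
)
resp.raise_for_status()
print(resp.json()["state"])

A PUT request works the same way but is expected to carry a full representation of the writable fields rather than only the ones being changed.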