From patchwork Tue Jun 23 17:32:13 2020
X-Patchwork-Submitter: Marcelo Henrique Cerri
X-Patchwork-Id: 1315393
From: Marcelo Henrique Cerri
To: kernel-team@lists.ubuntu.com
Subject: [bionic:linux-azure][PATCH 1/3] hv_netvsc: Add XDP support
Date: Tue, 23 Jun 2020 14:32:13 -0300
Message-Id: <20200623173215.346858-2-marcelo.cerri@canonical.com>
In-Reply-To: <20200623173215.346858-1-marcelo.cerri@canonical.com>
References: <20200623173215.346858-1-marcelo.cerri@canonical.com>

From: Haiyang Zhang

BugLink: https://bugs.launchpad.net/bugs/1877654

This patch adds support for XDP in native mode to the hv_netvsc driver and
transparently sets the XDP program on the associated VF NIC as well.

Setting / unsetting an XDP program on the synthetic NIC (netvsc) propagates
to the VF NIC automatically. Setting / unsetting an XDP program on the VF
NIC directly is not recommended: it is not propagated to the synthetic NIC
and may be overwritten when a program is set on the synthetic NIC.

The Azure/Hyper-V synthetic NIC receive buffer doesn't provide headroom for
XDP. We considered reusing the RNDIS header space, but it is too small, so
we copy the packets to a page buffer for XDP instead. Most VMs on Azure have
Accelerated Networking (SR-IOV) enabled, so most packets run on the VF NIC
and the synthetic NIC is only a fallback data path; the extra copy on netvsc
therefore does not impact performance significantly.

An XDP program cannot run with LRO (RSC) enabled, so LRO must be disabled
before running XDP:
    ethtool -K eth0 lro off

XDP actions not yet supported: XDP_REDIRECT

Signed-off-by: Haiyang Zhang
Signed-off-by: David S. Miller
(cherry picked from commit 351e1581395fcc7fb952bbd7dda01238f69968fd)
Signed-off-by: Marcelo Henrique Cerri
---
 drivers/net/hyperv/Makefile       |   2 +-
 drivers/net/hyperv/hyperv_net.h   |  21 ++-
 drivers/net/hyperv/netvsc.c       |  31 ++++-
 drivers/net/hyperv/netvsc_bpf.c   | 209 ++++++++++++++++++++++++++++++
 drivers/net/hyperv/netvsc_drv.c   | 183 +++++++++++++++++++++-----
 drivers/net/hyperv/rndis_filter.c |   2 +-
 6 files changed, 409 insertions(+), 39 deletions(-)
 create mode 100644 drivers/net/hyperv/netvsc_bpf.c

diff --git a/drivers/net/hyperv/Makefile b/drivers/net/hyperv/Makefile
index 3a2aa0708166..0db7ccaec4a4 100644
--- a/drivers/net/hyperv/Makefile
+++ b/drivers/net/hyperv/Makefile
@@ -1,4 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0-only
 obj-$(CONFIG_HYPERV_NET) += hv_netvsc.o
 
-hv_netvsc-y := netvsc_drv.o netvsc.o rndis_filter.o netvsc_trace.o
+hv_netvsc-y := netvsc_drv.o netvsc.o rndis_filter.o netvsc_trace.o netvsc_bpf.o
diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
index 64b90451443a..7b4dc546bd45 100644
--- a/drivers/net/hyperv/hyperv_net.h
+++ b/drivers/net/hyperv/hyperv_net.h
@@ -145,6 +145,8 @@ struct netvsc_device_info {
         u32  send_section_size;
         u32  recv_section_size;
 
+        struct bpf_prog *bprog;
+
         u8 rss_key[NETVSC_HASH_KEYLEN];
 };
 
@@ -192,7 +194,8 @@ int netvsc_send(struct net_device *net,
                 struct hv_netvsc_packet *packet,
                 struct rndis_message *rndis_msg,
                 struct hv_page_buffer *page_buffer,
-                struct sk_buff *skb);
+                struct sk_buff *skb,
+                bool xdp_tx);
 void netvsc_linkstatus_callback(struct net_device *net,
                                 struct rndis_message *resp);
 int netvsc_recv_callback(struct net_device *net,
@@ -201,6 +204,16 @@ int netvsc_recv_callback(struct net_device *net,
 void netvsc_channel_cb(void *context);
 int netvsc_poll(struct napi_struct *napi, int budget);
 
+u32 netvsc_run_xdp(struct net_device *ndev, struct netvsc_channel *nvchan,
+                   struct xdp_buff *xdp);
+unsigned int netvsc_xdp_fraglen(unsigned int len);
+struct bpf_prog *netvsc_xdp_get(struct netvsc_device *nvdev);
+int netvsc_xdp_set(struct net_device *dev, struct bpf_prog *prog,
+                   struct netlink_ext_ack *extack,
+                   struct netvsc_device *nvdev);
+int
netvsc_vf_setxdp(struct net_device *vf_netdev, struct bpf_prog *prog); +int netvsc_bpf(struct net_device *dev, struct netdev_bpf *bpf); + int rndis_set_subchannel(struct net_device *ndev, struct netvsc_device *nvdev, struct netvsc_device_info *dev_info); @@ -834,6 +847,8 @@ struct nvsp_message { #define RNDIS_MAX_PKT_DEFAULT 8 #define RNDIS_PKT_ALIGN_DEFAULT 8 +#define NETVSC_XDP_HDRM 256 + struct multi_send_data { struct sk_buff *skb; /* skb containing the pkt */ struct hv_netvsc_packet *pkt; /* netvsc pkt pending */ @@ -868,6 +883,7 @@ struct netvsc_stats { u64 bytes; u64 broadcast; u64 multicast; + u64 xdp_drop; struct u64_stats_sync syncp; }; @@ -973,6 +989,9 @@ struct netvsc_channel { atomic_t queue_sends; struct nvsc_rsc rsc; + struct bpf_prog __rcu *bpf_prog; + struct xdp_rxq_info xdp_rxq; + struct netvsc_stats tx_stats; struct netvsc_stats rx_stats; }; diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c index 6c0732fc8c25..1b320bcf150a 100644 --- a/drivers/net/hyperv/netvsc.c +++ b/drivers/net/hyperv/netvsc.c @@ -122,8 +122,10 @@ static void free_netvsc_device(struct rcu_head *head) vfree(nvdev->send_buf); kfree(nvdev->send_section_map); - for (i = 0; i < VRSS_CHANNEL_MAX; i++) + for (i = 0; i < VRSS_CHANNEL_MAX; i++) { + xdp_rxq_info_unreg(&nvdev->chan_table[i].xdp_rxq); vfree(nvdev->chan_table[i].mrc.slots); + } kfree(nvdev); } @@ -900,7 +902,8 @@ int netvsc_send(struct net_device *ndev, struct hv_netvsc_packet *packet, struct rndis_message *rndis_msg, struct hv_page_buffer *pb, - struct sk_buff *skb) + struct sk_buff *skb, + bool xdp_tx) { struct net_device_context *ndev_ctx = netdev_priv(ndev); struct netvsc_device *net_device @@ -923,10 +926,11 @@ int netvsc_send(struct net_device *ndev, packet->send_buf_index = NETVSC_INVALID_INDEX; packet->cp_partial = false; - /* Send control message directly without accessing msd (Multi-Send - * Data) field which may be changed during data packet processing. + /* Send a control message or XDP packet directly without accessing + * msd (Multi-Send Data) field which may be changed during data packet + * processing. */ - if (!skb) + if (!skb || xdp_tx) return netvsc_send_pkt(device, packet, net_device, pb, skb); /* batch packets in send buffer if possible */ @@ -1392,6 +1396,21 @@ struct netvsc_device *netvsc_device_add(struct hv_device *device, nvchan->net_device = net_device; u64_stats_init(&nvchan->tx_stats.syncp); u64_stats_init(&nvchan->rx_stats.syncp); + + ret = xdp_rxq_info_reg(&nvchan->xdp_rxq, ndev, i); + + if (ret) { + netdev_err(ndev, "xdp_rxq_info_reg fail: %d\n", ret); + goto cleanup2; + } + + ret = xdp_rxq_info_reg_mem_model(&nvchan->xdp_rxq, + MEM_TYPE_PAGE_SHARED, NULL); + + if (ret) { + netdev_err(ndev, "xdp reg_mem_model fail: %d\n", ret); + goto cleanup2; + } } /* Enable NAPI handler before init callbacks */ @@ -1437,6 +1456,8 @@ struct netvsc_device *netvsc_device_add(struct hv_device *device, cleanup: netif_napi_del(&net_device->chan_table[0].napi); + +cleanup2: free_netvsc_device(&net_device->rcu); return ERR_PTR(ret); diff --git a/drivers/net/hyperv/netvsc_bpf.c b/drivers/net/hyperv/netvsc_bpf.c new file mode 100644 index 000000000000..20adfe544294 --- /dev/null +++ b/drivers/net/hyperv/netvsc_bpf.c @@ -0,0 +1,209 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (c) 2019, Microsoft Corporation. 
+ * + * Author: + * Haiyang Zhang + */ + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + +#include +#include +#include +#include +#include +#include +#include + +#include +#include + +#include "hyperv_net.h" + +u32 netvsc_run_xdp(struct net_device *ndev, struct netvsc_channel *nvchan, + struct xdp_buff *xdp) +{ + void *data = nvchan->rsc.data[0]; + u32 len = nvchan->rsc.len[0]; + struct page *page = NULL; + struct bpf_prog *prog; + u32 act = XDP_PASS; + + xdp->data_hard_start = NULL; + + rcu_read_lock(); + prog = rcu_dereference(nvchan->bpf_prog); + + if (!prog) + goto out; + + /* allocate page buffer for data */ + page = alloc_page(GFP_ATOMIC); + if (!page) { + act = XDP_DROP; + goto out; + } + + xdp->data_hard_start = page_address(page); + xdp->data = xdp->data_hard_start + NETVSC_XDP_HDRM; + xdp_set_data_meta_invalid(xdp); + xdp->data_end = xdp->data + len; + xdp->rxq = &nvchan->xdp_rxq; + xdp->handle = 0; + + memcpy(xdp->data, data, len); + + act = bpf_prog_run_xdp(prog, xdp); + + switch (act) { + case XDP_PASS: + case XDP_TX: + case XDP_DROP: + break; + + case XDP_ABORTED: + trace_xdp_exception(ndev, prog, act); + break; + + default: + bpf_warn_invalid_xdp_action(act); + } + +out: + rcu_read_unlock(); + + if (page && act != XDP_PASS && act != XDP_TX) { + __free_page(page); + xdp->data_hard_start = NULL; + } + + return act; +} + +unsigned int netvsc_xdp_fraglen(unsigned int len) +{ + return SKB_DATA_ALIGN(len) + + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); +} + +struct bpf_prog *netvsc_xdp_get(struct netvsc_device *nvdev) +{ + return rtnl_dereference(nvdev->chan_table[0].bpf_prog); +} + +int netvsc_xdp_set(struct net_device *dev, struct bpf_prog *prog, + struct netlink_ext_ack *extack, + struct netvsc_device *nvdev) +{ + struct bpf_prog *old_prog; + int buf_max, i; + + old_prog = netvsc_xdp_get(nvdev); + + if (!old_prog && !prog) + return 0; + + buf_max = NETVSC_XDP_HDRM + netvsc_xdp_fraglen(dev->mtu + ETH_HLEN); + if (prog && buf_max > PAGE_SIZE) { + netdev_err(dev, "XDP: mtu:%u too large, buf_max:%u\n", + dev->mtu, buf_max); + NL_SET_ERR_MSG_MOD(extack, "XDP: mtu too large"); + + return -EOPNOTSUPP; + } + + if (prog && (dev->features & NETIF_F_LRO)) { + netdev_err(dev, "XDP: not support LRO\n"); + NL_SET_ERR_MSG_MOD(extack, "XDP: not support LRO"); + + return -EOPNOTSUPP; + } + + if (prog) + bpf_prog_add(prog, nvdev->num_chn); + + for (i = 0; i < nvdev->num_chn; i++) + rcu_assign_pointer(nvdev->chan_table[i].bpf_prog, prog); + + if (old_prog) + for (i = 0; i < nvdev->num_chn; i++) + bpf_prog_put(old_prog); + + return 0; +} + +int netvsc_vf_setxdp(struct net_device *vf_netdev, struct bpf_prog *prog) +{ + struct netdev_bpf xdp; + bpf_op_t ndo_bpf; + + ASSERT_RTNL(); + + if (!vf_netdev) + return 0; + + ndo_bpf = vf_netdev->netdev_ops->ndo_bpf; + if (!ndo_bpf) + return 0; + + memset(&xdp, 0, sizeof(xdp)); + + xdp.command = XDP_SETUP_PROG; + xdp.prog = prog; + + return ndo_bpf(vf_netdev, &xdp); +} + +static u32 netvsc_xdp_query(struct netvsc_device *nvdev) +{ + struct bpf_prog *prog = netvsc_xdp_get(nvdev); + + if (prog) + return prog->aux->id; + + return 0; +} + +int netvsc_bpf(struct net_device *dev, struct netdev_bpf *bpf) +{ + struct net_device_context *ndevctx = netdev_priv(dev); + struct netvsc_device *nvdev = rtnl_dereference(ndevctx->nvdev); + struct net_device *vf_netdev = rtnl_dereference(ndevctx->vf_netdev); + struct netlink_ext_ack *extack = bpf->extack; + int ret; + + if (!nvdev || nvdev->destroy) { + if (bpf->command == XDP_QUERY_PROG) { + bpf->prog_id = 0; + return 0; 
/* Query must always succeed */ + } else { + return -ENODEV; + } + } + + switch (bpf->command) { + case XDP_SETUP_PROG: + ret = netvsc_xdp_set(dev, bpf->prog, extack, nvdev); + + if (ret) + return ret; + + ret = netvsc_vf_setxdp(vf_netdev, bpf->prog); + + if (ret) { + netdev_err(dev, "vf_setxdp failed:%d\n", ret); + NL_SET_ERR_MSG_MOD(extack, "vf_setxdp failed"); + + netvsc_xdp_set(dev, NULL, extack, nvdev); + } + + return ret; + + case XDP_QUERY_PROG: + bpf->prog_id = netvsc_xdp_query(nvdev); + return 0; + + default: + return -EINVAL; + } +} diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c index 06185e3eacaa..a33b088fda24 100644 --- a/drivers/net/hyperv/netvsc_drv.c +++ b/drivers/net/hyperv/netvsc_drv.c @@ -25,6 +25,7 @@ #include #include #include +#include #include #include @@ -519,7 +520,7 @@ static int netvsc_vf_xmit(struct net_device *net, struct net_device *vf_netdev, return rc; } -static int netvsc_start_xmit(struct sk_buff *skb, struct net_device *net) +static int netvsc_xmit(struct sk_buff *skb, struct net_device *net, bool xdp_tx) { struct net_device_context *net_device_ctx = netdev_priv(net); struct hv_netvsc_packet *packet = NULL; @@ -686,7 +687,7 @@ static int netvsc_start_xmit(struct sk_buff *skb, struct net_device *net) /* timestamp packet in software */ skb_tx_timestamp(skb); - ret = netvsc_send(net, packet, rndis_msg, pb, skb); + ret = netvsc_send(net, packet, rndis_msg, pb, skb, xdp_tx); if (likely(ret == 0)) return NETDEV_TX_OK; @@ -709,6 +710,11 @@ static int netvsc_start_xmit(struct sk_buff *skb, struct net_device *net) goto drop; } +static int netvsc_start_xmit(struct sk_buff *skb, struct net_device *ndev) +{ + return netvsc_xmit(skb, ndev, false); +} + /* * netvsc_linkstatus_callback - Link up/down notification */ @@ -751,6 +757,22 @@ void netvsc_linkstatus_callback(struct net_device *net, schedule_delayed_work(&ndev_ctx->dwork, 0); } +static void netvsc_xdp_xmit(struct sk_buff *skb, struct net_device *ndev) +{ + int rc; + + skb->queue_mapping = skb_get_rx_queue(skb); + __skb_push(skb, ETH_HLEN); + + rc = netvsc_xmit(skb, ndev, true); + + if (dev_xmit_complete(rc)) + return; + + dev_kfree_skb_any(skb); + ndev->stats.tx_dropped++; +} + static void netvsc_comp_ipcsum(struct sk_buff *skb) { struct iphdr *iph = (struct iphdr *)skb->data; @@ -760,25 +782,45 @@ static void netvsc_comp_ipcsum(struct sk_buff *skb) } static struct sk_buff *netvsc_alloc_recv_skb(struct net_device *net, - struct netvsc_channel *nvchan) + struct netvsc_channel *nvchan, + struct xdp_buff *xdp) { struct napi_struct *napi = &nvchan->napi; const struct ndis_pkt_8021q_info *vlan = nvchan->rsc.vlan; const struct ndis_tcp_ip_checksum_info *csum_info = nvchan->rsc.csum_info; struct sk_buff *skb; + void *xbuf = xdp->data_hard_start; int i; - skb = napi_alloc_skb(napi, nvchan->rsc.pktlen); - if (!skb) - return skb; + if (xbuf) { + unsigned int hdroom = xdp->data - xdp->data_hard_start; + unsigned int xlen = xdp->data_end - xdp->data; + unsigned int frag_size = netvsc_xdp_fraglen(hdroom + xlen); - /* - * Copy to skb. 
This copy is needed here since the memory pointed by - * hv_netvsc_packet cannot be deallocated - */ - for (i = 0; i < nvchan->rsc.cnt; i++) - skb_put_data(skb, nvchan->rsc.data[i], nvchan->rsc.len[i]); + skb = build_skb(xbuf, frag_size); + + if (!skb) { + __free_page(virt_to_page(xbuf)); + return NULL; + } + + skb_reserve(skb, hdroom); + skb_put(skb, xlen); + skb->dev = napi->dev; + } else { + skb = napi_alloc_skb(napi, nvchan->rsc.pktlen); + + if (!skb) + return NULL; + + /* Copy to skb. This copy is needed here since the memory + * pointed by hv_netvsc_packet cannot be deallocated. + */ + for (i = 0; i < nvchan->rsc.cnt; i++) + skb_put_data(skb, nvchan->rsc.data[i], + nvchan->rsc.len[i]); + } skb->protocol = eth_type_trans(skb, net); @@ -825,13 +867,25 @@ int netvsc_recv_callback(struct net_device *net, struct vmbus_channel *channel = nvchan->channel; u16 q_idx = channel->offermsg.offer.sub_channel_index; struct sk_buff *skb; - struct netvsc_stats *rx_stats; + struct netvsc_stats *rx_stats = &nvchan->rx_stats; + struct xdp_buff xdp; + u32 act; if (net->reg_state != NETREG_REGISTERED) return NVSP_STAT_FAIL; + act = netvsc_run_xdp(net, nvchan, &xdp); + + if (act != XDP_PASS && act != XDP_TX) { + u64_stats_update_begin(&rx_stats->syncp); + rx_stats->xdp_drop++; + u64_stats_update_end(&rx_stats->syncp); + + return NVSP_STAT_SUCCESS; /* consumed by XDP */ + } + /* Allocate a skb - TODO direct I/O to pages? */ - skb = netvsc_alloc_recv_skb(net, nvchan); + skb = netvsc_alloc_recv_skb(net, nvchan, &xdp); if (unlikely(!skb)) { ++net_device_ctx->eth_stats.rx_no_memory; @@ -845,7 +899,6 @@ int netvsc_recv_callback(struct net_device *net, * on the synthetic device because modifying the VF device * statistics will not work correctly. */ - rx_stats = &nvchan->rx_stats; u64_stats_update_begin(&rx_stats->syncp); rx_stats->packets++; rx_stats->bytes += nvchan->rsc.pktlen; @@ -856,6 +909,11 @@ int netvsc_recv_callback(struct net_device *net, ++rx_stats->multicast; u64_stats_update_end(&rx_stats->syncp); + if (act == XDP_TX) { + netvsc_xdp_xmit(skb, net); + return NVSP_STAT_SUCCESS; + } + napi_gro_receive(&nvchan->napi, skb); return NVSP_STAT_SUCCESS; } @@ -882,10 +940,11 @@ static void netvsc_get_channels(struct net_device *net, /* Alloc struct netvsc_device_info, and initialize it from either existing * struct netvsc_device, or from default values. 
*/ -static struct netvsc_device_info *netvsc_devinfo_get - (struct netvsc_device *nvdev) +static +struct netvsc_device_info *netvsc_devinfo_get(struct netvsc_device *nvdev) { struct netvsc_device_info *dev_info; + struct bpf_prog *prog; dev_info = kzalloc(sizeof(*dev_info), GFP_ATOMIC); @@ -893,6 +952,8 @@ static struct netvsc_device_info *netvsc_devinfo_get return NULL; if (nvdev) { + ASSERT_RTNL(); + dev_info->num_chn = nvdev->num_chn; dev_info->send_sections = nvdev->send_section_cnt; dev_info->send_section_size = nvdev->send_section_size; @@ -901,6 +962,12 @@ static struct netvsc_device_info *netvsc_devinfo_get memcpy(dev_info->rss_key, nvdev->extension->rss_key, NETVSC_HASH_KEYLEN); + + prog = netvsc_xdp_get(nvdev); + if (prog) { + bpf_prog_inc(prog); + dev_info->bprog = prog; + } } else { dev_info->num_chn = VRSS_CHANNEL_DEFAULT; dev_info->send_sections = NETVSC_DEFAULT_TX; @@ -912,6 +979,17 @@ static struct netvsc_device_info *netvsc_devinfo_get return dev_info; } +/* Free struct netvsc_device_info */ +static void netvsc_devinfo_put(struct netvsc_device_info *dev_info) +{ + if (dev_info->bprog) { + ASSERT_RTNL(); + bpf_prog_put(dev_info->bprog); + } + + kfree(dev_info); +} + static int netvsc_detach(struct net_device *ndev, struct netvsc_device *nvdev) { @@ -923,6 +1001,8 @@ static int netvsc_detach(struct net_device *ndev, if (cancel_work_sync(&nvdev->subchan_work)) nvdev->num_chn = 1; + netvsc_xdp_set(ndev, NULL, NULL, nvdev); + /* If device was up (receiving) then shutdown */ if (netif_running(ndev)) { netvsc_tx_disable(nvdev, ndev); @@ -956,7 +1036,8 @@ static int netvsc_attach(struct net_device *ndev, struct hv_device *hdev = ndev_ctx->device_ctx; struct netvsc_device *nvdev; struct rndis_device *rdev; - int ret; + struct bpf_prog *prog; + int ret = 0; nvdev = rndis_filter_device_add(hdev, dev_info); if (IS_ERR(nvdev)) @@ -972,6 +1053,13 @@ static int netvsc_attach(struct net_device *ndev, } } + prog = dev_info->bprog; + if (prog) { + ret = netvsc_xdp_set(ndev, prog, NULL, nvdev); + if (ret) + goto err1; + } + /* In any case device is now ready */ nvdev->tx_disable = false; netif_device_attach(ndev); @@ -982,7 +1070,7 @@ static int netvsc_attach(struct net_device *ndev, if (netif_running(ndev)) { ret = rndis_filter_open(nvdev); if (ret) - goto err; + goto err2; rdev = nvdev->extension; if (!rdev->link_state) @@ -991,9 +1079,10 @@ static int netvsc_attach(struct net_device *ndev, return 0; -err: +err2: netif_device_detach(ndev); +err1: rndis_filter_device_remove(hdev, nvdev); return ret; @@ -1043,7 +1132,7 @@ static int netvsc_set_channels(struct net_device *net, } out: - kfree(device_info); + netvsc_devinfo_put(device_info); return ret; } @@ -1150,7 +1239,7 @@ static int netvsc_change_mtu(struct net_device *ndev, int mtu) dev_set_mtu(vf_netdev, orig_mtu); out: - kfree(device_info); + netvsc_devinfo_put(device_info); return ret; } @@ -1375,8 +1464,8 @@ static const struct { /* statistics per queue (rx/tx packets/bytes) */ #define NETVSC_PCPU_STATS_LEN (num_present_cpus() * ARRAY_SIZE(pcpu_stats)) -/* 4 statistics per queue (rx/tx packets/bytes) */ -#define NETVSC_QUEUE_STATS_LEN(dev) ((dev)->num_chn * 4) +/* 5 statistics per queue (rx/tx packets/bytes, rx xdp_drop) */ +#define NETVSC_QUEUE_STATS_LEN(dev) ((dev)->num_chn * 5) static int netvsc_get_sset_count(struct net_device *dev, int string_set) { @@ -1408,6 +1497,7 @@ static void netvsc_get_ethtool_stats(struct net_device *dev, struct netvsc_ethtool_pcpu_stats *pcpu_sum; unsigned int start; u64 packets, bytes; + u64 xdp_drop; 
int i, j, cpu; if (!nvdev) @@ -1436,9 +1526,11 @@ static void netvsc_get_ethtool_stats(struct net_device *dev, start = u64_stats_fetch_begin_irq(&qstats->syncp); packets = qstats->packets; bytes = qstats->bytes; + xdp_drop = qstats->xdp_drop; } while (u64_stats_fetch_retry_irq(&qstats->syncp, start)); data[i++] = packets; data[i++] = bytes; + data[i++] = xdp_drop; } pcpu_sum = kvmalloc_array(num_possible_cpus(), @@ -1486,6 +1578,8 @@ static void netvsc_get_strings(struct net_device *dev, u32 stringset, u8 *data) p += ETH_GSTRING_LEN; sprintf(p, "rx_queue_%u_bytes", i); p += ETH_GSTRING_LEN; + sprintf(p, "rx_queue_%u_xdp_drop", i); + p += ETH_GSTRING_LEN; } for_each_present_cpu(cpu) { @@ -1782,10 +1876,27 @@ static int netvsc_set_ringparam(struct net_device *ndev, } out: - kfree(device_info); + netvsc_devinfo_put(device_info); return ret; } +static netdev_features_t netvsc_fix_features(struct net_device *ndev, + netdev_features_t features) +{ + struct net_device_context *ndevctx = netdev_priv(ndev); + struct netvsc_device *nvdev = rtnl_dereference(ndevctx->nvdev); + + if (!nvdev || nvdev->destroy) + return features; + + if ((features & NETIF_F_LRO) && netvsc_xdp_get(nvdev)) { + features ^= NETIF_F_LRO; + netdev_info(ndev, "Skip LRO - unsupported with XDP\n"); + } + + return features; +} + static int netvsc_set_features(struct net_device *ndev, netdev_features_t features) { @@ -1872,12 +1983,14 @@ static const struct net_device_ops device_ops = { .ndo_start_xmit = netvsc_start_xmit, .ndo_change_rx_flags = netvsc_change_rx_flags, .ndo_set_rx_mode = netvsc_set_rx_mode, + .ndo_fix_features = netvsc_fix_features, .ndo_set_features = netvsc_set_features, .ndo_change_mtu = netvsc_change_mtu, .ndo_validate_addr = eth_validate_addr, .ndo_set_mac_address = netvsc_set_mac_addr, .ndo_select_queue = netvsc_select_queue, .ndo_get_stats64 = netvsc_get_stats64, + .ndo_bpf = netvsc_bpf, }; /* @@ -2164,6 +2277,7 @@ static int netvsc_register_vf(struct net_device *vf_netdev) { struct net_device_context *net_device_ctx; struct netvsc_device *netvsc_dev; + struct bpf_prog *prog; struct net_device *ndev; int ret; @@ -2208,6 +2322,9 @@ static int netvsc_register_vf(struct net_device *vf_netdev) vf_netdev->wanted_features = ndev->features; netdev_update_features(vf_netdev); + prog = netvsc_xdp_get(netvsc_dev); + netvsc_vf_setxdp(vf_netdev, prog); + return NOTIFY_OK; } @@ -2249,6 +2366,8 @@ static int netvsc_unregister_vf(struct net_device *vf_netdev) netdev_info(ndev, "VF unregistering: %s\n", vf_netdev->name); + netvsc_vf_setxdp(vf_netdev, NULL); + netdev_rx_handler_unregister(vf_netdev); netdev_upper_dev_unlink(vf_netdev, ndev); RCU_INIT_POINTER(net_device_ctx->vf_netdev, NULL); @@ -2362,14 +2481,14 @@ static int netvsc_probe(struct hv_device *dev, list_add(&net_device_ctx->list, &netvsc_dev_list); rtnl_unlock(); - kfree(device_info); + netvsc_devinfo_put(device_info); return 0; register_failed: rtnl_unlock(); rndis_filter_device_remove(dev, nvdev); rndis_failed: - kfree(device_info); + netvsc_devinfo_put(device_info); devinfo_failed: free_percpu(net_device_ctx->vf_stats); no_stats: @@ -2397,8 +2516,10 @@ static int netvsc_remove(struct hv_device *dev) rtnl_lock(); nvdev = rtnl_dereference(ndev_ctx->nvdev); - if (nvdev) + if (nvdev) { cancel_work_sync(&nvdev->subchan_work); + netvsc_xdp_set(net, NULL, NULL, nvdev); + } /* * Call to the vsc driver to let it know that the device is being @@ -2471,11 +2592,11 @@ static int netvsc_resume(struct hv_device *dev) ret = netvsc_attach(net, device_info); - rtnl_unlock(); 
- - kfree(device_info); + netvsc_devinfo_put(device_info); net_device_ctx->saved_netvsc_dev_info = NULL; + rtnl_unlock(); + return ret; } static const struct hv_vmbus_device_id id_table[] = { diff --git a/drivers/net/hyperv/rndis_filter.c b/drivers/net/hyperv/rndis_filter.c index d4cfe455d764..376691de97ed 100644 --- a/drivers/net/hyperv/rndis_filter.c +++ b/drivers/net/hyperv/rndis_filter.c @@ -235,7 +235,7 @@ static int rndis_filter_send_request(struct rndis_device *dev, trace_rndis_send(dev->ndev, 0, &req->request_msg); rcu_read_lock_bh(); - ret = netvsc_send(dev->ndev, packet, NULL, pb, NULL); + ret = netvsc_send(dev->ndev, packet, NULL, pb, NULL, false); rcu_read_unlock_bh(); return ret; From patchwork Tue Jun 23 17:32:14 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marcelo Henrique Cerri X-Patchwork-Id: 1315394 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=lists.ubuntu.com (client-ip=91.189.94.19; helo=huckleberry.canonical.com; envelope-from=kernel-team-bounces@lists.ubuntu.com; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=canonical.com Received: from huckleberry.canonical.com (huckleberry.canonical.com [91.189.94.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 49rtdg5R4Qz9sSF; Wed, 24 Jun 2020 03:32:39 +1000 (AEST) Received: from localhost ([127.0.0.1] helo=huckleberry.canonical.com) by huckleberry.canonical.com with esmtp (Exim 4.86_2) (envelope-from ) id 1jnmmi-0002zm-DB; Tue, 23 Jun 2020 17:32:36 +0000 Received: from youngberry.canonical.com ([91.189.89.112]) by huckleberry.canonical.com with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.86_2) (envelope-from ) id 1jnmmd-0002xG-IU for kernel-team@lists.ubuntu.com; Tue, 23 Jun 2020 17:32:31 +0000 Received: from mail-qt1-f200.google.com ([209.85.160.200]) by youngberry.canonical.com with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.86_2) (envelope-from ) id 1jnmmd-0004LD-8n for kernel-team@lists.ubuntu.com; Tue, 23 Jun 2020 17:32:31 +0000 Received: by mail-qt1-f200.google.com with SMTP id u93so6801406qtd.8 for ; Tue, 23 Jun 2020 10:32:31 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=hNFRuGoAq/r4H8/tZW1VVMD7hWeX+x6DcWuQBX6+vyw=; b=HHzxFsLCUNGB4pVVKKdsGjMz8VRpaWqRumGPERSciAwRiGB5p+DhY9P12kxo/BygsL 9bRb82EVJ84m+IX4/gTlY4xQfhKd1kY1X2Hq3wWGoCCk3ZZrg9gddEg+gs/hKwLZfxAL Zdahf7E04tLklpA76W/A2zawGzYsFyBK5qIAfGMdjdx6j5/Q/QWuVi4Enxq0/RYhCk5M 0izqV7Os3CCWEfpqzg0FUxCpuQAFjNeMPYAOJQvk1C1srlWD/TGNFh9SANZ25P6PwYSr DZJfhWFbTPjrkYw263/CJ4t3mJstEV4kvEGSlla67/enEhVM0cJne3TBlSGhz9bz8vsA Jz8Q== X-Gm-Message-State: AOAM532nrdgIQHHnmTAjovrf/iS+DGNr2JwliNnAgD4ZulfSp8QuecQu 4b/Js2Jg6Bq0zrkf1Qr/f39qzx6t5i9nTMirIDapPruUquLUC9mZJW5XD/wSN9HjyV4gVZKV0nb nbFmXJsHH7Cr5oy5769zzE4E5LAxPYi9iTPVUHVi0 X-Received: by 2002:a0c:f385:: with SMTP id i5mr8200264qvk.4.1592933549704; Tue, 23 Jun 2020 10:32:29 -0700 (PDT) X-Google-Smtp-Source: ABdhPJyrdHdYmz3sgZ8t1esMY9t0m3SI4MFtxnLDe1BpWQ8XZfYm3XXBIXiEmmLUkdlEw26OJbeBAQ== X-Received: by 2002:a0c:f385:: with SMTP id i5mr8200248qvk.4.1592933549418; Tue, 23 Jun 2020 10:32:29 -0700 (PDT) 
From: Marcelo Henrique Cerri
To: kernel-team@lists.ubuntu.com
Subject: [bionic:linux-azure][PATCH 2/3] hv_netvsc: Update document for XDP support
Date: Tue, 23 Jun 2020 14:32:14 -0300
Message-Id: <20200623173215.346858-3-marcelo.cerri@canonical.com>
In-Reply-To: <20200623173215.346858-1-marcelo.cerri@canonical.com>
References: <20200623173215.346858-1-marcelo.cerri@canonical.com>

From: Haiyang Zhang

BugLink: https://bugs.launchpad.net/bugs/1877654

Add a new section to the document describing XDP support in the
hv_netvsc driver.

Signed-off-by: Haiyang Zhang
Signed-off-by: David S. Miller
(cherry picked from commit 12fa74383ed4d1ffa283f77c1e7fe038e8182405)
Signed-off-by: Marcelo Henrique Cerri
---
 .../device_drivers/microsoft/netvsc.txt       | 21 +++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/Documentation/networking/device_drivers/microsoft/netvsc.txt b/Documentation/networking/device_drivers/microsoft/netvsc.txt
index 3bfa635bbbd5..cd63556b27a0 100644
--- a/Documentation/networking/device_drivers/microsoft/netvsc.txt
+++ b/Documentation/networking/device_drivers/microsoft/netvsc.txt
@@ -82,3 +82,24 @@ Features
   contain one or more packets. The send buffer is an optimization, the driver
   will use slower method to handle very large packets or if the send buffer
   area is exhausted.
+
+  XDP support
+  -----------
+  XDP (eXpress Data Path) is a feature that runs eBPF bytecode at the early
+  stage when packets arrive at a NIC card. The goal is to increase performance
+  for packet processing, reducing the overhead of SKB allocation and other
+  upper network layers.
+
+  hv_netvsc supports XDP in native mode, and transparently sets the XDP
+  program on the associated VF NIC as well.
+
+  Setting / unsetting XDP program on synthetic NIC (netvsc) propagates to
+  VF NIC automatically. Setting / unsetting XDP program on VF NIC directly
+  is not recommended, also not propagated to synthetic NIC, and may be
+  overwritten by setting of synthetic NIC.
+
+  XDP program cannot run with LRO (RSC) enabled, so you need to disable LRO
+  before running XDP:
+  ethtool -K eth0 lro off
+
+  XDP_REDIRECT action is not yet supported.
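
To make the new documentation section concrete, the sketch below shows the
kind of program that can be attached to a netvsc interface once LRO is
disabled. The file name, map layout, and build/attach commands are
illustrative assumptions and are not part of this series; only the remarks
about XDP_PASS/XDP_TX/XDP_DROP/XDP_REDIRECT reflect the driver behaviour
described above.

/* xdp_pass_count.c - minimal XDP program (illustrative only).  It counts
 * packets in a per-CPU array map and passes everything on to the normal
 * stack, so it is safe to attach to a netvsc interface for testing.
 * Assumed build/attach steps:
 *
 *   clang -O2 -g -target bpf -c xdp_pass_count.c -o xdp_pass_count.o
 *   ethtool -K eth0 lro off                  # LRO must be off first
 *   ip link set dev eth0 xdp obj xdp_pass_count.o sec xdp
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
        __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
        __uint(max_entries, 1);
        __type(key, __u32);
        __type(value, __u64);
} pkt_count SEC(".maps");

SEC("xdp")
int xdp_pass_count(struct xdp_md *ctx)
{
        __u32 key = 0;
        __u64 *count = bpf_map_lookup_elem(&pkt_count, &key);

        if (count)
                (*count)++;

        /* XDP_PASS hands the frame to the regular stack; XDP_TX and
         * XDP_DROP also work on netvsc, but XDP_REDIRECT is not yet
         * supported by the driver.
         */
        return XDP_PASS;
}

char _license[] SEC("license") = "GPL";

Because netvsc copies every received frame into a freshly allocated page
before the program runs, the same object file behaves the same whether
traffic arrives on the synthetic NIC or on the VF NIC that the driver
mirrors the program to.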
From patchwork Tue Jun 23 17:32:15 2020
X-Patchwork-Submitter: Marcelo Henrique Cerri
X-Patchwork-Id: 1315395
From: Marcelo Henrique Cerri
To: kernel-team@lists.ubuntu.com
Subject: [bionic:linux-azure][PATCH 3/3] hv_netvsc: Fix XDP refcnt for synthetic and VF NICs
Date: Tue, 23 Jun 2020 14:32:15 -0300
Message-Id: <20200623173215.346858-4-marcelo.cerri@canonical.com>
In-Reply-To: <20200623173215.346858-1-marcelo.cerri@canonical.com>
References: <20200623173215.346858-1-marcelo.cerri@canonical.com>

From: Haiyang Zhang

BugLink: https://bugs.launchpad.net/bugs/1877654

The caller of XDP_SETUP_PROG has already incremented the refcnt in
__bpf_prog_get(), so a driver should only increment the refcnt by
num_queues - 1.

To fix the issue, update netvsc_xdp_set() to add the correct number to the
refcnt, and hold an extra refcnt in netvsc_xdp_set()'s other caller,
netvsc_attach(). Do the same in netvsc_vf_setxdp(). Otherwise, every time
the VF is removed and re-added on the host side, the refcnt is decreased by
one, which can cause a page fault when the XDP program is unloaded.

Fixes: 351e1581395f ("hv_netvsc: Add XDP support")
Signed-off-by: Haiyang Zhang
Signed-off-by: David S. Miller
(cherry picked from commit 184367dce4f744bde54377203305ccc8889aa79f)
Signed-off-by: Marcelo Henrique Cerri
---
 drivers/net/hyperv/netvsc_bpf.c | 13 +++++++++++--
 drivers/net/hyperv/netvsc_drv.c |  5 ++++-
 2 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/drivers/net/hyperv/netvsc_bpf.c b/drivers/net/hyperv/netvsc_bpf.c
index 20adfe544294..b86611041db6 100644
--- a/drivers/net/hyperv/netvsc_bpf.c
+++ b/drivers/net/hyperv/netvsc_bpf.c
@@ -120,7 +120,7 @@ int netvsc_xdp_set(struct net_device *dev, struct bpf_prog *prog,
         }
 
         if (prog)
-                bpf_prog_add(prog, nvdev->num_chn);
+                bpf_prog_add(prog, nvdev->num_chn - 1);
 
         for (i = 0; i < nvdev->num_chn; i++)
                 rcu_assign_pointer(nvdev->chan_table[i].bpf_prog, prog);
@@ -136,6 +136,7 @@ int netvsc_vf_setxdp(struct net_device *vf_netdev, struct bpf_prog *prog)
 {
         struct netdev_bpf xdp;
         bpf_op_t ndo_bpf;
+        int ret;
 
         ASSERT_RTNL();
 
@@ -148,10 +149,18 @@ int netvsc_vf_setxdp(struct net_device *vf_netdev, struct bpf_prog *prog)
 
         memset(&xdp, 0, sizeof(xdp));
 
+        if (prog)
+                bpf_prog_inc(prog);
+
         xdp.command = XDP_SETUP_PROG;
         xdp.prog = prog;
 
-        return ndo_bpf(vf_netdev, &xdp);
+        ret = ndo_bpf(vf_netdev, &xdp);
+
+        if (ret && prog)
+                bpf_prog_put(prog);
+
+        return ret;
 }
 
 static u32 netvsc_xdp_query(struct netvsc_device *nvdev)
diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
index a33b088fda24..9aebae66821d 100644
--- a/drivers/net/hyperv/netvsc_drv.c
+++ b/drivers/net/hyperv/netvsc_drv.c
@@ -1055,9 +1055,12 @@ static int netvsc_attach(struct net_device *ndev,
 
         prog = dev_info->bprog;
         if (prog) {
+                bpf_prog_inc(prog);
                 ret = netvsc_xdp_set(ndev, prog, NULL, nvdev);
-                if (ret)
+                if (ret) {
+                        bpf_prog_put(prog);
                         goto err1;
+                }
         }
 
         /* In any case device is now ready */
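
As a reading aid for the refcount accounting above, here is a compressed
sketch of the ownership rules this fix relies on. The example_dev structure
and example_setup_prog() are hypothetical; only bpf_prog_add(),
bpf_prog_inc() and bpf_prog_put() are the real in-kernel helpers used by the
patch.

/* Refcount sketch for XDP_SETUP_PROG (illustrative only).
 *
 * The XDP core (dev_change_xdp_fd() -> __bpf_prog_get()) already takes one
 * reference on behalf of the driver before ndo_bpf() is called, so the
 * driver only adds what it needs beyond that single reference.
 */
static int example_setup_prog(struct example_dev *edev, struct bpf_prog *prog)
{
        int i;

        if (prog)
                /* one ref arrived with the ndo_bpf() call; the remaining
                 * num_queues - 1 queues each need their own
                 */
                bpf_prog_add(prog, edev->num_queues - 1);

        for (i = 0; i < edev->num_queues; i++)
                rcu_assign_pointer(edev->queue[i].bpf_prog, prog);

        return 0;
}

/* A path where the driver itself installs the program on an extra device --
 * as netvsc_vf_setxdp() does for the VF NIC -- does not go through
 * __bpf_prog_get(), so it must bpf_prog_inc() before handing the program to
 * the other driver, and bpf_prog_put() again if that hand-off fails.
 */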