From patchwork Mon Nov 2 12:48:35 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: William Breathitt Gray
X-Patchwork-Id: 1392229
From: William Breathitt Gray
To: kernel-team@lists.ubuntu.com
Subject: [SRU][B:linux-azure-4.15][PATCH 19/40] virtio_net: convert to use generic xdp_frame and xdp_return_frame API
Date: Mon, 2 Nov 2020 07:48:35 -0500
Message-Id: <20201102124856.4659-20-william.gray@canonical.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20201102124856.4659-1-william.gray@canonical.com>
References: <20201102124856.4659-1-william.gray@canonical.com>
MIME-Version: 1.0
List-Id: Kernel team discussions

From: Jesper Dangaard Brouer

BugLink: https://bugs.launchpad.net/bugs/1877654

The virtio_net driver assumes XDP frames are always released based on
page refcnt (via put_page). Thus, it only queues the XDP data pointer
address and uses virt_to_head_page() to retrieve the struct page.

Use the XDP return API to get away from such assumptions. Instead,
queue an xdp_frame, which allows us to use the xdp_return_frame API
when releasing the frame.

V8: Avoid endianness issues (found by kbuild test robot)
V9: Change __virtnet_xdp_xmit from bool to int return value (found by
    Dan Carpenter)

Signed-off-by: Jesper Dangaard Brouer
Signed-off-by: David S. Miller
(backported from commit cac320c850efb25480cd0f71383b84ec61c0e138)
[ vilhelmgray: context adjustment ]
Signed-off-by: William Breathitt Gray
---
 drivers/net/virtio_net.c | 54 +++++++++++++++++++++-------------------
 1 file changed, 29 insertions(+), 25 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 80aad47c8c97..280bab31a2d3 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -398,38 +398,48 @@ static void virtnet_xdp_flush(struct net_device *dev)
 	virtqueue_kick(sq->vq);
 }
 
-static bool __virtnet_xdp_xmit(struct virtnet_info *vi,
-			       struct xdp_buff *xdp)
+static int __virtnet_xdp_xmit(struct virtnet_info *vi,
+			      struct xdp_buff *xdp)
 {
 	struct virtio_net_hdr_mrg_rxbuf *hdr;
-	unsigned int len;
+	struct xdp_frame *xdpf, *xdpf_sent;
 	struct send_queue *sq;
+	unsigned int len;
 	unsigned int qp;
-	void *xdp_sent;
 	int err;
 
 	qp = vi->curr_queue_pairs - vi->xdp_queue_pairs + smp_processor_id();
 	sq = &vi->sq[qp];
 
 	/* Free up any pending old buffers before queueing new ones. */
-	while ((xdp_sent = virtqueue_get_buf(sq->vq, &len)) != NULL) {
-		struct page *sent_page = virt_to_head_page(xdp_sent);
+	while ((xdpf_sent = virtqueue_get_buf(sq->vq, &len)) != NULL)
+		xdp_return_frame(xdpf_sent->data, &xdpf_sent->mem);
 
-		put_page(sent_page);
-	}
+	xdpf = convert_to_xdp_frame(xdp);
+	if (unlikely(!xdpf))
+		return -EOVERFLOW;
+
+	/* virtqueue want to use data area in-front of packet */
+	if (unlikely(xdpf->metasize > 0))
+		return -EOPNOTSUPP;
 
-	xdp->data -= vi->hdr_len;
+	if (unlikely(xdpf->headroom < vi->hdr_len))
+		return -EOVERFLOW;
+
+	/* Make room for virtqueue hdr (also change xdpf->headroom?) */
+	xdpf->data -= vi->hdr_len;
 	/* Zero header and leave csum up to XDP layers */
-	hdr = xdp->data;
+	hdr = xdpf->data;
 	memset(hdr, 0, vi->hdr_len);
+	xdpf->len += vi->hdr_len;
 
-	sg_init_one(sq->sg, xdp->data, xdp->data_end - xdp->data);
+	sg_init_one(sq->sg, xdpf->data, xdpf->len);
 
-	err = virtqueue_add_outbuf(sq->vq, sq->sg, 1, xdp->data, GFP_ATOMIC);
+	err = virtqueue_add_outbuf(sq->vq, sq->sg, 1, xdpf, GFP_ATOMIC);
 	if (unlikely(err))
-		return false; /* Caller handle free/refcnt */
+		return -ENOSPC; /* Caller handle free/refcnt */
 
-	return true;
+	return 0;
 }
 
 static int virtnet_xdp_xmit(struct net_device *dev, struct xdp_buff *xdp)
@@ -437,7 +447,6 @@ static int virtnet_xdp_xmit(struct net_device *dev, struct xdp_buff *xdp)
 	struct virtnet_info *vi = netdev_priv(dev);
 	struct receive_queue *rq = vi->rq;
 	struct bpf_prog *xdp_prog;
-	bool sent;
 
 	/* Only allow ndo_xdp_xmit if XDP is loaded on dev, as this
 	 * indicate XDP resources have been successfully allocated.
@@ -446,10 +455,7 @@ static int virtnet_xdp_xmit(struct net_device *dev, struct xdp_buff *xdp)
 	if (!xdp_prog)
 		return -ENXIO;
 
-	sent = __virtnet_xdp_xmit(vi, xdp);
-	if (!sent)
-		return -ENOSPC;
-	return 0;
+	return __virtnet_xdp_xmit(vi, xdp);
 }
 
 static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
@@ -537,7 +543,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
 	struct page *page = virt_to_head_page(buf);
 	unsigned int delta = 0;
 	struct page *xdp_page;
-	bool sent;
 	int err;
 
 	len -= vi->hdr_len;
@@ -588,8 +593,8 @@ static struct sk_buff *receive_small(struct net_device *dev,
 			delta = orig_data - xdp.data;
 			break;
 		case XDP_TX:
-			sent = __virtnet_xdp_xmit(vi, &xdp);
-			if (unlikely(!sent)) {
+			err = __virtnet_xdp_xmit(vi, &xdp);
+			if (unlikely(err)) {
 				trace_xdp_exception(vi->dev, xdp_prog, act);
 				goto err_xdp;
 			}
@@ -674,7 +679,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 	unsigned int truesize;
 	unsigned int headroom = mergeable_ctx_to_headroom(ctx);
 	int err;
-	bool sent;
 
 	head_skb = NULL;
 
@@ -743,8 +747,8 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 			}
 			break;
 		case XDP_TX:
-			sent = __virtnet_xdp_xmit(vi, &xdp);
-			if (unlikely(!sent)) {
+			err = __virtnet_xdp_xmit(vi, &xdp);
+			if (unlikely(err)) {
 				trace_xdp_exception(vi->dev, xdp_prog, act);
 				if (unlikely(xdp_page != page))
 					put_page(xdp_page);
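
As background for reviewers, the pattern the commit message describes can be
sketched roughly as follows. This is a minimal illustration only, not part of
the patch, written against the same-era API visible in the diff above
(convert_to_xdp_frame() and the two-argument form of xdp_return_frame());
hypothetical_tx_enqueue() is a made-up stand-in for the driver's real
TX-queue primitive (virtqueue_add_outbuf() in virtio_net), not a real kernel
function.

#include <linux/errno.h>
#include <net/xdp.h>

/* Made-up stand-in for the driver's TX-queue primitive
 * (virtqueue_add_outbuf() in virtio_net); not a real kernel API. */
int hypothetical_tx_enqueue(struct xdp_frame *xdpf);

/* Transmit side: collapse the xdp_buff into an xdp_frame stored in
 * the packet's own headroom, then queue the frame pointer itself so
 * the completion path can recover the return-API metadata later. */
static int sketch_xdp_tx(struct xdp_buff *xdp)
{
	struct xdp_frame *xdpf = convert_to_xdp_frame(xdp);

	if (unlikely(!xdpf))	/* headroom too small to hold the frame */
		return -EOVERFLOW;

	return hypothetical_tx_enqueue(xdpf);
}

/* Completion side: release via the memory model recorded in
 * xdpf->mem rather than hard-coding put_page() on the head page. */
static void sketch_xdp_tx_complete(struct xdp_frame *xdpf_sent)
{
	xdp_return_frame(xdpf_sent->data, &xdpf_sent->mem);
}

The point of the conversion is that the release path is now selected by
xdp_return_frame() from the xdp_mem_info carried inside the frame, rather
than hard-wired to put_page(), which is what allows memory models other than
page refcounting without further driver changes.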