{"id":1375804,"url":"http://patchwork.ozlabs.org/api/patches/1375804/?format=json","web_url":"http://patchwork.ozlabs.org/project/netdev/patch/291c3bd6daa3529fab6b07a93585d1d71ae4f280.1601648734.git.lorenzo@kernel.org/","project":{"id":7,"url":"http://patchwork.ozlabs.org/api/projects/7/?format=json","name":"Linux network development","link_name":"netdev","list_id":"netdev.vger.kernel.org","list_email":"netdev@vger.kernel.org","web_url":null,"scm_url":null,"webscm_url":null,"list_archive_url":"","list_archive_url_format":"","commit_url_format":""},"msgid":"<291c3bd6daa3529fab6b07a93585d1d71ae4f280.1601648734.git.lorenzo@kernel.org>","list_archive_url":null,"date":"2020-10-02T14:42:01","name":"[v4,bpf-next,03/13] net: mvneta: update mb bit before passing the xdp buffer to eBPF layer","commit_ref":null,"pull_url":null,"state":"changes-requested","archived":false,"hash":"ee5566aa25cdaa732c5a68a0b6d86fc9256c89d7","submitter":{"id":76007,"url":"http://patchwork.ozlabs.org/api/people/76007/?format=json","name":"Lorenzo Bianconi","email":"lorenzo@kernel.org"},"delegate":{"id":77147,"url":"http://patchwork.ozlabs.org/api/users/77147/?format=json","username":"bpf","first_name":"BPF","last_name":"Maintainers","email":"bpf@iogearbox.net"},"mbox":"http://patchwork.ozlabs.org/project/netdev/patch/291c3bd6daa3529fab6b07a93585d1d71ae4f280.1601648734.git.lorenzo@kernel.org/mbox/","series":[{"id":205635,"url":"http://patchwork.ozlabs.org/api/series/205635/?format=json","web_url":"http://patchwork.ozlabs.org/project/netdev/list/?series=205635","date":"2020-10-02T14:41:58","name":"mvneta: introduce XDP multi-buffer support","version":4,"mbox":"http://patchwork.ozlabs.org/series/205635/mbox/"}],"comments":"http://patchwork.ozlabs.org/api/patches/1375804/comments/","check":"pending","checks":"http://patchwork.ozlabs.org/api/patches/1375804/checks/","tags":{},"related":[],"headers":{"Return-Path":"<netdev-owner@vger.kernel.org>","X-Original-To":"patchwork-incoming-netdev@ozlabs.org","Delivered-To":"patchwork-incoming-netdev@ozlabs.org","Authentication-Results":["ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org\n (client-ip=23.128.96.18; helo=vger.kernel.org;\n envelope-from=netdev-owner@vger.kernel.org; receiver=<UNKNOWN>)","ozlabs.org;\n dmarc=pass (p=none dis=none) header.from=kernel.org","ozlabs.org;\n\tdkim=pass (1024-bit key;\n unprotected) header.d=kernel.org header.i=@kernel.org header.a=rsa-sha256\n header.s=default header.b=QqY2jLgi;\n\tdkim-atps=neutral"],"Received":["from vger.kernel.org (vger.kernel.org [23.128.96.18])\n\tby ozlabs.org (Postfix) with ESMTP id 4C2t4s0vftz9s1t\n\tfor <patchwork-incoming-netdev@ozlabs.org>;\n Sat,  3 Oct 2020 00:42:37 +1000 (AEST)","(majordomo@vger.kernel.org) by vger.kernel.org via listexpand\n        id S2388178AbgJBOmg (ORCPT\n        <rfc822;patchwork-incoming-netdev@ozlabs.org>);\n        Fri, 2 Oct 2020 10:42:36 -0400","from mail.kernel.org ([198.145.29.99]:60782 \"EHLO mail.kernel.org\"\n        rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP\n        id S1726017AbgJBOme (ORCPT <rfc822;netdev@vger.kernel.org>);\n        Fri, 2 Oct 2020 10:42:34 -0400","from lore-desk.redhat.com (unknown [176.207.245.61])\n        (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))\n        (No client certificate requested)\n        by mail.kernel.org (Postfix) with ESMTPSA id A11FF2074B;\n        Fri,  2 Oct 2020 14:42:31 +0000 (UTC)"],"DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;\n        s=default; t=1601649754;\n        bh=eKPJVzu6E5/S5wENfNGlIxA6X+/VOWn+imBWPRph7Mk=;\n        h=From:To:Cc:Subject:Date:In-Reply-To:References:From;\n        b=QqY2jLgiGp6kuq9XCyi0Ib8uCdIxBzyfH38XERRsxHDTATMaxQQSE+HNzaAxxWwaC\n         st8hLTznLEXcKqFQO7mDz9d/u048cFoDhX5Sp8NPOy/iOu8HFFSVjzeKfae+lo9MLF\n         AU9zO34FqBPNqsmHEukI6f7kQKgMuWLiXm9AyfQY=","From":"Lorenzo Bianconi <lorenzo@kernel.org>","To":"bpf@vger.kernel.org, netdev@vger.kernel.org","Cc":"davem@davemloft.net, kuba@kernel.org, ast@kernel.org,\n        daniel@iogearbox.net, shayagr@amazon.com, sameehj@amazon.com,\n        john.fastabend@gmail.com, dsahern@kernel.org, brouer@redhat.com,\n        lorenzo.bianconi@redhat.com, echaudro@redhat.com","Subject":"[PATCH v4 bpf-next 03/13] net: mvneta: update mb bit before passing\n the xdp buffer to eBPF layer","Date":"Fri,  2 Oct 2020 16:42:01 +0200","Message-Id":"\n <291c3bd6daa3529fab6b07a93585d1d71ae4f280.1601648734.git.lorenzo@kernel.org>","X-Mailer":"git-send-email 2.26.2","In-Reply-To":"<cover.1601648734.git.lorenzo@kernel.org>","References":"<cover.1601648734.git.lorenzo@kernel.org>","MIME-Version":"1.0","Content-Transfer-Encoding":"8bit","Precedence":"bulk","List-ID":"<netdev.vger.kernel.org>","X-Mailing-List":"netdev@vger.kernel.org"},"content":"Update multi-buffer bit (mb) in xdp_buff to notify XDP/eBPF layer and\nXDP remote drivers if this is a \"non-linear\" XDP buffer. Access\nskb_shared_info only if xdp_buff mb is set\n\nSigned-off-by: Lorenzo Bianconi <lorenzo@kernel.org>\n---\n drivers/net/ethernet/marvell/mvneta.c | 42 +++++++++++++++++----------\n 1 file changed, 26 insertions(+), 16 deletions(-)","diff":"diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c\nindex d095718355d3..a431e8478297 100644\n--- a/drivers/net/ethernet/marvell/mvneta.c\n+++ b/drivers/net/ethernet/marvell/mvneta.c\n@@ -2027,12 +2027,17 @@ static void\n mvneta_xdp_put_buff(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,\n \t\t    struct xdp_buff *xdp, int sync_len, bool napi)\n {\n-\tstruct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);\n+\tstruct skb_shared_info *sinfo;\n \tint i;\n \n+\tif (likely(!xdp->mb))\n+\t\tgoto out;\n+\n+\tsinfo = xdp_get_shared_info_from_buff(xdp);\n \tfor (i = 0; i < sinfo->nr_frags; i++)\n \t\tpage_pool_put_full_page(rxq->page_pool,\n \t\t\t\t\tskb_frag_page(&sinfo->frags[i]), napi);\n+out:\n \tpage_pool_put_page(rxq->page_pool, virt_to_head_page(xdp->data),\n \t\t\t   sync_len, napi);\n }\n@@ -2234,7 +2239,6 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,\n \tint data_len = -MVNETA_MH_SIZE, len;\n \tstruct net_device *dev = pp->dev;\n \tenum dma_data_direction dma_dir;\n-\tstruct skb_shared_info *sinfo;\n \n \tif (*size > MVNETA_MAX_RX_BUF_SIZE) {\n \t\tlen = MVNETA_MAX_RX_BUF_SIZE;\n@@ -2259,9 +2263,6 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,\n \txdp->data = data + pp->rx_offset_correction + MVNETA_MH_SIZE;\n \txdp->data_end = xdp->data + data_len;\n \txdp_set_data_meta_invalid(xdp);\n-\n-\tsinfo = xdp_get_shared_info_from_buff(xdp);\n-\tsinfo->nr_frags = 0;\n }\n \n static void\n@@ -2272,9 +2273,9 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,\n \t\t\t    struct page *page)\n {\n \tstruct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);\n+\tint data_len, len, nfrags = xdp->mb ? sinfo->nr_frags : 0;\n \tstruct net_device *dev = pp->dev;\n \tenum dma_data_direction dma_dir;\n-\tint data_len, len;\n \n \tif (*size > MVNETA_MAX_RX_BUF_SIZE) {\n \t\tlen = MVNETA_MAX_RX_BUF_SIZE;\n@@ -2288,17 +2289,21 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,\n \t\t\t\trx_desc->buf_phys_addr,\n \t\t\t\tlen, dma_dir);\n \n-\tif (data_len > 0 && sinfo->nr_frags < MAX_SKB_FRAGS) {\n-\t\tskb_frag_t *frag = &sinfo->frags[sinfo->nr_frags];\n+\tif (data_len > 0 && nfrags < MAX_SKB_FRAGS) {\n+\t\tskb_frag_t *frag = &sinfo->frags[nfrags];\n \n \t\tskb_frag_off_set(frag, pp->rx_offset_correction);\n \t\tskb_frag_size_set(frag, data_len);\n \t\t__skb_frag_set_page(frag, page);\n-\t\tsinfo->nr_frags++;\n-\n-\t\trx_desc->buf_phys_addr = 0;\n+\t\tnfrags++;\n+\t} else {\n+\t\tpage_pool_put_full_page(rxq->page_pool, page, true);\n \t}\n+\n+\trx_desc->buf_phys_addr = 0;\n+\tsinfo->nr_frags = nfrags;\n \t*size -= len;\n+\txdp->mb = 1;\n }\n \n static struct sk_buff *\n@@ -2306,7 +2311,7 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,\n \t\t      struct xdp_buff *xdp, u32 desc_status)\n {\n \tstruct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);\n-\tint i, num_frags = sinfo->nr_frags;\n+\tint i, num_frags = xdp->mb ? sinfo->nr_frags : 0;\n \tstruct sk_buff *skb;\n \n \tskb = build_skb(xdp->data_hard_start, PAGE_SIZE);\n@@ -2319,6 +2324,9 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,\n \tskb_put(skb, xdp->data_end - xdp->data);\n \tmvneta_rx_csum(pp, desc_status, skb);\n \n+\tif (likely(!xdp->mb))\n+\t\treturn skb;\n+\n \tfor (i = 0; i < num_frags; i++) {\n \t\tskb_frag_t *frag = &sinfo->frags[i];\n \n@@ -2338,13 +2346,14 @@ static int mvneta_rx_swbm(struct napi_struct *napi,\n {\n \tint rx_proc = 0, rx_todo, refill, size = 0;\n \tstruct net_device *dev = pp->dev;\n-\tstruct xdp_buff xdp_buf = {\n-\t\t.frame_sz = PAGE_SIZE,\n-\t\t.rxq = &rxq->xdp_rxq,\n-\t};\n \tstruct mvneta_stats ps = {};\n \tstruct bpf_prog *xdp_prog;\n \tu32 desc_status, frame_sz;\n+\tstruct xdp_buff xdp_buf;\n+\n+\txdp_buf.data_hard_start = NULL;\n+\txdp_buf.frame_sz = PAGE_SIZE;\n+\txdp_buf.rxq = &rxq->xdp_rxq;\n \n \t/* Get number of received packets */\n \trx_todo = mvneta_rxq_busy_desc_num_get(pp, rxq);\n@@ -2377,6 +2386,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,\n \t\t\tframe_sz = size - ETH_FCS_LEN;\n \t\t\tdesc_status = rx_status;\n \n+\t\t\txdp_buf.mb = 0;\n \t\t\tmvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf,\n \t\t\t\t\t     &size, page);\n \t\t} else {\n","prefixes":["v4","bpf-next","03/13"]}