[net] net: mvneta: fix dma sync size in mvneta_run_xdp

Message ID c73de2bf79cc3d2f6d4f8c8864ff6a64198db2c8.1578996931.git.lorenzo@kernel.org
State Accepted
Delegated to: David Miller
Series: [net] net: mvneta: fix dma sync size in mvneta_run_xdp

Commit Message

Lorenzo Bianconi Jan. 14, 2020, 10:21 a.m. UTC
The page pool API will start syncing (if requested) from
page->dma_addr + pool->p.offset. Fix the dma sync length in
mvneta_run_xdp since we do not need to account for the xdp headroom.

Fixes: 07e13edbb6a6 ("net: mvneta: get rid of huge dma sync in mvneta_rx_refill")
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/marvell/mvneta.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)
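
For context, when a pool is created with PP_FLAG_DMA_SYNC_DEV the page_pool core performs the device sync itself, starting at the configured offset. Below is a minimal sketch of that path, a simplified reading of the page_pool internals of this era rather than a verbatim copy of the upstream code:

	/* Simplified sketch: how page_pool syncs a returned page for the
	 * device when PP_FLAG_DMA_SYNC_DEV is set. The sync window begins at
	 * pool->p.offset, so the dma_sync_size a driver passes to
	 * __page_pool_put_page() only needs to cover the bytes written after
	 * that offset, i.e. the received frame, not the xdp headroom.
	 */
	static void page_pool_dma_sync_for_device(struct page_pool *pool,
						  struct page *page,
						  unsigned int dma_sync_size)
	{
		dma_sync_size = min(dma_sync_size, pool->p.max_len);
		dma_sync_single_range_for_device(pool->p.dev, page->dma_addr,
						 pool->p.offset, dma_sync_size,
						 pool->p.dma_dir);
	}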

Comments

Jesper Dangaard Brouer Jan. 14, 2020, 10:43 a.m. UTC | #1
On Tue, 14 Jan 2020 11:21:16 +0100
Lorenzo Bianconi <lorenzo@kernel.org> wrote:

> The page pool API will start syncing (if requested) from
> page->dma_addr + pool->p.offset. Fix the dma sync length in
> mvneta_run_xdp since we do not need to account for the xdp headroom.
> 
> Fixes: 07e13edbb6a6 ("net: mvneta: get rid of huge dma sync in mvneta_rx_refill")
> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>

Looks correct to me.

Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
David Miller Jan. 15, 2020, 2:50 a.m. UTC | #2
From: Lorenzo Bianconi <lorenzo@kernel.org>
Date: Tue, 14 Jan 2020 11:21:16 +0100

> The page pool API will start syncing (if requested) from
> page->dma_addr + pool->p.offset. Fix the dma sync length in
> mvneta_run_xdp since we do not need to account for the xdp headroom.
> 
> Fixes: 07e13edbb6a6 ("net: mvneta: get rid of huge dma sync in mvneta_rx_refill")
> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>

Applied, thanks.

Patch

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 71a872d46bc4..67ad8b8b127d 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2081,7 +2081,11 @@  static int
 mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	       struct bpf_prog *prog, struct xdp_buff *xdp)
 {
-	u32 ret, act = bpf_prog_run_xdp(prog, xdp);
+	unsigned int len;
+	u32 ret, act;
+
+	len = xdp->data_end - xdp->data_hard_start - pp->rx_offset_correction;
+	act = bpf_prog_run_xdp(prog, xdp);
 
 	switch (act) {
 	case XDP_PASS:
@@ -2094,9 +2098,8 @@  mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 		if (err) {
 			ret = MVNETA_XDP_DROPPED;
 			__page_pool_put_page(rxq->page_pool,
-					virt_to_head_page(xdp->data),
-					xdp->data_end - xdp->data_hard_start,
-					true);
+					     virt_to_head_page(xdp->data),
+					     len, true);
 		} else {
 			ret = MVNETA_XDP_REDIR;
 		}
@@ -2106,9 +2109,8 @@  mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 		ret = mvneta_xdp_xmit_back(pp, xdp);
 		if (ret != MVNETA_XDP_TX)
 			__page_pool_put_page(rxq->page_pool,
-					virt_to_head_page(xdp->data),
-					xdp->data_end - xdp->data_hard_start,
-					true);
+					     virt_to_head_page(xdp->data),
+					     len, true);
 		break;
 	default:
 		bpf_warn_invalid_xdp_action(act);
@@ -2119,8 +2121,7 @@  mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	case XDP_DROP:
 		__page_pool_put_page(rxq->page_pool,
 				     virt_to_head_page(xdp->data),
-				     xdp->data_end - xdp->data_hard_start,
-				     true);
+				     len, true);
 		ret = MVNETA_XDP_DROPPED;
 		break;
 	}
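
A short note on the new length computation: len is taken before bpf_prog_run_xdp(), since the program may move xdp->data and xdp->data_end, and rx_offset_correction (the headroom reserved in front of the frame, which the driver also hands to the page pool as its offset) is subtracted because the pool's own sync already starts at that offset. An annotated restatement of the computation from the hunk above:

	/* Bytes that may need a device DMA sync when the page is returned to
	 * the pool: everything from the start of the frame (data_hard_start +
	 * rx_offset_correction, where the pool's sync begins) up to data_end.
	 */
	len = xdp->data_end - xdp->data_hard_start - pp->rx_offset_correction;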