
[bpf-next,V2,2/8] i40e: implement flush flag for ndo_xdp_xmit

Message ID 152775719291.24817.3098409990616007642.stgit@firesoul
State Accepted, archived
Delegated to: BPF Maintainers
Series bpf/xdp: add flags argument to ndo_xdp_xmit and flag flush operation

Commit Message

Jesper Dangaard Brouer May 31, 2018, 8:59 a.m. UTC
When passed the XDP_XMIT_FLUSH flag, i40e_xdp_xmit now performs the
same kind of ring tail update as in i40e_xdp_flush.  The advantage is
that all the necessary checks have already been performed and xdp_ring
can be updated, instead of having to repeat the exact same steps/checks
in i40e_xdp_flush.

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
 drivers/net/ethernet/intel/i40e/i40e_txrx.c |   10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)
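
For context, the "ring tail update" referenced above boils down to i40e's
doorbell helper in i40e_txrx.c, gated by the flag definitions added earlier
in this series. The sketch below is a reconstruction and may differ in minor
details from the exact in-tree code:

/* Flags for the ndo_xdp_xmit flags argument, added earlier in the series
 * (reconstruction; exact names and comments may differ slightly).
 */
#define XDP_XMIT_FLUSH          (1U << 0)       /* doorbell the device */
#define XDP_XMIT_FLAGS_MASK     XDP_XMIT_FLUSH

/* i40e's existing tail-update helper, which i40e_xdp_xmit() can now call
 * directly when XDP_XMIT_FLUSH is set, instead of deferring the doorbell
 * to a separate i40e_xdp_flush() invocation.
 */
static inline void i40e_xdp_ring_update_tail(struct i40e_ring *xdp_ring)
{
        /* Force memory writes to complete before letting h/w
         * know there are new descriptors to fetch.
         */
        wmb();
        writel(xdp_ring->next_to_use, xdp_ring->tail);
}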

Comments

Daniel Borkmann June 4, 2018, 1:11 p.m. UTC | #1
On 05/31/2018 10:59 AM, Jesper Dangaard Brouer wrote:
> When passed the XDP_XMIT_FLUSH flag i40e_xdp_xmit now performs the
> same kind of ring tail update as in i40e_xdp_flush.  The advantage is
> that all the necessary checks have been performed and xdp_ring can be
> updated, instead of having to perform the exact same steps/checks in
> i40e_xdp_flush
> 
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> ---
>  drivers/net/ethernet/intel/i40e/i40e_txrx.c |   10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> index c0451d6e0790..5f01e4ce9c92 100644
> --- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> +++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> @@ -3676,6 +3676,7 @@ int i40e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
>  	struct i40e_netdev_priv *np = netdev_priv(dev);
>  	unsigned int queue_index = smp_processor_id();
>  	struct i40e_vsi *vsi = np->vsi;
> +	struct i40e_ring *xdp_ring;
>  	int drops = 0;
>  	int i;
>  
> @@ -3685,20 +3686,25 @@ int i40e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
>  	if (!i40e_enabled_xdp_vsi(vsi) || queue_index >= vsi->num_queue_pairs)
>  		return -ENXIO;
>  
> -	if (unlikely(flags & ~XDP_XMIT_FLAGS_NONE))
> +	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
>  		return -EINVAL;
>  
> +	xdp_ring = vsi->xdp_rings[queue_index];
> +
>  	for (i = 0; i < n; i++) {
>  		struct xdp_frame *xdpf = frames[i];
>  		int err;
>  
> -		err = i40e_xmit_xdp_ring(xdpf, vsi->xdp_rings[queue_index]);
> +		err = i40e_xmit_xdp_ring(xdpf, xdp_ring);
>  		if (err != I40E_XDP_TX) {
>  			xdp_return_frame_rx_napi(xdpf);
>  			drops++;
>  		}
>  	}
>  
> +	if (unlikely(flags & XDP_XMIT_FLUSH))
> +		i40e_xdp_ring_update_tail(xdp_ring);

In addition to Alexei's feedback, I'd remove the unlikely() on the flush from here and from
the ixgbe one, like you did on the rest of the drivers in the series, and just let the CPU decide.

For the invalid flags case it's totally fine, and in fact you could probably do this for all
three cases where you bail out at the beginning of i40e_xdp_xmit() and won't be able to
send anything anyway:

        if (test_bit(__I40E_VSI_DOWN, vsi->state))
                return -ENETDOWN;

        if (!i40e_enabled_xdp_vsi(vsi) || queue_index >= vsi->num_queue_pairs)
                return -ENXIO;

        if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
                return -EINVAL;

Thanks,
Daniel
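
Put together, the two suggestions (drop the unlikely() on the flush check,
wrap all three early bail-outs in unlikely()) would give i40e_xdp_xmit()
roughly the following shape. This is an illustrative sketch of the proposal,
not necessarily the code that was eventually committed:

int i40e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
                  u32 flags)
{
        struct i40e_netdev_priv *np = netdev_priv(dev);
        unsigned int queue_index = smp_processor_id();
        struct i40e_vsi *vsi = np->vsi;
        struct i40e_ring *xdp_ring;
        int drops = 0;
        int i;

        /* All three early bail-outs are cold paths; unlikely() is fine here. */
        if (unlikely(test_bit(__I40E_VSI_DOWN, vsi->state)))
                return -ENETDOWN;

        if (unlikely(!i40e_enabled_xdp_vsi(vsi) ||
                     queue_index >= vsi->num_queue_pairs))
                return -ENXIO;

        if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
                return -EINVAL;

        xdp_ring = vsi->xdp_rings[queue_index];

        for (i = 0; i < n; i++) {
                struct xdp_frame *xdpf = frames[i];
                int err;

                err = i40e_xmit_xdp_ring(xdpf, xdp_ring);
                if (err != I40E_XDP_TX) {
                        xdp_return_frame_rx_napi(xdpf);
                        drops++;
                }
        }

        /* No unlikely() on the flush; let the CPU's branch predictor decide. */
        if (flags & XDP_XMIT_FLUSH)
                i40e_xdp_ring_update_tail(xdp_ring);

        return n - drops;
}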

Patch

diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index c0451d6e0790..5f01e4ce9c92 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -3676,6 +3676,7 @@  int i40e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
 	struct i40e_netdev_priv *np = netdev_priv(dev);
 	unsigned int queue_index = smp_processor_id();
 	struct i40e_vsi *vsi = np->vsi;
+	struct i40e_ring *xdp_ring;
 	int drops = 0;
 	int i;
 
@@ -3685,20 +3686,25 @@  int i40e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
 	if (!i40e_enabled_xdp_vsi(vsi) || queue_index >= vsi->num_queue_pairs)
 		return -ENXIO;
 
-	if (unlikely(flags & ~XDP_XMIT_FLAGS_NONE))
+	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
 		return -EINVAL;
 
+	xdp_ring = vsi->xdp_rings[queue_index];
+
 	for (i = 0; i < n; i++) {
 		struct xdp_frame *xdpf = frames[i];
 		int err;
 
-		err = i40e_xmit_xdp_ring(xdpf, vsi->xdp_rings[queue_index]);
+		err = i40e_xmit_xdp_ring(xdpf, xdp_ring);
 		if (err != I40E_XDP_TX) {
 			xdp_return_frame_rx_napi(xdpf);
 			drops++;
 		}
 	}
 
+	if (unlikely(flags & XDP_XMIT_FLUSH))
+		i40e_xdp_ring_update_tail(xdp_ring);
+
 	return n - drops;
 }
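
For completeness, the caller's side of the new API looks roughly like the
sketch below. The wrapper name is hypothetical and only for illustration;
the real in-tree caller is the XDP redirect (devmap) code. The point is that
a single ndo_xdp_xmit() call can now transmit a bulk of frames and, via
XDP_XMIT_FLUSH, ring the doorbell in the same call, rather than requiring a
separate flush operation afterwards:

#include <linux/netdevice.h>
#include <net/xdp.h>

/* Hypothetical helper, for illustration only: send a bulk of XDP frames
 * through a driver's ndo_xdp_xmit and request an immediate tail update
 * by setting XDP_XMIT_FLUSH.
 */
static int xdp_bulk_xmit_and_flush(struct net_device *dev,
                                   struct xdp_frame **frames, int n)
{
        if (!dev->netdev_ops->ndo_xdp_xmit)
                return -EOPNOTSUPP;

        /* Returns the number of frames accepted for transmission; frames
         * the driver could not send have already been freed by it.
         */
        return dev->netdev_ops->ndo_xdp_xmit(dev, n, frames, XDP_XMIT_FLUSH);
}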