
[net-next] tuntap: XDP_TX can use native XDP

Message ID 1520997820-8289-1-git-send-email-jasowang@redhat.com
State Accepted, archived
Delegated to: David Miller
Series: [net-next] tuntap: XDP_TX can use native XDP

Commit Message

Jason Wang March 14, 2018, 3:23 a.m. UTC
Now that we have ndo_xdp_xmit, switch to using it instead of the slow
generic XDP TX routine. XDP_TX on TAP gets a ~20% improvement, from
~1.5Mpps to ~1.8Mpps, on a 2.60GHz Core(TM) i7-5600U.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/net/tun.c | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)
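
In short, the XDP_TX arm stops bouncing the packet through an skb and the
stack's generic transmit helper, and instead hands the xdp_buff straight to
tun's native xmit hook, followed by a doorbell. Condensed from the diff
below, with the surrounding error paths elided:

	/* Before: build an skb, then requeue it on the device's own
	 * TX path via the stack's generic XDP transmit helper.
	 */
	skb->dev = tun->dev;
	generic_xdp_tx(skb, xdp_prog);

	/* After: skip skb construction entirely; pass the xdp_buff to
	 * the device's ndo_xdp_xmit implementation, then ring the
	 * doorbell once via ndo_xdp_flush.
	 */
	get_page(alloc_frag->page);
	alloc_frag->offset += buflen;
	if (tun_xdp_xmit(tun->dev, &xdp))
		goto err_redirect;
	tun_xdp_flush(tun->dev);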

Comments

Michael S. Tsirkin March 14, 2018, 3:37 a.m. UTC | #1
On Wed, Mar 14, 2018 at 11:23:40AM +0800, Jason Wang wrote:
> Now that we have ndo_xdp_xmit, switch to using it instead of the slow
> generic XDP TX routine. XDP_TX on TAP gets a ~20% improvement, from
> ~1.5Mpps to ~1.8Mpps, on a 2.60GHz Core(TM) i7-5600U.
> 
> Signed-off-by: Jason Wang <jasowang@redhat.com>

Acked-by: Michael S. Tsirkin <mst@redhat.com>

> ---
>  drivers/net/tun.c | 19 ++++++++-----------
>  1 file changed, 8 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/net/tun.c b/drivers/net/tun.c
> index 475088f..baeafa0 100644
> --- a/drivers/net/tun.c
> +++ b/drivers/net/tun.c
> @@ -1613,7 +1613,6 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
>  	unsigned int delta = 0;
>  	char *buf;
>  	size_t copied;
> -	bool xdp_xmit = false;
>  	int err, pad = TUN_RX_PAD;
>  
>  	rcu_read_lock();
> @@ -1671,8 +1670,14 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
>  			preempt_enable();
>  			return NULL;
>  		case XDP_TX:
> -			xdp_xmit = true;
> -			/* fall through */
> +			get_page(alloc_frag->page);
> +			alloc_frag->offset += buflen;
> +			if (tun_xdp_xmit(tun->dev, &xdp))
> +				goto err_redirect;
> +			tun_xdp_flush(tun->dev);

Why do we have to flush here though?
It might be a good idea to document the reason in a code comment.

> +			rcu_read_unlock();
> +			preempt_enable();
> +			return NULL;
>  		case XDP_PASS:
>  			delta = orig_data - xdp.data;
>  			break;
> @@ -1699,14 +1704,6 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
>  	get_page(alloc_frag->page);
>  	alloc_frag->offset += buflen;
>  
> -	if (xdp_xmit) {
> -		skb->dev = tun->dev;
> -		generic_xdp_tx(skb, xdp_prog);
> -		rcu_read_unlock();
> -		preempt_enable();
> -		return NULL;
> -	}
> -
>  	rcu_read_unlock();
>  	preempt_enable();
>  
> -- 
> 2.7.4
David Miller March 14, 2018, 5:21 p.m. UTC | #2
From: Jason Wang <jasowang@redhat.com>
Date: Wed, 14 Mar 2018 11:23:40 +0800

> Now that we have ndo_xdp_xmit, switch to using it instead of the slow
> generic XDP TX routine. XDP_TX on TAP gets a ~20% improvement, from
> ~1.5Mpps to ~1.8Mpps, on a 2.60GHz Core(TM) i7-5600U.
> 
> Signed-off-by: Jason Wang <jasowang@redhat.com>

Applied, thanks Jason.
Jason Wang March 15, 2018, 8:39 a.m. UTC | #3
On 2018-03-14 11:37, Michael S. Tsirkin wrote:
>>   			return NULL;
>>   		case XDP_TX:
>> -			xdp_xmit = true;
>> -			/* fall through */
>> +			get_page(alloc_frag->page);
>> +			alloc_frag->offset += buflen;
>> +			if (tun_xdp_xmit(tun->dev, &xdp))
>> +				goto err_redirect;
>> +			tun_xdp_flush(tun->dev);
> Why do we have to flush here though?
> It might be a good idea to document the reason in a code comment.
>

ndo_xdp_xmit() does not touch the doorbell, so we need an ndo_xdp_flush()
here. I think this is an assumption baked into the XDP API, so I'm not sure
it's worth mentioning in a code comment.

Thanks
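
To make the split concrete, a minimal sketch of the contract as described
here (xdp_send_one() is an illustrative helper, not a real kernel function,
and the error handling is simplified):

	/* ndo_xdp_xmit() only queues the frame on the device's TX ring;
	 * the consumer is not guaranteed to look at the ring until
	 * ndo_xdp_flush() rings the doorbell. For tun the "doorbell" is
	 * a wakeup of the reader (e.g. vhost) rather than hardware MMIO.
	 */
	static int xdp_send_one(struct net_device *dev, struct xdp_buff *xdp)
	{
		int err;

		err = dev->netdev_ops->ndo_xdp_xmit(dev, xdp);
		if (err)
			return err;	/* ring full; frame was not queued */

		dev->netdev_ops->ndo_xdp_flush(dev);	/* kick the consumer */
		return 0;
	}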
Michael S. Tsirkin March 15, 2018, 1:32 p.m. UTC | #4
On Thu, Mar 15, 2018 at 04:39:25PM +0800, Jason Wang wrote:
> 
> 
> On 2018-03-14 11:37, Michael S. Tsirkin wrote:
> > >   			return NULL;
> > >   		case XDP_TX:
> > > -			xdp_xmit = true;
> > > -			/* fall through */
> > > +			get_page(alloc_frag->page);
> > > +			alloc_frag->offset += buflen;
> > > +			if (tun_xdp_xmit(tun->dev, &xdp))
> > > +				goto err_redirect;
> > > +			tun_xdp_flush(tun->dev);
> > Why do we have to flush here though?
> > It might be a good idea to document the reason in a code comment.
> > 
> 
> ndo_xdp_xmit() does not touch the doorbell, so we need an ndo_xdp_flush()
> here. I think this is an assumption baked into the XDP API, so I'm not sure
> it's worth mentioning in a code comment.
> 
> Thanks

Can't one flush be called after multiple xmit calls?
Jason Wang March 16, 2018, 6:46 a.m. UTC | #5
On 2018-03-15 21:32, Michael S. Tsirkin wrote:
> On Thu, Mar 15, 2018 at 04:39:25PM +0800, Jason Wang wrote:
>>
>> On 2018-03-14 11:37, Michael S. Tsirkin wrote:
>>>>    			return NULL;
>>>>    		case XDP_TX:
>>>> -			xdp_xmit = true;
>>>> -			/* fall through */
>>>> +			get_page(alloc_frag->page);
>>>> +			alloc_frag->offset += buflen;
>>>> +			if (tun_xdp_xmit(tun->dev, &xdp))
>>>> +				goto err_redirect;
>>>> +			tun_xdp_flush(tun->dev);
>>> Why do we have to flush here though?
>>> It might be a good idea to document the reason in a code comment.
>>>
>> ndo_xdp_xmit() does not touch the doorbell, so we need an ndo_xdp_flush()
>> here. I think this is an assumption baked into the XDP API, so I'm not sure
>> it's worth mentioning in a code comment.
>>
>> Thanks
> Can't one flush be called after multiple xmit calls?

We can, and that could be another patch on top. But I want to unify it with
the batching of XDP_REDIRECT, e.g. by letting vhost submit more than one
packet through msg.control in sendmsg().

Thanks
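
Sketched, the batched variant Michael asks about looks roughly like this
(xdp_send_burst() is hypothetical; the real unification Jason mentions
would sit in vhost's sendmsg() path):

	/* Amortize the doorbell: queue a burst of frames, then flush
	 * once for the whole batch.
	 */
	static int xdp_send_burst(struct net_device *dev,
				  struct xdp_buff **frames, int n)
	{
		int i, sent = 0;

		for (i = 0; i < n; i++) {
			if (dev->netdev_ops->ndo_xdp_xmit(dev, frames[i]))
				break;	/* ring full: stop queuing */
			sent++;
		}
		if (sent)
			dev->netdev_ops->ndo_xdp_flush(dev);	/* one doorbell */
		return sent;
	}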

Patch

diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index 475088f..baeafa0 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -1613,7 +1613,6 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
 	unsigned int delta = 0;
 	char *buf;
 	size_t copied;
-	bool xdp_xmit = false;
 	int err, pad = TUN_RX_PAD;
 
 	rcu_read_lock();
@@ -1671,8 +1670,14 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
 			preempt_enable();
 			return NULL;
 		case XDP_TX:
-			xdp_xmit = true;
-			/* fall through */
+			get_page(alloc_frag->page);
+			alloc_frag->offset += buflen;
+			if (tun_xdp_xmit(tun->dev, &xdp))
+				goto err_redirect;
+			tun_xdp_flush(tun->dev);
+			rcu_read_unlock();
+			preempt_enable();
+			return NULL;
 		case XDP_PASS:
 			delta = orig_data - xdp.data;
 			break;
@@ -1699,14 +1704,6 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
 	get_page(alloc_frag->page);
 	alloc_frag->offset += buflen;
 
-	if (xdp_xmit) {
-		skb->dev = tun->dev;
-		generic_xdp_tx(skb, xdp_prog);
-		rcu_read_unlock();
-		preempt_enable();
-		return NULL;
-	}
-
 	rcu_read_unlock();
 	preempt_enable();