[18/18] netvsc: call netif_receive_skb

Message ID 20170124210615.18628-19-sthemmin@microsoft.com
State Accepted, archived
Delegated to: David Miller

Commit Message

Stephen Hemminger Jan. 24, 2017, 9:06 p.m. UTC
To improve performance, netvsc can call network stack directly and
avoid the local backlog queue. This is safe since incoming packets are
handled in softirq context already because the receive function
callback is called from a tasklet.

Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
---
 drivers/net/hyperv/netvsc_drv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
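
For illustration only, here is a minimal sketch (hypothetical names, not the netvsc code) of the calling-context argument in the commit message: a tasklet handler runs in softirq context with bottom halves disabled, which is the context netif_receive_skb() expects, whereas netif_rx() merely queues the skb on the per-CPU backlog and so can also be used from hard-IRQ context.

#include <linux/interrupt.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical helper: copies the next completed packet out of the
 * device ring into a fresh skb, or returns NULL when the ring is empty.
 */
static struct sk_buff *demo_build_rx_skb(struct net_device *ndev);

static void demo_rx_tasklet(unsigned long data)
{
	struct net_device *ndev = (struct net_device *)data;
	struct sk_buff *skb;

	/* Tasklets run from softirq, so the stack can be entered directly
	 * with netif_receive_skb(); there is no need to defer the packet
	 * through the per-CPU backlog via netif_rx().
	 */
	while ((skb = demo_build_rx_skb(ndev)) != NULL)
		netif_receive_skb(skb);
}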

Comments

Eric Dumazet Jan. 24, 2017, 10:39 p.m. UTC | #1
On Tue, 2017-01-24 at 13:06 -0800, Stephen Hemminger wrote:
> To improve performance, netvsc can call network stack directly and
> avoid the local backlog queue. This is safe since incoming packets are
> handled in softirq context already because the receive function
> callback is called from a tasklet.

Is this tasklet implementing a limit or something ?

netif_rx() queues packets to the backlog, which is processed later by
net_rx_action() like other NAPI, with limit of 64 packets per round.

Calling netif_receive_skb() means you can escape this ability to fairly
distribute the cpu cycles among multiple NAPI.

I do not see range_cnt being capped in netvsc_receive()
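
To make the fairness point concrete: packets queued with netif_rx() are drained later by the backlog NAPI under the normal per-poll weight, while the patched driver pushes every pending packet through netif_receive_skb() before returning. A rough sketch of a weight-bounded poll, for comparison (hypothetical names, not kernel or netvsc code):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical helper: pops the next completed packet from the device
 * ring, or returns NULL when the ring is empty.
 */
static struct sk_buff *demo_next_skb(struct net_device *ndev);

/* A NAPI-style poll stops after 'budget' packets (64 with the default
 * weight) so that net_rx_action() can round-robin to the other NAPI
 * instances pending on this CPU.
 */
static int demo_poll(struct napi_struct *napi, int budget)
{
	struct net_device *ndev = napi->dev;
	struct sk_buff *skb;
	int work_done = 0;

	while (work_done < budget && (skb = demo_next_skb(ndev)) != NULL) {
		netif_receive_skb(skb);
		work_done++;
	}

	/* Only leave polled mode once the ring has been fully drained. */
	if (work_done < budget)
		napi_complete(napi);

	return work_done;
}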

Stephen Hemminger Jan. 24, 2017, 11 p.m. UTC | #2
On Tue, 24 Jan 2017 14:39:19 -0800
Eric Dumazet <eric.dumazet@gmail.com> wrote:

> On Tue, 2017-01-24 at 13:06 -0800, Stephen Hemminger wrote:
> > To improve performance, netvsc can call network stack directly and
> > avoid the local backlog queue. This is safe since incoming packets are
> > handled in softirq context already because the receive function
> > callback is called from a tasklet.  
> 
> Is this tasklet implementing a limit or something ?

The ring only holds a fixed amount of data so there is a limit but
it is quite large.

> 
> netif_rx() queues packets to the backlog, which is processed later by
> net_rx_action() like other NAPI, with limit of 64 packets per round.

Since netvsc_receive has to copy all incoming data it is a bottleneck
unto itself. By the time net_rx_action is invoked the cache is stale.

> 
> Calling netif_receive_skb() means you can escape this ability to fairly
> distribute the cpu cycles among multiple NAPI.
> 
> I do not see range_cnt being capped in netvsc_receive()

There is no cap. NAPI is coming and will help.

Eric Dumazet Jan. 24, 2017, 11:07 p.m. UTC | #3
On Tue, 2017-01-24 at 15:00 -0800, Stephen Hemminger wrote:
> On Tue, 24 Jan 2017 14:39:19 -0800
> Eric Dumazet <eric.dumazet@gmail.com> wrote:
> 
> > On Tue, 2017-01-24 at 13:06 -0800, Stephen Hemminger wrote:
> > > To improve performance, netvsc can call network stack directly and
> > > avoid the local backlog queue. This is safe since incoming packets are
> > > handled in softirq context already because the receive function
> > > callback is called from a tasklet.  
> > 
> > Is this tasklet implementing a limit or something ?
> 
> The ring only holds a fixed amount of data so there is a limit but
> it is quite large.
> 
> > 
> > netif_rx() queues packets to the backlog, which is processed later by
> > net_rx_action() like other NAPI, with limit of 64 packets per round.
> 
> Since netvsc_receive has to copy all incoming data it is a bottleneck
> unto itself. By the time net_rx_action is invoked the cache is stale.
> 
> > 
> > Calling netif_receive_skb() means you can escape this ability to fairly
> > distribute the cpu cycles among multiple NAPI.
> > 
> > I do not see range_cnt being capped in netvsc_receive()
> 
> There is no cap. NAPI is coming and will help.

This was my point really.

If you call netif_receive_skb() in a loop, it is not NAPI anymore,
and it is a potential latency spike point, while blocking BH and not
servicing other queues depending on this cpu.

(While softirq processing of NAPI (including netif_rx()) can be scheduled
to ksoftirqd.)

Not a big deal, I only want to point out that netif_receive_skb() can be
dangerous if used in a loop.
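
The NAPI conversion referred to above is not part of this series; as a rough sketch (hypothetical names, not the eventual netvsc code), the shape of such a conversion is to register a per-channel NAPI instance and have the host-signalled callback merely schedule it, reusing a budget-bounded poll like the one sketched earlier:

#include <linux/netdevice.h>

struct demo_channel {
	struct net_device *ndev;
	struct napi_struct napi;
};

/* Budget-bounded poll handler, e.g. the demo_poll() sketched above. */
static int demo_poll(struct napi_struct *napi, int budget);

static void demo_channel_open(struct demo_channel *chan, struct net_device *ndev)
{
	chan->ndev = ndev;
	/* NAPI_POLL_WEIGHT is 64, the same per-round limit the backlog uses. */
	netif_napi_add(ndev, &chan->napi, demo_poll, NAPI_POLL_WEIGHT);
	napi_enable(&chan->napi);
}

/* Host-to-guest signal: schedule the poll instead of draining the ring
 * inline from the callback.
 */
static void demo_channel_callback(void *context)
{
	struct demo_channel *chan = context;

	napi_schedule(&chan->napi);
}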

Patch

diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
index fe0df72532a3..72b0c1f7496e 100644
--- a/drivers/net/hyperv/netvsc_drv.c
+++ b/drivers/net/hyperv/netvsc_drv.c
@@ -695,7 +695,7 @@  int netvsc_recv_callback(struct net_device *net,
 	 * is done.
 	 * TODO - use NAPI?
 	 */
-	netif_rx(skb);
+	netif_receive_skb(skb);
 	rcu_read_unlock();
 
 	return 0;