Message ID: 20170914163107.8404-1-sthemmin@microsoft.com
State: Accepted, archived
Delegated to: David Miller
Series: [net] netvsc: increase default receive buffer size
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Thu, 14 Sep 2017 09:31:07 -0700

> The default receive buffer size was reduced by recent change
> to a value which was appropriate for 10G and Windows Server 2016.
> But the value is too small for full performance with 40G on Azure.
> Increase the default back to maximum supported by host.
>
> Fixes: 8b5327975ae1 ("netvsc: allow controlling send/recv buffer size")
> Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>

What other side effects are there to making this buffer so large?

Just curious...
On Thu, 14 Sep 2017 10:02:03 -0700 (PDT), David Miller <davem@davemloft.net> wrote:

> From: Stephen Hemminger <stephen@networkplumber.org>
> Date: Thu, 14 Sep 2017 09:31:07 -0700
>
> > The default receive buffer size was reduced by recent change
> > to a value which was appropriate for 10G and Windows Server 2016.
> > But the value is too small for full performance with 40G on Azure.
> > Increase the default back to maximum supported by host.
> >
> > Fixes: 8b5327975ae1 ("netvsc: allow controlling send/recv buffer size")
> > Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
>
> What other side effects are there to making this buffer so large?
>
> Just curious...

It increases latency and exercises TCP's bufferbloat avoidance. The problem was that the smaller buffer caused regressions in UDP benchmarks on 40G Azure. One could argue that this is not a reasonable benchmark, but people run it.

Apparently Windows already did the same thing and uses an even bigger buffer. Longer term there will be more internal discussion with different teams about what the receive latency and buffering need to be. Also, the issue goes away as accelerated networking (SR-IOV) becomes more widely used.
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Thu, 14 Sep 2017 09:31:07 -0700

> The default receive buffer size was reduced by recent change
> to a value which was appropriate for 10G and Windows Server 2016.
> But the value is too small for full performance with 40G on Azure.
> Increase the default back to maximum supported by host.
>
> Fixes: 8b5327975ae1 ("netvsc: allow controlling send/recv buffer size")
> Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>

Applied.
diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
index c538a4f15f3b..d4902ee5f260 100644
--- a/drivers/net/hyperv/netvsc_drv.c
+++ b/drivers/net/hyperv/netvsc_drv.c
@@ -49,7 +49,7 @@
 #define NETVSC_MIN_TX_SECTIONS	10
 #define NETVSC_DEFAULT_TX	192	/* ~1M */
 #define NETVSC_MIN_RX_SECTIONS	10	/* ~64K */
-#define NETVSC_DEFAULT_RX	2048	/* ~4M */
+#define NETVSC_DEFAULT_RX	10485	/* Max ~16M */
 
 #define LINKCHANGE_INT			(2 * HZ)
 #define VF_TAKEOVER_INT		(HZ / 10)
The default receive buffer size was reduced by a recent change to a value which was appropriate for 10G and Windows Server 2016. But that value is too small for full performance with 40G on Azure. Increase the default back to the maximum supported by the host.

Fixes: 8b5327975ae1 ("netvsc: allow controlling send/recv buffer size")
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
---
 drivers/net/hyperv/netvsc_drv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)