[1/1] Drivers: net-next: hyperv: Increase the size of the sendbuf region

Message ID 1406770549-29982-1-git-send-email-kys@microsoft.com
State Changes Requested, archived
Delegated to: David Miller
Headers show

Commit Message

KY Srinivasan July 31, 2014, 1:35 a.m. UTC
For forwarding scenarios, it will be useful to allocate larger
sendbuf. Make the necessary adjustments to permit this.

Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
---
 drivers/net/hyperv/hyperv_net.h |    2 +-
 drivers/net/hyperv/netvsc.c     |    7 ++-----
 2 files changed, 3 insertions(+), 6 deletions(-)

Comments

David Miller Aug. 1, 2014, 4:59 a.m. UTC | #1
From: "K. Y. Srinivasan" <kys@microsoft.com>
Date: Wed, 30 Jul 2014 18:35:49 -0700

> For forwarding scenarios, it will be useful to allocate larger
> sendbuf. Make the necessary adjustments to permit this.
> 
> Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>

This needs more information.

You're increasing the size by 16 times, 1MB --> 16MB, thus less
cache locality.

You're also now using vmalloc() memory, thus more TLB misses and
thrashing.

This must have a negative impact on performance, and you have to
test for that and quantify it when making a change as serious as
this one.

You also haven't gone into detail as to why forwarding scenarios
require more buffer space than, say, thousands of local sockets
sending bulk TCP data.

I'm not applying this, it needs a lot more work.
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
KY Srinivasan Aug. 1, 2014, 5:51 p.m. UTC | #2
> -----Original Message-----
> From: David Miller [mailto:davem@davemloft.net]
> Sent: Thursday, July 31, 2014 9:59 PM
> To: KY Srinivasan
> Cc: netdev@vger.kernel.org; linux-kernel@vger.kernel.org;
> devel@linuxdriverproject.org; olaf@aepfle.de; apw@canonical.com;
> jasowang@redhat.com
> Subject: Re: [PATCH 1/1] Drivers: net-next: hyperv: Increase the size of the
> sendbuf region
> 
> From: "K. Y. Srinivasan" <kys@microsoft.com>
> Date: Wed, 30 Jul 2014 18:35:49 -0700
> 
> > For forwarding scenarios, it will be useful to allocate larger
> > sendbuf. Make the necessary adjustments to permit this.
> >
> > Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
> 
> This needs more information.
> 
> You're increasing the size by 16 times, 1MB --> 16MB, thus less cache locality.
> 
> You're also now using vmalloc() memory, thus more TLB misses and
> thrashing.
> 
> This must have a negative impact on performance, and you have to test for
> that and quantify it when making a change as serious as this one.
> 
> You also haven't gone into detail as to why forwarding scenarios require
> more buffer space than, say, thousands of local sockets sending bulk TCP
> data.

David,

Intel did some benchmarking on our network throughput when Linux on Hyper-V was used as a gateway.
This fix gave us almost 1 Gbps of additional throughput on top of the roughly 5 Gbps base throughput we had
prior to increasing the sendbuf size. The sendbuf mechanism is a copy-based transport which, for small packets,
is clearly more efficient than the copy-free page-flipping mechanism. In the forwarding scenario we deal
only with MTU-sized packets, and increasing the size of the sendbuf area gave us the additional performance.

For what it is worth, I am told that Windows guests on Hyper-V use a similar sendbuf size as well.

The exact value of sendbuf is, I think, less important than the fact that it needs to be larger than what Linux can
allocate as physically contiguous memory. Hence the switch to allocating via vmalloc(). As you know, we currently
allocate a 16MB receive buffer and use vmalloc there as well. Also, the low-level channel code has already been
modified to deal with physically discontiguous memory in the ringbuffer setup.

Again, based on experimentation Intel did, there was some improvement in throughput as the sendbuf size was
increased up to 16MB, and no effect on throughput beyond 16MB. Thus I chose 16MB here.

Increasing the sendbuf value makes a material difference in small packet handling. Let me know what I should do to
make this patch acceptable.

Regards,

K. Y

 
> 
> I'm not applying this, it needs a lot more work.
David Miller Aug. 2, 2014, 6:14 a.m. UTC | #3
Don't explain things to me in this thread.

Instead, tell the whole world and everyone who would ever see this
commit, in the commit log message.
KY Srinivasan Aug. 2, 2014, 4:18 p.m. UTC | #4
> -----Original Message-----
> From: David Miller [mailto:davem@davemloft.net]
> Sent: Friday, August 1, 2014 11:14 PM
> To: KY Srinivasan
> Cc: netdev@vger.kernel.org; linux-kernel@vger.kernel.org;
> devel@linuxdriverproject.org; olaf@aepfle.de; apw@canonical.com;
> jasowang@redhat.com
> Subject: Re: [PATCH 1/1] Drivers: net-next: hyperv: Increase the size of the
> sendbuf region
> 
> 
> Don't explain things to me in this thread.
> 
> Instead, tell the whole world and everyone who would ever see this commit,
> in the commit log message.
Will do. Before I re-send the patch with the explanation, I wanted to make sure I fully
understood your concerns.

Regards,

K. Y
Patch

diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
index 6cc37c1..40ba1ef 100644
--- a/drivers/net/hyperv/hyperv_net.h
+++ b/drivers/net/hyperv/hyperv_net.h
@@ -584,7 +584,7 @@  struct nvsp_message {
 
 #define NETVSC_RECEIVE_BUFFER_SIZE		(1024*1024*16)	/* 16MB */
 #define NETVSC_RECEIVE_BUFFER_SIZE_LEGACY	(1024*1024*15)  /* 15MB */
-#define NETVSC_SEND_BUFFER_SIZE			(1024 * 1024)   /* 1MB */
+#define NETVSC_SEND_BUFFER_SIZE			(1024 * 1024 * 16)   /* 16MB */
 #define NETVSC_INVALID_INDEX			-1
 
 
diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
index c041f63..c76178e 100644
--- a/drivers/net/hyperv/netvsc.c
+++ b/drivers/net/hyperv/netvsc.c
@@ -193,8 +193,7 @@  static int netvsc_destroy_buf(struct netvsc_device *net_device)
 	}
 	if (net_device->send_buf) {
 		/* Free up the receive buffer */
-		free_pages((unsigned long)net_device->send_buf,
-			   get_order(net_device->send_buf_size));
+		vfree(net_device->send_buf);
 		net_device->send_buf = NULL;
 	}
 	kfree(net_device->send_section_map);
@@ -303,9 +302,7 @@  static int netvsc_init_buf(struct hv_device *device)
 
 	/* Now setup the send buffer.
 	 */
-	net_device->send_buf =
-		(void *)__get_free_pages(GFP_KERNEL|__GFP_ZERO,
-					 get_order(net_device->send_buf_size));
+	net_device->send_buf = vzalloc(net_device->send_buf_size);
 	if (!net_device->send_buf) {
 		netdev_err(ndev, "unable to allocate send "
 			   "buffer of size %d\n", net_device->send_buf_size);