
[net-next,1/1] hv_netvsc: Properly size the vrss queues

Message ID 1432682469-15033-1-git-send-email-kys@microsoft.com
State Changes Requested, archived
Delegated to: David Miller

Commit Message

KY Srinivasan May 26, 2015, 11:21 p.m. UTC
The current algorithm for deciding on the number of VRSS channels is
not optimal: we open the minimum of the number of online CPUs and the
number of VRSS channels the host is offering, so on a 32-VCPU guest we
could potentially open 32 VRSS sub-channels. Experimentation has shown
that it is best to limit the number of VRSS channels to the number of
CPUs within a NUMA node. As part of this work, introduce a module
parameter to control the number of sub-channels we open. The new
algorithm for deciding on the number of sub-channels to open is:
        1) Pick the minimum of what the host is offering and what the
           driver in the guest is specifying via the module parameter.
        2) Pick the minimum of (1) and the number of CPUs in the NUMA
           node the primary channel is bound to.

Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
---
 drivers/net/hyperv/hyperv_net.h   |    1 +
 drivers/net/hyperv/netvsc_drv.c   |    6 ++++++
 drivers/net/hyperv/rndis_filter.c |   16 ++++++++++++++--
 3 files changed, 21 insertions(+), 2 deletions(-)
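
The policy described above is a straightforward three-way minimum. A
minimal stand-alone C sketch of it (all names here are hypothetical;
none are taken from the driver):

#include <stdio.h>

/* Hypothetical illustration of the sub-channel policy in the commit
 * message above; none of these names exist in the driver itself.
 */
static unsigned int pick_num_subchannels(unsigned int host_offered,
					 unsigned int module_param_max,
					 unsigned int numa_node_cpus)
{
	/* Step 1: minimum of the host's offer and the module parameter. */
	unsigned int n = host_offered < module_param_max ?
			 host_offered : module_param_max;

	/* Step 2: cap at the CPU count of the NUMA node the primary
	 * channel is bound to.
	 */
	return n < numa_node_cpus ? n : numa_node_cpus;
}

int main(void)
{
	/* 32-VCPU guest, host offers 16 queues, 8 CPUs in the node. */
	printf("%u\n", pick_num_subchannels(16, 8, 8)); /* prints 8 */
	return 0;
}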

Comments

David Miller May 27, 2015, 6:12 p.m. UTC | #1
From: "K. Y. Srinivasan" <kys@microsoft.com>
Date: Tue, 26 May 2015 16:21:09 -0700

> The current algorithm for deciding on the number of VRSS channels is
> not optimal: we open the minimum of the number of online CPUs and the
> number of VRSS channels the host is offering, so on a 32-VCPU guest we
> could potentially open 32 VRSS sub-channels. Experimentation has shown
> that it is best to limit the number of VRSS channels to the number of
> CPUs within a NUMA node. As part of this work, introduce a module
> parameter to control the number of sub-channels we open. The new
> algorithm for deciding on the number of sub-channels to open is:
>         1) Pick the minimum of what the host is offering and what the
>            driver in the guest is specifying via the module parameter.
>         2) Pick the minimum of (1) and the number of CPUs in the NUMA
>            node the primary channel is bound to.
> 
> Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>

No new module parameters, sorry.

You will have to make it such that this can be changed at run time,
and use a generic run-time mechanism to configure this value that any
driver can use.

I will not accept: "this is not possible" or "this is too hard" as a
reason why you have to use a module parameter.

Settings that cannot be set at run time are painful for people who run
large scale operations where resetting entire systems to change a
setting is completely and utterly impractical.
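
The generic run-time mechanism that fits this case is ethtool's
channels interface: a driver implementing the get_channels/set_channels
ethtool hooks can be resized on a live system, e.g. (interface name
illustrative):

	$ ethtool -l eth0              # show maximum and current channel counts
	$ ethtool -L eth0 combined 4   # resize the queues at run time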
KY Srinivasan May 27, 2015, 6:20 p.m. UTC | #2
> -----Original Message-----
> From: David Miller [mailto:davem@davemloft.net]
> Sent: Wednesday, May 27, 2015 11:13 AM
> To: KY Srinivasan
> Cc: netdev@vger.kernel.org; linux-kernel@vger.kernel.org;
> devel@linuxdriverproject.org; olaf@aepfle.de; apw@canonical.com;
> jasowang@redhat.com
> Subject: Re: [PATCH net-next 1/1] hv_netvsc: Properly size the vrss queues
> 
> From: "K. Y. Srinivasan" <kys@microsoft.com>
> Date: Tue, 26 May 2015 16:21:09 -0700
> 
> > [commit message snipped]
> 
> No new module parameters, sorry.
> 
> You will have to make it such that this can be changed at run time,
> and use a generic run-time mechanism to configure this value that any
> driver can use.
> 
> I will not accept: "this is not possible" or "this is too hard" as a
> reason why you have to use a module parameter.
> 
> Settings that cannot be set at run time are painful for people who run
> large scale operations where resetting entire systems to change a
> setting is completely and utterly impractical.

Agreed; we are working on full ethtool support to address this very issue.
The module parameter I introduced here was just a temporary solution until
the full ethtool support lands. I will get rid of the module parameter and
resubmit this patch.

Regards,

K. Y 
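
For reference, a rough sketch of what such ethtool support could look
like; this is hypothetical code (the to_nvdev() helper in particular is
an assumed accessor), not what was eventually merged:

#include <linux/netdevice.h>
#include <linux/ethtool.h>

/* Hypothetical sketch of ethtool channel reporting for this driver. */
static void netvsc_get_channels(struct net_device *net,
				struct ethtool_channels *channels)
{
	struct netvsc_device *nvdev = to_nvdev(net);	/* assumed helper */

	channels->max_combined = nvdev->max_chn;
	channels->combined_count = nvdev->num_chn;
}

static const struct ethtool_ops netvsc_ethtool_ops = {
	.get_channels = netvsc_get_channels,
	/* .set_channels would renegotiate the sub-channel count with the
	 * host at run time, removing the need for a module parameter.
	 */
};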

Patch

diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
index ddcc7f8..dd45440 100644
--- a/drivers/net/hyperv/hyperv_net.h
+++ b/drivers/net/hyperv/hyperv_net.h
@@ -161,6 +161,7 @@ struct netvsc_device_info {
 	unsigned char mac_adr[ETH_ALEN];
 	bool link_state;	/* 0 - link up, 1 - link down */
 	int  ring_size;
+	u32  max_num_vrss_chns;
 };
 
 enum rndis_device_state {
diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
index d9c88bc..feb94e2 100644
--- a/drivers/net/hyperv/netvsc_drv.c
+++ b/drivers/net/hyperv/netvsc_drv.c
@@ -46,6 +46,10 @@ static int ring_size = 128;
 module_param(ring_size, int, S_IRUGO);
 MODULE_PARM_DESC(ring_size, "Ring buffer size (# of pages)");
 
+static int max_num_vrss_chns = 8;
+module_param(max_num_vrss_chns, int, S_IRUGO);
+MODULE_PARM_DESC(max_num_vrss_chns, "Maximum number of VRSS channels to open by default");
+
 static const u32 default_msg = NETIF_MSG_DRV | NETIF_MSG_PROBE |
 				NETIF_MSG_LINK | NETIF_MSG_IFUP |
 				NETIF_MSG_IFDOWN | NETIF_MSG_RX_ERR |
@@ -755,6 +759,7 @@ static int netvsc_change_mtu(struct net_device *ndev, int mtu)
 	ndevctx->device_ctx = hdev;
 	hv_set_drvdata(hdev, ndev);
 	device_info.ring_size = ring_size;
+	device_info.max_num_vrss_chns = max_num_vrss_chns;
 	rndis_filter_device_add(hdev, &device_info);
 	netif_tx_wake_all_queues(ndev);
 
@@ -975,6 +980,7 @@ static int netvsc_probe(struct hv_device *dev,
 
 	/* Notify the netvsc driver of the new device */
 	device_info.ring_size = ring_size;
+	device_info.max_num_vrss_chns = max_num_vrss_chns;
 	ret = rndis_filter_device_add(dev, &device_info);
 	if (ret != 0) {
 		netdev_err(net, "unable to add netvsc device (ret %d)\n", ret);
diff --git a/drivers/net/hyperv/rndis_filter.c b/drivers/net/hyperv/rndis_filter.c
index 9118cea..006c1b8 100644
--- a/drivers/net/hyperv/rndis_filter.c
+++ b/drivers/net/hyperv/rndis_filter.c
@@ -1013,6 +1013,9 @@ int rndis_filter_device_add(struct hv_device *dev,
 	struct ndis_recv_scale_cap rsscap;
 	u32 rsscap_size = sizeof(struct ndis_recv_scale_cap);
 	u32 mtu, size;
+	u32 num_rss_qs;
+	const struct cpumask *node_cpu_mask;
+	u32 num_possible_rss_qs;
 
 	rndis_device = get_rndis_device();
 	if (!rndis_device)
@@ -1100,9 +1103,18 @@ int rndis_filter_device_add(struct hv_device *dev,
 	if (ret || rsscap.num_recv_que < 2)
 		goto out;
 
+	num_rss_qs = min(device_info->max_num_vrss_chns, rsscap.num_recv_que);
+
 	net_device->max_chn = rsscap.num_recv_que;
-	net_device->num_chn = (num_online_cpus() < rsscap.num_recv_que) ?
-			       num_online_cpus() : rsscap.num_recv_que;
+
+	/*
+	 * Limit the VRSS channels to the number of CPUs in the NUMA node
+	 * the primary channel is currently bound to.
+	 */
+	node_cpu_mask = cpumask_of_node(cpu_to_node(dev->channel->target_cpu));
+	num_possible_rss_qs = cpumask_weight(node_cpu_mask);
+	net_device->num_chn = min(num_possible_rss_qs, num_rss_qs);
+
 	if (net_device->num_chn == 1)
 		goto out;
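
The capping logic added in this last hunk can be restated in isolation
as follows (a hypothetical helper built on the same kernel APIs):

#include <linux/cpumask.h>
#include <linux/topology.h>
#include <linux/kernel.h>

/* Hypothetical restatement of the hunk above: count the CPUs in the
 * NUMA node that owns @cpu and cap @requested at that count.
 */
static u32 cap_queues_to_node(u32 requested, int cpu)
{
	const struct cpumask *mask = cpumask_of_node(cpu_to_node(cpu));

	return min(requested, (u32)cpumask_weight(mask));
}

Note that with the patch as posted, the cap is only adjustable at module
load time (e.g. "modprobe hv_netvsc max_num_vrss_chns=4"), which is
precisely the boot-time-only knob the review above objects to.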