diff mbox

[RFC,1/1] fair.c: Add/Export find_idlest_prefer_cpu API

Message ID 1345232770.16533.234.camel@oc3660625478.ibm.com
State RFC, archived
Delegated to: David Miller

Commit Message

Shirley Ma Aug. 17, 2012, 7:46 p.m. UTC
Add/Export a new API for per-cpu thread model networking device drivers
to choose a preferred idlest cpu within an allowed cpumask.

The receiving CPUs of a networking device are not under cgroup control.
Normally the receiving work will be scheduled on the cpu on which the
interrupts are received. When such a networking device uses a per-cpu
thread model, the cpu chosen to process the packets might not be part
of the cgroup cpusets without such an API.

On NUMA systems, using a preferred cpumask from the same NUMA node
helps reduce expensive memory accesses to/from other NUMA nodes.

KVM per-cpu vhost will be the first user of this API. Other device
drivers which use a per-cpu thread model and have cgroup cpuset
control can use this API later.

Signed-off-by: Shirley Ma <xma@us.ibm.com>
---
 include/linux/sched.h |    2 ++
 kernel/sched/fair.c   |   41 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 43 insertions(+), 0 deletions(-)


thanks
Shirley

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Comments

Peter Zijlstra Aug. 20, 2012, noon UTC | #1
On Fri, 2012-08-17 at 12:46 -0700, Shirley Ma wrote:
> Add/Export a new API for per-cpu thread model networking device drivers
> to choose a preferred idlest cpu within an allowed cpumask.
> 
> The receiving CPUs of a networking device are not under cgroup control.
> Normally the receiving work will be scheduled on the cpu on which the
> interrupts are received. When such a networking device uses a per-cpu
> thread model, the cpu chosen to process the packets might not be part
> of the cgroup cpusets without such an API.
> 
> On NUMA systems, using a preferred cpumask from the same NUMA node
> helps reduce expensive memory accesses to/from other NUMA nodes.
> 
> KVM per-cpu vhost will be the first user of this API. Other device
> drivers which use a per-cpu thread model and have cgroup cpuset
> control can use this API later.

How often will this be called and how do you obtain the cpumasks
provided to the function?
Shirley Ma Aug. 20, 2012, 10:17 p.m. UTC | #2
On Mon, 2012-08-20 at 14:00 +0200, Peter Zijlstra wrote:
> On Fri, 2012-08-17 at 12:46 -0700, Shirley Ma wrote:
> > Add/Export a new API for per-cpu thread model networking device drivers
> > to choose a preferred idlest cpu within an allowed cpumask.
> > 
> > The receiving CPUs of a networking device are not under cgroup control.
> > Normally the receiving work will be scheduled on the cpu on which the
> > interrupts are received. When such a networking device uses a per-cpu
> > thread model, the cpu chosen to process the packets might not be part
> > of the cgroup cpusets without such an API.
> > 
> > On NUMA systems, using a preferred cpumask from the same NUMA node
> > helps reduce expensive memory accesses to/from other NUMA nodes.
> > 
> > KVM per-cpu vhost will be the first user of this API. Other device
> > drivers which use a per-cpu thread model and have cgroup cpuset
> > control can use this API later.
> 
> How often will this be called and how do you obtain the cpumasks
> provided to the function?

It depends. It might be called fairly often if the user keeps changing
the cgroup cpuset, and less often if the cgroup cpuset is stable and
the host scheduler always schedules the work on the same NUMA node.

The preferred cpumask is obtained from the local NUMA node. The allowed
cpumask is obtained from the caller's task allowed cpumask (the cgroup
cpuset).
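
In kernel terms, the two masks described above would presumably be
derived along these lines. This is a hypothetical caller fragment, not
part of the patch: pick_worker_cpu and owner are invented names, while
cpumask_of_node(), numa_node_id() and tsk_cpus_allowed() are existing
kernel helpers of this era:

```c
/* Hypothetical caller fragment (kernel context; illustrative only). */
static int pick_worker_cpu(struct task_struct *owner, int prev_cpu)
{
	/* prefer: the cpus of the local NUMA node */
	const struct cpumask *prefer = cpumask_of_node(numa_node_id());
	/* allowed: the owner task's allowed cpus (cpuset-constrained) */
	struct cpumask *allowed = tsk_cpus_allowed(owner);

	return find_idlest_prefer_cpu((struct cpumask *)prefer, allowed,
				      prev_cpu);
}
```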

Thanks
Shirley

Peter Zijlstra Aug. 21, 2012, 7:07 a.m. UTC | #3
On Mon, 2012-08-20 at 15:17 -0700, Shirley Ma wrote:
> On Mon, 2012-08-20 at 14:00 +0200, Peter Zijlstra wrote:
> > On Fri, 2012-08-17 at 12:46 -0700, Shirley Ma wrote:
> > > Add/Export a new API for per-cpu thread model networking device
> > > drivers to choose a preferred idlest cpu within an allowed cpumask.
> > > 
> > > The receiving CPUs of a networking device are not under cgroup
> > > control. Normally the receiving work will be scheduled on the cpu on
> > > which the interrupts are received. When such a networking device uses
> > > a per-cpu thread model, the cpu chosen to process the packets might
> > > not be part of the cgroup cpusets without such an API.
> > > 
> > > On NUMA systems, using a preferred cpumask from the same NUMA node
> > > helps reduce expensive memory accesses to/from other NUMA nodes.
> > > 
> > > KVM per-cpu vhost will be the first user of this API. Other device
> > > drivers which use a per-cpu thread model and have cgroup cpuset
> > > control can use this API later.
> > 
> > How often will this be called and how do you obtain the cpumasks
> > provided to the function? 
> 
> It depends. It might be called fairly often if the user keeps changing
> the cgroup cpuset, and less often if the cgroup cpuset is stable and
> the host scheduler always schedules the work on the same NUMA node.

This just doesn't make any sense, you're scanning for the least loaded
cpu, this is unrelated to a change in cpuset. So tying the scan
frequency to changes in configuration is just broken.

> The preferred cpumasks are obtained from local numa node.

So why pass it as argument at all? Also, who says the current node is
the right one? It might just be running there temporarily.

>  The allowed
> cpumasks are obtained from caller's task allowed cpumasks (cgroups
> control cpuset).

task->cpus_allowed != cpusets.. Also, since you're using
task->cpus_allowed, pass a task_struct *, not a cpumask.


Shirley Ma Aug. 27, 2012, 7:07 p.m. UTC | #4
On Tue, 2012-08-21 at 09:07 +0200, Peter Zijlstra wrote:
> On Mon, 2012-08-20 at 15:17 -0700, Shirley Ma wrote:
> > On Mon, 2012-08-20 at 14:00 +0200, Peter Zijlstra wrote:
> > > On Fri, 2012-08-17 at 12:46 -0700, Shirley Ma wrote:
> > > > Add/Export a new API for per-cpu thread model networking device
> > > > drivers to choose a preferred idlest cpu within an allowed cpumask.
> > > > 
> > > > The receiving CPUs of a networking device are not under cgroup
> > > > control. Normally the receiving work will be scheduled on the cpu
> > > > on which the interrupts are received. When such a networking device
> > > > uses a per-cpu thread model, the cpu chosen to process the packets
> > > > might not be part of the cgroup cpusets without such an API.
> > > > 
> > > > On NUMA systems, using a preferred cpumask from the same NUMA node
> > > > helps reduce expensive memory accesses to/from other NUMA nodes.
> > > > 
> > > > KVM per-cpu vhost will be the first user of this API. Other device
> > > > drivers which use a per-cpu thread model and have cgroup cpuset
> > > > control can use this API later.
> > > 
> > > How often will this be called and how do you obtain the cpumasks
> > > provided to the function? 
> > 
> > It depends. It might be called fairly often if the user keeps changing
> > the cgroup cpuset, and less often if the cgroup cpuset is stable and
> > the host scheduler always schedules the work on the same NUMA node.
> 
> This just doesn't make any sense, you're scanning for the least loaded
> cpu, this is unrelated to a change in cpuset. So tying the scan
> frequency to changes in configuration is just broken.

Thanks for your review. I am just back from my vacation. 

Why not? The caller knows when the cpuset changes and can pass the right
NUMA node to choose the idlest cpu from. Practically, the VMs don't
change their cgroups, so the configuration will not change frequently.

> > The preferred cpumasks are obtained from local numa node.
> 
> So why pass it as argument at all? Also, who says the current node is
> the right one? It might just be running there temporarily.

The choice of the right node is left to the caller. The intent is to
avoid running the host thread that processes the guest network packets
on the same cpu as the VM, while keeping it on the same NUMA node.

> >  The allowed
> > cpumasks are obtained from caller's task allowed cpumasks (cgroups
> > control cpuset).
> 
> task->cpus_allowed != cpusets.. Also, since you're using
> task->cpus_allowed, pass a task_struct *, not a cpumask. 

Based on the documentation I read, I thought cpus_allowed was the same
as the cgroup cpuset. If not, where is the cgroup cpuset saved?

task->cpus_allowed is what tsk_cpus_allowed(struct task_struct *p)
returns, which is a cpumask_t.

I can change the argument from a cpumask to a task_struct * and call
tsk_cpus_allowed() instead of using task->cpus_allowed directly.

Thanks
Shirley




Patch

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 64d9df5..46cc4a7 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2806,4 +2806,6 @@  static inline unsigned long rlimit_max(unsigned int limit)
 
 #endif /* __KERNEL__ */
 
+extern int find_idlest_prefer_cpu(struct cpumask *prefer,
+				 struct cpumask *allowed, int prev_cpu);
 #endif
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c099cc6..d3da151 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -26,6 +26,7 @@ 
 #include <linux/slab.h>
 #include <linux/profile.h>
 #include <linux/interrupt.h>
+#include <linux/export.h>
 
 #include <trace/events/sched.h>
 
@@ -2809,6 +2810,46 @@  unlock:
 
 	return new_cpu;
 }
+
+/*
+ * This API is used to find the idlest cpu from both the preferred and
+ * allowed cpusets.
+ *
+ * allowed: The allowed cpumask is the caller's task allowed cpuset, which
+ * could come from cgroup cpuset control.
+ * prefer: The prefer cpumask is the caller's preferred cpuset choice, which
+ * could be the cpuset of a NUMA node, for better performance.
+ *
+ * It helps the per-cpu thread model choose a preferred cpu in the allowed
+ * cpuset when the work should not be scheduled on the same cpu on which
+ * it was received, for better performance. For example, network work
+ * should not run on the same cpu on which the interrupt is received.
+ *
+ * If the two cpusets intersect, the cpu is chosen from the intersection;
+ * if there is no intersection, the cpu is chosen from the allowed cpuset.
+ *
+ * prev_cpu helps preserve cache locality when prev_cpu is not busy.
+ */
+int find_idlest_prefer_cpu(struct cpumask *prefer, struct cpumask *allowed,
+			  int prev_cpu)
+{
+	unsigned long load, min_load = ULONG_MAX;
+	int check, i, idlest = -1;
+
+	check = cpumask_intersects(prefer, allowed);
+	/* Traverse only the allowed CPUs */
+	if (check == 0)
+		prefer = allowed;
+	for_each_cpu_and(i, prefer, allowed) {
+		load = weighted_cpuload(i);
+		if (load < min_load || (load == min_load && i == prev_cpu)) {
+			min_load = load;
+			idlest = i;
+		}
+	}
+	return idlest;
+}
+EXPORT_SYMBOL(find_idlest_prefer_cpu);
 #endif /* CONFIG_SMP */
 
 static unsigned long