Message ID | 1467049290-32359-4-git-send-email-xinhui.pan@linux.vnet.ibm.com
---|---
State | Superseded
On Mon, Jun 27, 2016 at 01:41:30PM -0400, Pan Xinhui wrote:
> @@ -118,8 +123,17 @@ bool osq_lock(struct optimistic_spin_queue *lock)
>  	while (!READ_ONCE(node->locked)) {
>  		/*
>  		 * If we need to reschedule bail... so we can block.
> +		 * An over-committed guest with more vCPUs than pCPUs
> +		 * might fall into this loop and cause a huge overload.
> +		 * This is because vCPU A (prev) holds the osq lock and yields
> +		 * out while vCPU B (node) waits for ->locked to be set; IOW,
> +		 * it waits until vCPU A runs and unlocks the osq lock. Such
> +		 * spinning is meaningless, so use vcpu_is_preempted() to detect
> +		 * this case. If the arch does not support the vcpu preempted
> +		 * check, vcpu_is_preempted() is a macro defined as false.

Or you could mention lock holder preemption and everybody will know
what you're talking about.

>  		 */
> -		if (need_resched())
> +		if (need_resched() ||
> +		    vcpu_is_preempted(node_cpu(node->prev)))

Did you really need that linebreak?

>  			goto unqueue;
> 
>  		cpu_relax_lowlatency();
> -- 
> 2.4.11
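The fallback the quoted comment refers to is worth seeing concretely. Below is a minimal sketch of how an arch-independent default can be wired up so the new test folds away at compile time; the exact header and placement used by the series are not shown here, so treat this as illustrative only:

/*
 * Illustrative default for architectures without a vCPU preemption
 * check: the test becomes a compile-time constant, and the patched
 * condition reduces back to plain need_resched().
 */
#ifndef vcpu_is_preempted
#define vcpu_is_preempted(cpu)	false
#endif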
diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
index 05a3785..9e86f0b 100644
--- a/kernel/locking/osq_lock.c
+++ b/kernel/locking/osq_lock.c
@@ -21,6 +21,11 @@ static inline int encode_cpu(int cpu_nr)
 	return cpu_nr + 1;
 }
 
+static inline int node_cpu(struct optimistic_spin_node *node)
+{
+	return node->cpu - 1;
+}
+
 static inline struct optimistic_spin_node *decode_cpu(int encoded_cpu_val)
 {
 	int cpu_nr = encoded_cpu_val - 1;
@@ -118,8 +123,17 @@ bool osq_lock(struct optimistic_spin_queue *lock)
 	while (!READ_ONCE(node->locked)) {
 		/*
 		 * If we need to reschedule bail... so we can block.
+		 * An over-committed guest with more vCPUs than pCPUs
+		 * might fall into this loop and cause a huge overload.
+		 * This is because vCPU A (prev) holds the osq lock and yields
+		 * out while vCPU B (node) waits for ->locked to be set; IOW,
+		 * it waits until vCPU A runs and unlocks the osq lock. Such
+		 * spinning is meaningless, so use vcpu_is_preempted() to detect
+		 * this case. If the arch does not support the vcpu preempted
+		 * check, vcpu_is_preempted() is a macro defined as false.
 		 */
-		if (need_resched())
+		if (need_resched() ||
+		    vcpu_is_preempted(node_cpu(node->prev)))
 			goto unqueue;
 
 		cpu_relax_lowlatency();
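To make the control flow concrete outside the kernel, here is a small userspace model of the patched wait loop. Every name in it (node_locked, prev_vcpu_preempted, need_resched_stub, spin_for_handoff) is a hypothetical stand-in for the kernel code above, not kernel API:

#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool node_locked;         /* stands in for node->locked      */
static atomic_bool prev_vcpu_preempted; /* state vcpu_is_preempted() reads */

/* Placeholder for need_resched(). */
static bool need_resched_stub(void)
{
	return false;
}

/*
 * Returns true once the previous lock holder hands the lock over,
 * false when the waiter gives up (the "goto unqueue" path above).
 */
static bool spin_for_handoff(void)
{
	while (!atomic_load_explicit(&node_locked, memory_order_acquire)) {
		/*
		 * Bail if we should reschedule, or if the vCPU that must
		 * eventually set node_locked is itself not running;
		 * spinning on it only burns pCPU time.
		 */
		if (need_resched_stub() ||
		    atomic_load_explicit(&prev_vcpu_preempted,
					 memory_order_relaxed))
			return false;
		/* cpu_relax_lowlatency() would go here in the kernel. */
	}
	return true;
}

int main(void)
{
	/* Pretend the previous holder released and handed the lock over. */
	atomic_store(&node_locked, true);
	return spin_for_handoff() ? 0 : 1;
}

A single-threaded driver is enough to exercise both exits: flip prev_vcpu_preempted to true instead, and spin_for_handoff() bails out on its first pass, which is exactly the behavior the patch adds.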
An over-committed guest with more vCPUs than pCPUs suffers heavy
overhead in osq_lock(). This is because vCPU A holds the osq lock and
yields out, while vCPU B waits for the per-CPU node->locked to be set;
IOW, vCPU B waits for vCPU A to run and unlock the osq lock. Such
spinning is meaningless, so let's use vcpu_is_preempted() to detect
whether we should stop spinning.

test case:
perf record -a perf bench sched messaging -g 400 -p && perf report

before patch:
18.09%  sched-messaging  [kernel.vmlinux]  [k] osq_lock
12.28%  sched-messaging  [kernel.vmlinux]  [k] rwsem_spin_on_owner
 5.27%  sched-messaging  [kernel.vmlinux]  [k] mutex_unlock
 3.89%  sched-messaging  [kernel.vmlinux]  [k] wait_consider_task
 3.64%  sched-messaging  [kernel.vmlinux]  [k] _raw_write_lock_irq
 3.41%  sched-messaging  [kernel.vmlinux]  [k] mutex_spin_on_owner.is
 2.49%  sched-messaging  [kernel.vmlinux]  [k] system_call

after patch:
20.68%  sched-messaging  [kernel.vmlinux]  [k] mutex_spin_on_owner
 8.45%  sched-messaging  [kernel.vmlinux]  [k] mutex_unlock
 4.12%  sched-messaging  [kernel.vmlinux]  [k] system_call
 3.01%  sched-messaging  [kernel.vmlinux]  [k] system_call_common
 2.83%  sched-messaging  [kernel.vmlinux]  [k] copypage_power7
 2.64%  sched-messaging  [kernel.vmlinux]  [k] rwsem_spin_on_owner
 2.00%  sched-messaging  [kernel.vmlinux]  [k] osq_lock

Signed-off-by: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
---
 kernel/locking/osq_lock.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)