Message ID | 1477642287-24104-3-git-send-email-xinhui.pan@linux.vnet.ibm.com (mailing list archive)
---|---
State | Superseded
On Fri, 28 Oct 2016, Pan Xinhui wrote:

>	/*
>	 * If we need to reschedule bail... so we can block.
>+	 * Use vcpu_is_preempted to detech lock holder preemption issue
	                           ^^ detect
>+	 * and break.

Could you please remove the rest of this comment? It's just noise to point out
that vcpu_is_preempted is a macro defined by arch/false. This is standard
protocol in the kernel.

Same goes for all locks you change with this.

Thanks,
Davidlohr

>	 * vcpu_is_preempted is a macro defined by false if
>+	 * arch does not support vcpu preempted check,
>	 */
>-	if (need_resched())
>+	if (need_resched() || vcpu_is_preempted(node_cpu(node->prev)))
>		goto unqueue;
>
>	cpu_relax_lowlatency();
>--
>2.4.11
>
On 2016/10/30 00:52, Davidlohr Bueso wrote:
> On Fri, 28 Oct 2016, Pan Xinhui wrote:
>>	/*
>>	 * If we need to reschedule bail... so we can block.
>>+	 * Use vcpu_is_preempted to detech lock holder preemption issue
>	                           ^^ detect

ok. thanks for pointing it out.

>>+	 * and break.
>
> Could you please remove the rest of this comment? It's just noise to point out
> that vcpu_is_preempted is a macro defined by arch/false. This is standard
> protocol in the kernel.
>

fair enough.

> Same goes for all locks you change with this.
>
> Thanks,
> Davidlohr
>
>>	 * vcpu_is_preempted is a macro defined by false if
>>+	 * arch does not support vcpu preempted check,
>>	 */
>>-	if (need_resched())
>>+	if (need_resched() || vcpu_is_preempted(node_cpu(node->prev)))
>>		goto unqueue;
>>
>>	cpu_relax_lowlatency();
>>--
>>2.4.11
>>
diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
index 05a3785..39d1385 100644
--- a/kernel/locking/osq_lock.c
+++ b/kernel/locking/osq_lock.c
@@ -21,6 +21,11 @@ static inline int encode_cpu(int cpu_nr)
 	return cpu_nr + 1;
 }
 
+static inline int node_cpu(struct optimistic_spin_node *node)
+{
+	return node->cpu - 1;
+}
+
 static inline struct optimistic_spin_node *decode_cpu(int encoded_cpu_val)
 {
 	int cpu_nr = encoded_cpu_val - 1;
@@ -118,8 +123,11 @@ bool osq_lock(struct optimistic_spin_queue *lock)
 	while (!READ_ONCE(node->locked)) {
 		/*
 		 * If we need to reschedule bail... so we can block.
+		 * Use vcpu_is_preempted to detech lock holder preemption issue
+		 * and break. vcpu_is_preempted is a macro defined by false if
+		 * arch does not support vcpu preempted check,
 		 */
-		if (need_resched())
+		if (need_resched() || vcpu_is_preempted(node_cpu(node->prev)))
 			goto unqueue;
 
 		cpu_relax_lowlatency();