
[v2,2/4] powerpc/spinlock: support vcpu preempted check

Message ID 1467124991-13164-3-git-send-email-xinhui.pan@linux.vnet.ibm.com (mailing list archive)
State Superseded
Headers show

Commit Message

xinhui June 28, 2016, 2:43 p.m. UTC
This is to fix some lock holder preemption issues. Some other lock
implementations do a spin loop before acquiring the lock itself. The kernel
currently has an interface, bool vcpu_is_preempted(int cpu), which takes a cpu
as its parameter and returns true if that cpu is preempted. The kernel can then
break out of spin loops based on the return value of vcpu_is_preempted.

As the kernel already uses this interface, let's support it.

Only pSeries needs to support it. However, powerNV is built into the same
kernel image as pSeries, so in principle we need to return false if we are
running as powerNV. Another fact is that lppaca->yield_count stays zero on
powerNV, so we can simply skip the machine type check.

Suggested-by: Boqun Feng <boqun.feng@gmail.com>
Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/spinlock.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

Comments

Wanpeng Li July 5, 2016, 9:57 a.m. UTC | #1
Hi Xinhui,
2016-06-28 22:43 GMT+08:00 Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>:
> This is to fix some lock holder preemption issues. Some other locks
> implementation do a spin loop before acquiring the lock itself. Currently
> kernel has an interface of bool vcpu_is_preempted(int cpu). It take the cpu
> as parameter and return true if the cpu is preempted. Then kernel can break
> the spin loops upon on the retval of vcpu_is_preempted.
>
> As kernel has used this interface, So lets support it.
>
> Only pSeries need supoort it. And the fact is powerNV are built into same
> kernel image with pSeries. So we need return false if we are runnig as
> powerNV. The another fact is that lppaca->yiled_count keeps zero on
> powerNV. So we can just skip the machine type.

Can lock holder vCPU preemption be detected by pSeries hardware or by a
paravirt method?

Regards,
Wanpeng Li
xinhui July 6, 2016, 4:58 a.m. UTC | #2
Hi, Wanpeng

On 2016-07-05 17:57, Wanpeng Li wrote:
> Hi Xinhui,
> 2016-06-28 22:43 GMT+08:00 Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>:
>> This is to fix some lock holder preemption issues. Some other locks
>> implementation do a spin loop before acquiring the lock itself. Currently
>> kernel has an interface of bool vcpu_is_preempted(int cpu). It take the cpu
>> as parameter and return true if the cpu is preempted. Then kernel can break
>> the spin loops upon on the retval of vcpu_is_preempted.
>>
>> As kernel has used this interface, So lets support it.
>>
>> Only pSeries need supoort it. And the fact is powerNV are built into same
>> kernel image with pSeries. So we need return false if we are runnig as
>> powerNV. The another fact is that lppaca->yiled_count keeps zero on
>> powerNV. So we can just skip the machine type.
>
> Lock holder vCPU preemption can be detected by hardware pSeries or
> paravirt method?
>
There is one shared struct between the kernel and powerVM/KVM, and we read the yield_count of this struct to detect whether a vcpu is running or not.
So it's easy for ppc to implement such an interface. Note that yield_count is set by powerVM/KVM,
and only pSeries can run a guest for now. :)

I also reviewed the x86 related code; it looks like we need to add one hypercall to get such vcpu preemption info?

thanks
xinhui
> Regards,
> Wanpeng Li
>
Wanpeng Li July 6, 2016, 6:46 a.m. UTC | #3
Cc Paolo, kvm ml
2016-07-06 12:58 GMT+08:00 xinhui <xinhui.pan@linux.vnet.ibm.com>:
> Hi, wanpeng
>
> On 2016-07-05 17:57, Wanpeng Li wrote:
>>
>> Hi Xinhui,
>> 2016-06-28 22:43 GMT+08:00 Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>:
>>>
>>> This is to fix some lock holder preemption issues. Some other locks
>>> implementation do a spin loop before acquiring the lock itself. Currently
>>> kernel has an interface of bool vcpu_is_preempted(int cpu). It take the
>>> cpu
>>> as parameter and return true if the cpu is preempted. Then kernel can
>>> break
>>> the spin loops upon on the retval of vcpu_is_preempted.
>>>
>>> As kernel has used this interface, So lets support it.
>>>
>>> Only pSeries need supoort it. And the fact is powerNV are built into same
>>> kernel image with pSeries. So we need return false if we are runnig as
>>> powerNV. The another fact is that lppaca->yiled_count keeps zero on
>>> powerNV. So we can just skip the machine type.
>>
>>
>> Lock holder vCPU preemption can be detected by hardware pSeries or
>> paravirt method?
>>
> There is one shard struct between kernel and powerVM/KVM. And we read the
> yield_count of this struct to detect if one vcpu is running or not.
> SO it's easy for ppc to implement such interface. Note that yield_count is
> set by powerVM/KVM.
> and only pSeries can run a guest for now. :)
>
> I also review x86 related code, looks like we need add one hyer-call to get
> such vcpu preemption info?

There is no such mechanism to record the lock holder in x86 kvm; maybe we
don't need to depend on the PLE handler algorithm to guess it if we can
know the lock holder vCPU directly.

Regards,
Wanpeng Li
Peter Zijlstra July 6, 2016, 7:58 a.m. UTC | #4
On Wed, Jul 06, 2016 at 02:46:34PM +0800, Wanpeng Li wrote:
> > SO it's easy for ppc to implement such interface. Note that yield_count is
> > set by powerVM/KVM.
> > and only pSeries can run a guest for now. :)
> >
> > I also review x86 related code, looks like we need add one hyer-call to get
> > such vcpu preemption info?
> 
> There is no such stuff to record lock holder in x86 kvm, maybe we
> don't need to depend on PLE handler algorithm to guess it if we can
> know lock holder vCPU directly.

x86/kvm has vcpu->preempted to indicate if a vcpu is currently preempted
or not. I'm just not sure if that is visible to the guest or how to make
it so.
Wanpeng Li July 6, 2016, 8:32 a.m. UTC | #5
2016-07-06 15:58 GMT+08:00 Peter Zijlstra <peterz@infradead.org>:
> On Wed, Jul 06, 2016 at 02:46:34PM +0800, Wanpeng Li wrote:
>> > SO it's easy for ppc to implement such interface. Note that yield_count is
>> > set by powerVM/KVM.
>> > and only pSeries can run a guest for now. :)
>> >
>> > I also review x86 related code, looks like we need add one hyer-call to get
>> > such vcpu preemption info?
>>
>> There is no such stuff to record lock holder in x86 kvm, maybe we
>> don't need to depend on PLE handler algorithm to guess it if we can
>> know lock holder vCPU directly.
>
> x86/kvm has vcpu->preempted to indicate if a vcpu is currently preempted
> or not. I'm just not sure if that is visible to the guest or how to make
> it so.

Yeah, I missed it. I can volunteer to do it if there is any idea,
ping Paolo. :)

Regards,
Wanpeng Li
xinhui July 6, 2016, 10:18 a.m. UTC | #6
On 2016-07-06 16:32, Wanpeng Li wrote:
> 2016-07-06 15:58 GMT+08:00 Peter Zijlstra <peterz@infradead.org>:
>> On Wed, Jul 06, 2016 at 02:46:34PM +0800, Wanpeng Li wrote:
>>>> SO it's easy for ppc to implement such interface. Note that yield_count is
>>>> set by powerVM/KVM.
>>>> and only pSeries can run a guest for now. :)
>>>>
>>>> I also review x86 related code, looks like we need add one hyer-call to get
>>>> such vcpu preemption info?
>>>
>>> There is no such stuff to record lock holder in x86 kvm, maybe we
>>> don't need to depend on PLE handler algorithm to guess it if we can
>>> know lock holder vCPU directly.
>>
>> x86/kvm has vcpu->preempted to indicate if a vcpu is currently preempted
>> or not. I'm just not sure if that is visible to the guest or how to make
>> it so.
>
> Yeah, I miss it. I can be the volunteer to do it if there is any idea,
> ping Paolo. :)
>
glad to know that. :)


> Regards,
> Wanpeng Li
>
Balbir Singh July 6, 2016, 10:54 a.m. UTC | #7
On Tue, 2016-06-28 at 10:43 -0400, Pan Xinhui wrote:
> This is to fix some lock holder preemption issues. Some other locks
> implementation do a spin loop before acquiring the lock itself. Currently
> kernel has an interface of bool vcpu_is_preempted(int cpu). It take the cpu
								^^ takes
> as parameter and return true if the cpu is preempted. Then kernel can break
> the spin loops upon on the retval of vcpu_is_preempted.

> As kernel has used this interface, So lets support it.

> Only pSeries need supoort it. And the fact is powerNV are built into same
		   ^^ support
> kernel image with pSeries. So we need return false if we are runnig as
> powerNV. The another fact is that lppaca->yiled_count keeps zero on
					  ^^ yield
> powerNV. So we can just skip the machine type.

> Suggested-by: Boqun Feng <boqun.feng@gmail.com>
> Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Signed-off-by: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/spinlock.h | 18 ++++++++++++++++++
>  1 file changed, 18 insertions(+)

> diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
> index 523673d..3ac9fcb 100644
> --- a/arch/powerpc/include/asm/spinlock.h
> +++ b/arch/powerpc/include/asm/spinlock.h
> @@ -52,6 +52,24 @@
>  #define SYNC_IO
>  #endif
>  
> +/*
> + * This support kernel to check if one cpu is preempted or not.
> + * Then we can fix some lock holder preemption issue.
> + */
> +#ifdef CONFIG_PPC_PSERIES
> +#define vcpu_is_preempted vcpu_is_preempted
> +static inline bool vcpu_is_preempted(int cpu)
> +{
> +	/*
> +	 * pSeries and powerNV can be built into same kernel image. In
> +	 * principle we need return false directly if we are running as
> +	 * powerNV. However the yield_count is always zero on powerNV, So
> +	 * skip such machine type check

Or you could use the ppc_md interface callbacks if required, but your
solution works as well

> +	 */
> +	return !!(be32_to_cpu(lppaca_of(cpu).yield_count) & 1);
> +}
> +#endif
> +
>  static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
>  {
>  	return lock.slock == 0;


Balbir Singh.
xinhui July 15, 2016, 3:35 p.m. UTC | #8
Hi, Balbir
	Sorry for the late response, I missed reading your mail.

On 2016-07-06 18:54, Balbir Singh wrote:
> On Tue, 2016-06-28 at 10:43 -0400, Pan Xinhui wrote:
>> This is to fix some lock holder preemption issues. Some other locks
>> implementation do a spin loop before acquiring the lock itself. Currently
>> kernel has an interface of bool vcpu_is_preempted(int cpu). It take the cpu
> 								^^ takes
>> as parameter and return true if the cpu is preempted. Then kernel can break
>> the spin loops upon on the retval of vcpu_is_preempted.
>>
>> As kernel has used this interface, So lets support it.
>>
>> Only pSeries need supoort it. And the fact is powerNV are built into same
> 		   ^^ support
>> kernel image with pSeries. So we need return false if we are runnig as
>> powerNV. The another fact is that lppaca->yiled_count keeps zero on
> 					  ^^ yield
>> powerNV. So we can just skip the machine type.
>>

My fault, I indeed need to avoid such typos.
Thanks for pointing them out.

>> Suggested-by: Boqun Feng <boqun.feng@gmail.com>
>> Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
>> Signed-off-by: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
>> ---
>>  arch/powerpc/include/asm/spinlock.h | 18 ++++++++++++++++++
>>  1 file changed, 18 insertions(+)
>>
>> diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
>> index 523673d..3ac9fcb 100644
>> --- a/arch/powerpc/include/asm/spinlock.h
>> +++ b/arch/powerpc/include/asm/spinlock.h
>> @@ -52,6 +52,24 @@
>>  #define SYNC_IO
>>  #endif
>>
>> +/*
>> + * This support kernel to check if one cpu is preempted or not.
>> + * Then we can fix some lock holder preemption issue.
>> + */
>> +#ifdef CONFIG_PPC_PSERIES
>> +#define vcpu_is_preempted vcpu_is_preempted
>> +static inline bool vcpu_is_preempted(int cpu)
>> +{
>> +	/*
>> +	 * pSeries and powerNV can be built into same kernel image. In
>> +	 * principle we need return false directly if we are running as
>> +	 * powerNV. However the yield_count is always zero on powerNV, So
>> +	 * skip such machine type check
>
> Or you could use the ppc_md interface callbacks if required, but your
> solution works as well
>

Thanks, so I can keep my code as is.

thanks
xinhui

>> +	 */
>> +	return !!(be32_to_cpu(lppaca_of(cpu).yield_count) & 1);
>> +}
>> +#endif
>> +
>>  static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
>>  {
>>  	return lock.slock == 0;
>
>
> Balbir Singh.
>

Patch

diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 523673d..3ac9fcb 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -52,6 +52,24 @@ 
 #define SYNC_IO
 #endif
 
+/*
+ * This support kernel to check if one cpu is preempted or not.
+ * Then we can fix some lock holder preemption issue.
+ */
+#ifdef CONFIG_PPC_PSERIES
+#define vcpu_is_preempted vcpu_is_preempted
+static inline bool vcpu_is_preempted(int cpu)
+{
+	/*
+	 * pSeries and powerNV can be built into same kernel image. In
+	 * principle we need return false directly if we are running as
+	 * powerNV. However the yield_count is always zero on powerNV, So
+	 * skip such machine type check
+	 */
+	return !!(be32_to_cpu(lppaca_of(cpu).yield_count) & 1);
+}
+#endif
+
 static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
 {
 	return lock.slock == 0;