
[1/3] kernel/sched: introduce vcpu preempted check interface

Message ID 1467049290-32359-2-git-send-email-xinhui.pan@linux.vnet.ibm.com (mailing list archive)
State Superseded

Commit Message

xinhui June 27, 2016, 5:41 p.m. UTC
This helps fix the lock holder preemption issue seen when running as a guest.

For kernel users, bool vcpu_is_preempted(int cpu) can be used to detect
whether a given vcpu is preempted or not.

The default implementation is a macro defined as false, so the compiler can
optimize it out if the arch does not support such a vcpu preemption check.

Archs can implement it by defining arch_vcpu_is_preempted().

Signed-off-by: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
---
 include/linux/sched.h | 9 +++++++++
 1 file changed, 9 insertions(+)
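
As a hedged illustration of the intended use (not code from this series;
try_acquire(), holder_cpu, and the slow-path step are hypothetical
placeholders), a busy-wait loop can bail out once the holder's vcpu is
known to be preempted:

/* Sketch only: stop busy-waiting when the lock holder's vcpu is preempted. */
while (!try_acquire(lock)) {
	if (vcpu_is_preempted(holder_cpu))
		break;	/* spinning on a descheduled vcpu only wastes cycles */
	cpu_relax();
}
/* ... fall back to a blocking slow path here ... */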

Comments

Peter Zijlstra June 27, 2016, 2 p.m. UTC | #1
On Mon, Jun 27, 2016 at 01:41:28PM -0400, Pan Xinhui wrote:
> +++ b/include/linux/sched.h
> @@ -3293,6 +3293,15 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
>  
>  #endif /* CONFIG_SMP */
>  
> +#ifdef arch_vcpu_is_preempted
> +static inline bool vcpu_is_preempted(int cpu)
> +{
> +	return arch_vcpu_is_preempted(cpu);
> +}
> +#else
> +#define vcpu_is_preempted(cpu)	false
> +#endif

#ifndef vcpu_is_preempted
#define vcpu_is_preempted(cpu)		(false)
#endif

Is so much simpler...

Also, please Cc the virt list so that other interested parties can
comment, and maybe also the s390 folks.
Peter Zijlstra June 27, 2016, 2:02 p.m. UTC | #2
On Mon, Jun 27, 2016 at 04:00:43PM +0200, Peter Zijlstra wrote:
> On Mon, Jun 27, 2016 at 01:41:28PM -0400, Pan Xinhui wrote:
> > +++ b/include/linux/sched.h
> > @@ -3293,6 +3293,15 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
> >  
> >  #endif /* CONFIG_SMP */
> >  
> > +#ifdef arch_vcpu_is_preempted
> > +static inline bool vcpu_is_preempted(int cpu)
> > +{
> > +	return arch_vcpu_is_preempted(cpu);
> > +}
> > +#else
> > +#define vcpu_is_preempted(cpu)	false
> > +#endif
> 
> #ifndef vcpu_is_preempted
> #define vcpu_is_preempted(cpu)		(false)
> #endif
> 
> Is so much simpler...
> 
> Also, please Cc the virt list so that other interested parties can
> comment, and maybe also the s390 folks.

And before you hurry off to post again, add a patch doing
mutex_spin_on_owner() and rwsem_spin_on_owner().
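
A minimal sketch of the kind of follow-up patch being asked for here,
modeled on the 4.7-era mutex_spin_on_owner() (the exact loop body and field
names are assumptions from that era's code, not part of this series):

static noinline
bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
{
	bool ret = true;

	rcu_read_lock();
	while (lock->owner == owner) {
		barrier();
		/*
		 * Stop spinning if the owner went off-cpu, if we need to
		 * reschedule, or if the owner's vcpu was preempted by the
		 * host: it cannot release the lock until it runs again.
		 */
		if (!owner->on_cpu || need_resched() ||
		    vcpu_is_preempted(task_cpu(owner))) {
			ret = false;
			break;
		}
		cpu_relax_lowlatency();
	}
	rcu_read_unlock();
	return ret;
}

rwsem_spin_on_owner() would gain the same vcpu_is_preempted(task_cpu(owner))
test in its spin loop.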
Boqun Feng June 27, 2016, 2:05 p.m. UTC | #3
On Mon, Jun 27, 2016 at 01:41:28PM -0400, Pan Xinhui wrote:
> This helps fix the lock holder preemption issue seen when running as a guest.
> 
> For kernel users, bool vcpu_is_preempted(int cpu) can be used to detect
> whether a given vcpu is preempted or not.
> 
> The default implementation is a macro defined as false, so the compiler can
> optimize it out if the arch does not support such a vcpu preemption check.
> 
> Archs can implement it by defining arch_vcpu_is_preempted().
> 
> Signed-off-by: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
> ---
>  include/linux/sched.h | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 6e42ada..dc0a9c3 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -3293,6 +3293,15 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
>  
>  #endif /* CONFIG_SMP */
>  
> +#ifdef arch_vcpu_is_preempted
> +static inline bool vcpu_is_preempted(int cpu)
> +{
> +	return arch_vcpu_is_preempted(cpu);
> +}
> +#else
> +#define vcpu_is_preempted(cpu)	false
> +#endif
> +

I think you are missing Peter's comment here. We can

#ifndef vcpu_is_preempted
#define vcpu_is_preempted(cpu)	false
#endif

And different archs implement their own versions of vcpu_is_preempted(),
IOW, no need for an arch_vcpu_is_preempted().

Regards,
Boqun

>  extern long sched_setaffinity(pid_t pid, const struct cpumask *new_mask);
>  extern long sched_getaffinity(pid_t pid, struct cpumask *mask);
>  
> -- 
> 2.4.11
>
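
To illustrate what Boqun describes above, a hedged sketch (the arch name and
helper below are hypothetical): the arch header defines vcpu_is_preempted()
itself, so the common fallback compiles to a constant false everywhere else:

/* arch/xyz/include/asm/spinlock.h (hypothetical arch header) */
bool xyz_vcpu_is_preempted(int cpu);	/* backed by hypervisor/steal-time data */
#define vcpu_is_preempted(cpu)	xyz_vcpu_is_preempted(cpu)

/* include/linux/sched.h -- common fallback, seen only without an override */
#ifndef vcpu_is_preempted
#define vcpu_is_preempted(cpu)	(false)
#endif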
xinhui June 28, 2016, 3:14 a.m. UTC | #4
On 2016/06/27 22:02, Peter Zijlstra wrote:
> On Mon, Jun 27, 2016 at 04:00:43PM +0200, Peter Zijlstra wrote:
>> On Mon, Jun 27, 2016 at 01:41:28PM -0400, Pan Xinhui wrote:
>>> +++ b/include/linux/sched.h
>>> @@ -3293,6 +3293,15 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
>>>
>>>   #endif /* CONFIG_SMP */
>>>
>>> +#ifdef arch_vcpu_is_preempted
>>> +static inline bool vcpu_is_preempted(int cpu)
>>> +{
>>> +	return arch_vcpu_is_preempted(cpu);
>>> +}
>>> +#else
>>> +#define vcpu_is_preempted(cpu)	false
>>> +#endif
>>
>> #ifndef vcpu_is_preempted
>> #define vcpu_is_preempted(cpu)		(false)
>> #endif
>>
>> Is so much simpler...
>>
fair enough.

>> Also, please Cc the virt list so that other interested parties can
>> comment, and maybe also the s390 folks.
>
oh. I forgot that. maybe we need to cc even more:

root@ltcalpine2-lp13:~/linux# find ./arch -name kvm
./arch/arm/kvm
./arch/arm64/kvm
./arch/mips/kvm
./arch/powerpc/kvm
./arch/s390/kvm
./arch/tile/kvm
./arch/x86/kvm

> And before you hurry off to post again, add a patch doing
> mutex_spin_on_owner() and rwsem_spin_on_owner().
>
will do that.

thanks for your suggestion. :)
xinhui June 28, 2016, 3:15 a.m. UTC | #5
On 2016/06/27 22:05, Boqun Feng wrote:
> On Mon, Jun 27, 2016 at 01:41:28PM -0400, Pan Xinhui wrote:
>> This helps fix the lock holder preemption issue seen when running as a guest.
>>
>> For kernel users, bool vcpu_is_preempted(int cpu) can be used to detect
>> whether a given vcpu is preempted or not.
>>
>> The default implementation is a macro defined as false, so the compiler can
>> optimize it out if the arch does not support such a vcpu preemption check.
>>
>> Archs can implement it by defining arch_vcpu_is_preempted().
>>
>> Signed-off-by: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
>> ---
>>   include/linux/sched.h | 9 +++++++++
>>   1 file changed, 9 insertions(+)
>>
>> diff --git a/include/linux/sched.h b/include/linux/sched.h
>> index 6e42ada..dc0a9c3 100644
>> --- a/include/linux/sched.h
>> +++ b/include/linux/sched.h
>> @@ -3293,6 +3293,15 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
>>
>>   #endif /* CONFIG_SMP */
>>
>> +#ifdef arch_vcpu_is_preempted
>> +static inline bool vcpu_is_preempted(int cpu)
>> +{
>> +	return arch_vcpu_is_preempted(cpu);
>> +}
>> +#else
>> +#define vcpu_is_preempted(cpu)	false
>> +#endif
>> +
>
> I think you are missing Peter's comment here. We can
>
> #ifndef vcpu_is_preempted
> #define vcpu_is_preempted(cpu)	false
> #endif
>
> And different archs implement their own versions of vcpu_is_preempted(),
> IOW, no need for an arch_vcpu_is_preempted().
>
yes, right.
just vcpu_is_preempted(), no arch_vcpu_is_preempted().
thanks

> Regards,
> Boqun
>
>>   extern long sched_setaffinity(pid_t pid, const struct cpumask *new_mask);
>>   extern long sched_getaffinity(pid_t pid, struct cpumask *mask);
>>
>> --
>> 2.4.11
>>
Heiko Carstens June 28, 2016, 7 a.m. UTC | #6
On Mon, Jun 27, 2016 at 04:00:43PM +0200, Peter Zijlstra wrote:
> On Mon, Jun 27, 2016 at 01:41:28PM -0400, Pan Xinhui wrote:
> > +++ b/include/linux/sched.h
> > @@ -3293,6 +3293,15 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
> >  
> >  #endif /* CONFIG_SMP */
> >  
> > +#ifdef arch_vcpu_is_preempted
> > +static inline bool vcpu_is_preempted(int cpu)
> > +{
> > +	return arch_vcpu_is_preempted(cpu);
> > +}
> > +#else
> > +#define vcpu_is_preempted(cpu)	false
> > +#endif
> 
> #ifndef vcpu_is_preempted
> #define vcpu_is_preempted(cpu)		(false)
> #endif
> 
> Is so much simpler...
> 
> Also, please Cc the virt list so that other interested parties can
> comment, and maybe also the s390 folks.

The s390 implementation would be to simply use cpu_is_preempted() from
arch/s390/lib/spinlock.c.
It's nice that there will be a common code function for this!
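
A hedged sketch of that s390 wiring, modeled on the cpu_is_preempted() logic
in arch/s390/lib/spinlock.c at the time (details may differ in a real patch):

/* arch/s390: map the common interface onto the existing spinlock helper. */
static inline bool arch_vcpu_is_preempted(int cpu)
{
	/* A cpu sitting in enabled wait is idle, not preempted. */
	if (test_cpu_flag_of(CIF_ENABLED_WAIT, cpu))
		return false;
	/* The hypervisor still has the vcpu scheduled; it can make progress. */
	if (smp_vcpu_scheduled(cpu))
		return false;
	return true;
}
#define vcpu_is_preempted	arch_vcpu_is_preempted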
xinhui June 28, 2016, 9:47 a.m. UTC | #7
On 2016/06/28 15:00, Heiko Carstens wrote:
> On Mon, Jun 27, 2016 at 04:00:43PM +0200, Peter Zijlstra wrote:
>> On Mon, Jun 27, 2016 at 01:41:28PM -0400, Pan Xinhui wrote:
>>> +++ b/include/linux/sched.h
>>> @@ -3293,6 +3293,15 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
>>>
>>>   #endif /* CONFIG_SMP */
>>>
>>> +#ifdef arch_vcpu_is_preempted
>>> +static inline bool vcpu_is_preempted(int cpu)
>>> +{
>>> +	return arch_vcpu_is_preempted(cpu);
>>> +}
>>> +#else
>>> +#define vcpu_is_preempted(cpu)	false
>>> +#endif
>>
>> #ifndef vcpu_is_preempted
>> #define vcpu_is_preempted(cpu)		(false)
>> #endif
>>
>> Is so much simpler...
>>
>> Also, please Cc the virt list so that other interested parties can
>> comment, and maybe also the s390 folks.
>
> The s390 implementation would be to simply use cpu_is_preempted() from
> arch/s390/lib/spinlock.c.
that's great.

> It's nice that there will be a common code function for this!
>

Patch

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 6e42ada..dc0a9c3 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -3293,6 +3293,15 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
 
 #endif /* CONFIG_SMP */
 
+#ifdef arch_vcpu_is_preempted
+static inline bool vcpu_is_preempted(int cpu)
+{
+	return arch_vcpu_is_preempted(cpu);
+}
+#else
+#define vcpu_is_preempted(cpu)	false
+#endif
+
 extern long sched_setaffinity(pid_t pid, const struct cpumask *new_mask);
 extern long sched_getaffinity(pid_t pid, struct cpumask *mask);