[kvm-unit-tests,v1,1/3] lib: provide generic spinlock

Message ID 20170512102042.4956-2-david@redhat.com
State New

Commit Message

David Hildenbrand May 12, 2017, 10:20 a.m.
Let's provide a basic lock implementation that should work on most
architectures.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 lib/asm-generic/spinlock.h | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

Comments

Thomas Huth May 12, 2017, 10:58 a.m. | #1
On 12.05.2017 12:20, David Hildenbrand wrote:
> Let's provide a basic lock implementation that should work on most
> architectures.
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  lib/asm-generic/spinlock.h | 16 +++++++++++++++-
>  1 file changed, 15 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/asm-generic/spinlock.h b/lib/asm-generic/spinlock.h
> index 3141744..e8c3a58 100644
> --- a/lib/asm-generic/spinlock.h
> +++ b/lib/asm-generic/spinlock.h
> @@ -1,4 +1,18 @@
>  #ifndef _ASM_GENERIC_SPINLOCK_H_
>  #define _ASM_GENERIC_SPINLOCK_H_
> -#error need architecture specific asm/spinlock.h
> +
> +struct spinlock {
> +    unsigned int v;
> +};
> +
> +static inline void spin_lock(struct spinlock *lock)
> +{
> +	while (!__sync_bool_compare_and_swap(&lock->v, 0, 1));
> +}
> +
> +static inline void spin_unlock(struct spinlock *lock)
> +{
> +	__sync_bool_compare_and_swap(&lock->v, 1, 0);
> +}
> +
>  #endif
> 

Reviewed-by: Thomas Huth <thuth@redhat.com>

--
To unsubscribe from this list: send the line "unsubscribe kvm-ppc" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Radim Krčmář May 12, 2017, 4:43 p.m. | #2
2017-05-12 12:20+0200, David Hildenbrand:
> Let's provide a basic lock implementation that should work on most
> architectures.
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  lib/asm-generic/spinlock.h | 16 +++++++++++++++-
>  1 file changed, 15 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/asm-generic/spinlock.h b/lib/asm-generic/spinlock.h
> index 3141744..e8c3a58 100644
> --- a/lib/asm-generic/spinlock.h
> +++ b/lib/asm-generic/spinlock.h
> @@ -1,4 +1,18 @@
>  #ifndef _ASM_GENERIC_SPINLOCK_H_
>  #define _ASM_GENERIC_SPINLOCK_H_
> -#error need architecture specific asm/spinlock.h
> +
> +struct spinlock {
> +    unsigned int v;
> +};
> +
> +static inline void spin_lock(struct spinlock *lock)
> +{
> +	while (!__sync_bool_compare_and_swap(&lock->v, 0, 1));
> +}
> +
> +static inline void spin_unlock(struct spinlock *lock)
> +{
> +	__sync_bool_compare_and_swap(&lock->v, 1, 0);
> +}

x86 would be better with __sync_lock_test_and_set() and
__sync_lock_release() as they generate the same code we have now,
instead of two locked cmpxchgs.

GCC mentions that some targets might have problems with that, but they
seem to fall back to boolean value and compare-and-swap.

Any reason to avoid "while(__sync_lock_test_and_set(&lock->v, 1));"?

Thanks.
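A sketch of the variant Radim suggests, using GCC's `__sync_lock_*` builtins (illustrative only, not code from the thread; the struct layout is copied from the patch above):

```c
/* Hypothetical alternative to the patch's CAS loop, per Radim's
 * suggestion. __sync_lock_test_and_set() atomically stores 1 and
 * returns the previous value with acquire semantics;
 * __sync_lock_release() stores 0 with release semantics. */
struct spinlock {
	unsigned int v;
};

static inline void spin_lock(struct spinlock *lock)
{
	/* Spin while the old value was already 1, i.e. someone else
	 * holds the lock. */
	while (__sync_lock_test_and_set(&lock->v, 1))
		;
}

static inline void spin_unlock(struct spinlock *lock)
{
	__sync_lock_release(&lock->v);
}
```

On x86 this compiles to a plain `xchg` for lock and an ordinary store for unlock, avoiding the two locked `cmpxchg` instructions the CAS version generates.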
Paolo Bonzini May 12, 2017, 4:49 p.m. | #3
On 12/05/2017 18:43, Radim Krčmář wrote:
> 2017-05-12 12:20+0200, David Hildenbrand:
>> Let's provide a basic lock implementation that should work on most
>> architectures.
>>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>>  lib/asm-generic/spinlock.h | 16 +++++++++++++++-
>>  1 file changed, 15 insertions(+), 1 deletion(-)
>>
>> diff --git a/lib/asm-generic/spinlock.h b/lib/asm-generic/spinlock.h
>> index 3141744..e8c3a58 100644
>> --- a/lib/asm-generic/spinlock.h
>> +++ b/lib/asm-generic/spinlock.h
>> @@ -1,4 +1,18 @@
>>  #ifndef _ASM_GENERIC_SPINLOCK_H_
>>  #define _ASM_GENERIC_SPINLOCK_H_
>> -#error need architecture specific asm/spinlock.h
>> +
>> +struct spinlock {
>> +    unsigned int v;
>> +};
>> +
>> +static inline void spin_lock(struct spinlock *lock)
>> +{
>> +	while (!__sync_bool_compare_and_swap(&lock->v, 0, 1));
>> +}
>> +
>> +static inline void spin_unlock(struct spinlock *lock)
>> +{
>> +	__sync_bool_compare_and_swap(&lock->v, 1, 0);
>> +}
> 
> x86 would be better with __sync_lock_test_and_set() and
> __sync_lock_release() as they generate the same code we have now,
> instead of two locked cmpxchgs.

I agree these are a better match.

Paolo

> GCC mentions that some targets might have problems with that, but they
> seem to fall back to boolean value and compare-and-swap.
> 
> Any reason to avoid "while(__sync_lock_test_and_set(&lock->v, 1));"?
> 
> Thanks.
> 
David Hildenbrand May 15, 2017, 7:48 a.m. | #4
On 12.05.2017 18:43, Radim Krčmář wrote:
> 2017-05-12 12:20+0200, David Hildenbrand:
>> Let's provide a basic lock implementation that should work on most
>> architectures.
>>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>>  lib/asm-generic/spinlock.h | 16 +++++++++++++++-
>>  1 file changed, 15 insertions(+), 1 deletion(-)
>>
>> diff --git a/lib/asm-generic/spinlock.h b/lib/asm-generic/spinlock.h
>> index 3141744..e8c3a58 100644
>> --- a/lib/asm-generic/spinlock.h
>> +++ b/lib/asm-generic/spinlock.h
>> @@ -1,4 +1,18 @@
>>  #ifndef _ASM_GENERIC_SPINLOCK_H_
>>  #define _ASM_GENERIC_SPINLOCK_H_
>> -#error need architecture specific asm/spinlock.h
>> +
>> +struct spinlock {
>> +    unsigned int v;
>> +};
>> +
>> +static inline void spin_lock(struct spinlock *lock)
>> +{
>> +	while (!__sync_bool_compare_and_swap(&lock->v, 0, 1));
>> +}
>> +
>> +static inline void spin_unlock(struct spinlock *lock)
>> +{
>> +	__sync_bool_compare_and_swap(&lock->v, 1, 0);
>> +}
> 
> x86 would be better with __sync_lock_test_and_set() and
> __sync_lock_release() as they generate the same code we have now,
> instead of two locked cmpxchgs.

Both should work; however, your pair looks nicer and also seems to
work for x86, powerpc and s390x (at least GCC doesn't spit fire).

So I'll resend, thanks!

> 
> GCC mentions that some targets might have problems with that, but they
> seem to fall back to boolean value and compare-and-swap.
> 
> Any reason to avoid "while(__sync_lock_test_and_set(&lock->v, 1));"?

I haven't found anything speaking against it. There are even multiple
articles suggesting either __sync_lock_test_and_set or
__sync_bool_compare_and_swap in a loop for exactly this purpose.

> 
> Thanks.
>

Patch

diff --git a/lib/asm-generic/spinlock.h b/lib/asm-generic/spinlock.h
index 3141744..e8c3a58 100644
--- a/lib/asm-generic/spinlock.h
+++ b/lib/asm-generic/spinlock.h
@@ -1,4 +1,18 @@ 
 #ifndef _ASM_GENERIC_SPINLOCK_H_
 #define _ASM_GENERIC_SPINLOCK_H_
-#error need architecture specific asm/spinlock.h
+
+struct spinlock {
+    unsigned int v;
+};
+
+static inline void spin_lock(struct spinlock *lock)
+{
+	while (!__sync_bool_compare_and_swap(&lock->v, 0, 1));
+}
+
+static inline void spin_unlock(struct spinlock *lock)
+{
+	__sync_bool_compare_and_swap(&lock->v, 1, 0);
+}
+
 #endif
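
As a user-space sanity check of the CAS-based lock above (kvm-unit-tests itself runs on bare metal with its own SMP bring-up, so this pthread harness is only an analogy; `worker` and `run_counter_test` are names made up for the example):

```c
#include <pthread.h>
#include <stddef.h>

/* The generic lock from the patch, reproduced verbatim so the example
 * is self-contained. */
struct spinlock {
	unsigned int v;
};

static inline void spin_lock(struct spinlock *lock)
{
	while (!__sync_bool_compare_and_swap(&lock->v, 0, 1))
		;
}

static inline void spin_unlock(struct spinlock *lock)
{
	__sync_bool_compare_and_swap(&lock->v, 1, 0);
}

static struct spinlock counter_lock;
static unsigned long counter;

/* Each thread bumps the shared counter many times under the lock;
 * without the lock some increments would race and be lost. */
static void *worker(void *arg)
{
	(void)arg;
	for (int i = 0; i < 100000; i++) {
		spin_lock(&counter_lock);
		counter++;
		spin_unlock(&counter_lock);
	}
	return NULL;
}

/* Runs 4 workers and returns the final counter value; if the lock
 * serializes the increments this is 4 * 100000. */
static unsigned long run_counter_test(void)
{
	pthread_t t[4];

	counter = 0;
	for (int i = 0; i < 4; i++)
		pthread_create(&t[i], NULL, worker, NULL);
	for (int i = 0; i < 4; i++)
		pthread_join(t[i], NULL);
	return counter;
}
```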