
[01/15] atomic: introduce atomic operations

Message ID 1344407156-25562-2-git-send-email-qemulist@gmail.com
State New

Commit Message

pingfan liu Aug. 8, 2012, 6:25 a.m. UTC
From: Liu Ping Fan <pingfank@linux.vnet.ibm.com>

Once we run outside the global lock, we are exposed to SMP races
at the lowest level, so we need atomic operations.

This file is heavily copied from the kernel. Currently only the x86
atomic ops are included; other architectures will be added in the future.

Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
---
 include/qemu/atomic.h |  161 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 161 insertions(+), 0 deletions(-)
 create mode 100644 include/qemu/atomic.h

Comments

Paolo Bonzini Aug. 8, 2012, 8:55 a.m. UTC | #1
On 08/08/2012 08:25, Liu Ping Fan wrote:
> From: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
> 
> Once we run outside the global lock, we are exposed to SMP races
> at the lowest level, so we need atomic operations.
> 
> This file is heavily copied from the kernel.

Then it cannot be GPLv2 _or later_.  Please use the version that I
pointed you to.

Paolo

> Currently only the x86 atomic ops are included;
> other architectures will be added in the future.
> 
> Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
> ---
>  include/qemu/atomic.h |  161 +++++++++++++++++++++++++++++++++++++++++++++++++
>  1 files changed, 161 insertions(+), 0 deletions(-)
>  create mode 100644 include/qemu/atomic.h
> 
> diff --git a/include/qemu/atomic.h b/include/qemu/atomic.h
> new file mode 100644
> index 0000000..8e1fc3e
> --- /dev/null
> +++ b/include/qemu/atomic.h
> @@ -0,0 +1,161 @@
> +/*
> + * Simple interface for atomic operations.
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or later.
> + * See the COPYING file in the top-level directory.
> + *
> + */
Avi Kivity Aug. 8, 2012, 9:02 a.m. UTC | #2
On 08/08/2012 09:25 AM, Liu Ping Fan wrote:
> From: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
> 
> Once we run outside the global lock, we are exposed to SMP races
> at the lowest level, so we need atomic operations.
> 
> This file is heavily copied from the kernel. Currently only the x86
> atomic ops are included; other architectures will be added in the future.
> 

I propose we use gcc builtins.  We get automatic architecture support,
and tuning for newer processors if the user so chooses.

http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Atomic-Builtins.html

In May 2031 we can switch to C11 atomics.
陳韋任 (Wei-Ren Chen) Aug. 8, 2012, 9:05 a.m. UTC | #3
> I propose we use gcc builtins.  We get automatic architecture support,
> and tuning for newer processors if the user so chooses.
> 
> http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Atomic-Builtins.html
> 
> In May 2031 we can switch to C11 atomics.
         ^^^^
  Maybe 2013?
Avi Kivity Aug. 8, 2012, 9:15 a.m. UTC | #4
On 08/08/2012 12:05 PM, 陳韋任 (Wei-Ren Chen) wrote:
>> I propose we use gcc builtins.  We get automatic architecture support,
>> and tuning for newer processors if the user so chooses.
>> 
>> http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Atomic-Builtins.html
>> 
>> In May 2031 we can switch to C11 atomics.
>          ^^^^
>   Maybe 2013?
> 

Maybe in 2013 we'll get a compiler that supports C11...
Peter Maydell Aug. 8, 2012, 9:21 a.m. UTC | #5
On 8 August 2012 07:25, Liu Ping Fan <qemulist@gmail.com> wrote:
> +static inline void atomic_sub(int i, Atomic *v)
> +{
> +    asm volatile("lock; subl %1,%0"
> +             : "+m" (v->counter)
> +             : "ir" (i));
> +}

NAK. We don't want random inline assembly implementations of locking
primitives in QEMU, they are way too hard to keep working with all the
possible host architectures we support. I spent some time a while back
getting rid of the (variously busted) versions we had previously.

If you absolutely must use atomic ops, use the gcc builtins. For
preference, stick to higher level and less error-prone abstractions.

-- PMM
Stefan Hajnoczi Aug. 8, 2012, 1:09 p.m. UTC | #6
On Wed, Aug 8, 2012 at 10:21 AM, Peter Maydell <peter.maydell@linaro.org> wrote:
> On 8 August 2012 07:25, Liu Ping Fan <qemulist@gmail.com> wrote:
>> +static inline void atomic_sub(int i, Atomic *v)
>> +{
>> +    asm volatile("lock; subl %1,%0"
>> +             : "+m" (v->counter)
>> +             : "ir" (i));
>> +}
>
> NAK. We don't want random inline assembly implementations of locking
> primitives in QEMU, they are way too hard to keep working with all the
> possible host architectures we support. I spent some time a while back
> getting rid of the (variously busted) versions we had previously.
>
> If you absolutely must use atomic ops, use the gcc builtins. For
> preference, stick to higher level and less error-prone abstractions.

We're spoilt for choice here:

1. Atomic built-ins from gcc
2. glib atomics

No need to roll our own or copy the implementation from the kernel.

Stefan
Paolo Bonzini Aug. 8, 2012, 1:18 p.m. UTC | #7
On 08/08/2012 15:09, Stefan Hajnoczi wrote:
>> > NAK. We don't want random inline assembly implementations of locking
>> > primitives in QEMU, they are way too hard to keep working with all the
>> > possible host architectures we support. I spent some time a while back
>> > getting rid of the (variously busted) versions we had previously.
>> >
>> > If you absolutely must use atomic ops, use the gcc builtins. For
>> > preference, stick to higher level and less error-prone abstractions.
> We're spoilt for choice here:
> 
> 1. Atomic built-ins from gcc
> 2. glib atomics
> 
> No need to roll our own or copy the implementation from the kernel.

To some extent we need to because:

1. GCC atomics look ugly, :) do not provide rmb/wmb, and in some
versions of GCC mb is known to be (wrongly) a no-op.

2. glib atomics do not provide mb/rmb/wmb either, and
g_atomic_int_get/g_atomic_int_set are inefficient: they add barriers
everywhere, while it is clearer if you put barriers manually, and you
often do not need barriers in the get side.  glib atomics also do not
provide xchg.

I agree however that a small wrapper around GCC atomics is much better
than assembly.  Assembly can be limited to the memory barriers (where we
already have it anyway).

Paolo
Peter Maydell Aug. 8, 2012, 1:32 p.m. UTC | #8
On 8 August 2012 14:18, Paolo Bonzini <pbonzini@redhat.com> wrote:
> On 08/08/2012 15:09, Stefan Hajnoczi wrote:
>> No need to roll our own or copy the implementation from the kernel.
>
> To some extent we need to because:
>
> 1. GCC atomics look ugly, :) do not provide rmb/wmb, and in some
> versions of GCC mb is known to be (wrongly) a no-op.
>
> 2. glib atomics do not provide mb/rmb/wmb either, and
> g_atomic_int_get/g_atomic_int_set are inefficient: they add barriers
> everywhere, while it is clearer if you put barriers manually, and you
> often do not need barriers in the get side.  glib atomics also do not
> provide xchg.

These are arguments in favour of "don't try to use atomic ops" --
if serious large projects like GCC and glib can't produce working
efficient implementations for all target architectures, what chance
do we have?

-- PMM
Paolo Bonzini Aug. 8, 2012, 1:49 p.m. UTC | #9
On 08/08/2012 15:32, Peter Maydell wrote:
>> > 1. GCC atomics look ugly, :) do not provide rmb/wmb, and in some
>> > versions of GCC mb is known to be (wrongly) a no-op.
>> >
>> > 2. glib atomics do not provide mb/rmb/wmb either, and
>> > g_atomic_int_get/g_atomic_int_set are inefficient: they add barriers
>> > everywhere, while it is clearer if you put barriers manually, and you
>> > often do not need barriers in the get side.  glib atomics also do not
>> > provide xchg.
> These are arguments in favour of "don't try to use atomic ops" --
> if serious large projects like GCC and glib can't produce working
> efficient implementations for all target architectures, what chance
> do we have?

Well, maybe... but the flaws in both GCC and glib are small in size
(even though large in importance, at least for us) and we can work
around them easily.  mb/rmb/wmb is essentially the small set of atomic
operations that we're already using.

Paolo
Avi Kivity Aug. 8, 2012, 2 p.m. UTC | #10
On 08/08/2012 04:49 PM, Paolo Bonzini wrote:
> On 08/08/2012 15:32, Peter Maydell wrote:
>>> > 1. GCC atomics look ugly, :) do not provide rmb/wmb, and in some
>>> > versions of GCC mb is known to be (wrongly) a no-op.
>>> >
>>> > 2. glib atomics do not provide mb/rmb/wmb either, and
>>> > g_atomic_int_get/g_atomic_int_set are inefficient: they add barriers
>>> > everywhere, while it is clearer if you put barriers manually, and you
>>> > often do not need barriers in the get side.  glib atomics also do not
>>> > provide xchg.
>> These are arguments in favour of "don't try to use atomic ops" --
>> if serious large projects like GCC and glib can't produce working
>> efficient implementations for all target architectures, what chance
>> do we have?
> 
> Well, maybe... but the flaws in both GCC and glib are small in size
> (even though large in importance, at least for us) and we can work
> around them easily.  mb/rmb/wmb is essentially the small set of atomic
> operations that we're already using.

We can easily define rmb()/wmb() to be __sync_synchronize() as a
default, and only override them where it matters.

Patch

diff --git a/include/qemu/atomic.h b/include/qemu/atomic.h
new file mode 100644
index 0000000..8e1fc3e
--- /dev/null
+++ b/include/qemu/atomic.h
@@ -0,0 +1,161 @@
+/*
+ * Simple interface for atomic operations.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef __QEMU_ATOMIC_H
+#define __QEMU_ATOMIC_H 1
+
+typedef struct Atomic {
+    int counter;
+} Atomic;
+
+
+#if defined(__i386__) || defined(__x86_64__)
+
+/**
+ * atomic_read - read atomic variable
+ * @v: pointer of type Atomic
+ *
+ * Atomically reads the value of @v.
+ */
+static inline int atomic_read(const Atomic *v)
+{
+    return (*(volatile int *)&(v)->counter);
+}
+
+/**
+ * atomic_set - set atomic variable
+ * @v: pointer of type Atomic
+ * @i: required value
+ *
+ * Atomically sets the value of @v to @i.
+ */
+static inline void atomic_set(Atomic *v, int i)
+{
+    v->counter = i;
+}
+
+/**
+ * atomic_add - add integer to atomic variable
+ * @i: integer value to add
+ * @v: pointer of type Atomic
+ *
+ * Atomically adds @i to @v.
+ */
+static inline void atomic_add(int i, Atomic *v)
+{
+    asm volatile("lock; addl %1,%0"
+             : "+m" (v->counter)
+             : "ir" (i));
+}
+
+/**
+ * atomic_sub - subtract integer from atomic variable
+ * @i: integer value to subtract
+ * @v: pointer of type Atomic
+ *
+ * Atomically subtracts @i from @v.
+ */
+static inline void atomic_sub(int i, Atomic *v)
+{
+    asm volatile("lock; subl %1,%0"
+             : "+m" (v->counter)
+             : "ir" (i));
+}
+
+/**
+ * atomic_sub_and_test - subtract value from variable and test result
+ * @i: integer value to subtract
+ * @v: pointer of type Atomic
+ *
+ * Atomically subtracts @i from @v and returns
+ * true if the result is zero, or false for all
+ * other cases.
+ */
+static inline int atomic_sub_and_test(int i, Atomic *v)
+{
+    unsigned char c;
+
+    asm volatile("lock; subl %2,%0; sete %1"
+             : "+m" (v->counter), "=qm" (c)
+             : "ir" (i) : "memory");
+    return c;
+}
+
+/**
+ * atomic_inc - increment atomic variable
+ * @v: pointer of type Atomic
+ *
+ * Atomically increments @v by 1.
+ */
+static inline void atomic_inc(Atomic *v)
+{
+    asm volatile("lock; incl %0"
+             : "+m" (v->counter));
+}
+
+/**
+ * atomic_dec - decrement atomic variable
+ * @v: pointer of type Atomic
+ *
+ * Atomically decrements @v by 1.
+ */
+static inline void atomic_dec(Atomic *v)
+{
+    asm volatile("lock; decl %0"
+             : "+m" (v->counter));
+}
+
+/**
+ * atomic_dec_and_test - decrement and test
+ * @v: pointer of type Atomic
+ *
+ * Atomically decrements @v by 1 and
+ * returns true if the result is 0, or false for all other
+ * cases.
+ */
+static inline int atomic_dec_and_test(Atomic *v)
+{
+    unsigned char c;
+
+    asm volatile("lock; decl %0; sete %1"
+             : "+m" (v->counter), "=qm" (c)
+             : : "memory");
+    return c != 0;
+}
+
+/**
+ * atomic_inc_and_test - increment and test
+ * @v: pointer of type Atomic
+ *
+ * Atomically increments @v by 1
+ * and returns true if the result is zero, or false for all
+ * other cases.
+ */
+static inline int atomic_inc_and_test(Atomic *v)
+{
+    unsigned char c;
+
+    asm volatile("lock; incl %0; sete %1"
+             : "+m" (v->counter), "=qm" (c)
+             : : "memory");
+    return c != 0;
+}
+
+static inline int atomic_add_and_return(int i, Atomic *v)
+{
+    int ret = i;
+
+    asm volatile ("lock; xaddl %0, %1"
+            : "+r" (ret), "+m" (v->counter)
+            : : "memory", "cc");
+
+    return ret + i;
+}
+#endif
+
+#endif