
[1/8] qatomic: add smp_mb__before/after_rmw()

Message ID 20230303171939.237819-2-pbonzini@redhat.com
State New
Series: Fix missing memory barriers on ARM

Commit Message

Paolo Bonzini March 3, 2023, 5:19 p.m. UTC
On ARM, seqcst loads and stores (which QEMU does not use) are compiled
respectively as LDAR and STLR instructions.  Even though STLR is also
used for store-release operations, STLR followed by LDAR provides
store-against-load ordering, which is stronger than a store-release.
Compare this to ARMv7, where store-release is DMB+STR and store-seqcst
is DMB+STR+DMB.

This means that on ARM a sequence of

  qatomic_store_release(&y, ...);         // STLR
  a = qatomic_load_acquire(&x);           // LDAR

provides stronger ordering at the processor level than the two MOV
instructions you'd get on x86.

Likewise, on ARM sequentially consistent read-modify-write operations only
need to use LDAXR and STLXR respectively for the load and the store, which
is weaker than the LOCK prefix used on x86.

In a strange twist of events, however, the _stronger_ semantics
of the ARM instructions can end up causing bugs on ARM, not on x86.
The problems occur when seqcst atomics are mixed with relaxed atomics.

QEMU's atomics try to bridge the Linux API (that most of the developers
are familiar with) and the C11 API, and the two have a substantial
difference:

- in Linux, strongly-ordered atomics such as atomic_add_return() affect
  the global ordering of _all_ memory operations, including for example
  READ_ONCE()/WRITE_ONCE()

- in C11, sequentially consistent atomics (except for seq-cst fences)
  only affect the ordering of sequentially consistent operations.
  In particular, since relaxed loads are done with LDR on ARM, they are
  not ordered against seqcst stores (which are done with STLR).

QEMU implements high-level synchronization primitives with the idea that
the primitives contain the necessary memory barriers, and the callers can
use relaxed atomics (qatomic_read/qatomic_set) or even regular accesses.
This is very much incompatible with the C11 view that seqcst accesses
are only ordered against other seqcst accesses, and requires using seqcst
fences as in the following example:

   qatomic_set(&y, 1);            qatomic_set(&x, 1);
   smp_mb();                      smp_mb();
   ... qatomic_read(&x) ...       ... qatomic_read(&y) ...

When a qatomic_*() read-modify-write operation is used instead of one
or both stores, developers who are more familiar with the Linux API may
be tempted to omit the smp_mb(), which will work on x86 but not on ARM.

This nasty difference between Linux and C11 read-modify-write operations
has already caused issues in util/async.c and more are being found.
Provide something similar to Linux smp_mb__before/after_atomic(); this
has the double function of documenting clearly why there is a memory
barrier, and avoiding a double barrier on x86 and s390x systems.

The new macro can already be put to use in qatomic_mb_set().

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 docs/devel/atomics.rst | 26 +++++++++++++++++++++-----
 include/qemu/atomic.h  | 17 ++++++++++++++++-
 2 files changed, 37 insertions(+), 6 deletions(-)

Comments

Richard Henderson March 5, 2023, 6:57 p.m. UTC | #1
On 3/3/23 09:19, Paolo Bonzini wrote:
> This nasty difference between Linux and C11 read-modify-write operations
> has already caused issues in util/async.c and more are being found.
> Provide something similar to Linux smp_mb__before/after_atomic(); this
> has the double function of documenting clearly why there is a memory
> barrier, and avoiding a double barrier on x86 and s390x systems.

It does make me question our choice to use neither the Linux nor the C11 model.

> +      +--------------------------------+
> +      | QEMU (incorrect)               |
> +      +================================+
> +      | ::                             |
> +      |                                |
> +      |   a = qatomic_fetch_add(&x, 2);|
> +      |   smp_mb__after_rmw();         |
> +      |   b = qatomic_read(&y);        |
> +      +--------------------------------+

Correct, surely.

> +/*
> + * SEQ_CST is weaker than the older __sync_* builtins and Linux
> + * kernel read-modify-write atomics.  Provide a macro to obtain
> + * the same semantics.
> + */
> +#if !defined(QEMU_SANITIZE_THREAD) && \
> +    (defined(__i386__) || defined(__x86_64__) || defined(__s390x__))
> +# define smp_mb__before_rmw() signal_barrier()
> +# define smp_mb__after_rmw() signal_barrier()
> +#else
> +# define smp_mb__before_rmw() smp_mb()
> +# define smp_mb__after_rmw() smp_mb()
> +#endif

I notice you never actually use smp_mb__before_rmw(), but I suppose since we're trying to
mirror smp_mb__before_atomic(), we should keep the interface whole.

Other than the "incorrect" above,
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~
Paolo Bonzini March 5, 2023, 9 p.m. UTC | #2
Il dom 5 mar 2023, 19:57 Richard Henderson <richard.henderson@linaro.org>
ha scritto:

> On 3/3/23 09:19, Paolo Bonzini wrote:
> > This nasty difference between Linux and C11 read-modify-write operations
> > has already caused issues in util/async.c and more are being found.
> > Provide something similar to Linux smp_mb__before/after_atomic(); this
> > has the double function of documenting clearly why there is a memory
> > barrier, and avoiding a double barrier on x86 and s390x systems.
>
> It does make me question our choice to use neither the Linux nor the C11
> model.
>

We do use the C11 model. However, C11's "just use seqcst everywhere" is not
suitable for defining higher-level primitives, including those that are part
of the C11 or pthreads specifications.

For example, C11 implementations (such as glibc) have the same issue as
QEMU in their implementations of e.g. pthread_cond_wait or pthread_join.
The primitive itself could in principle be implemented using seqcst
atomics, but it has to assume that the caller uses not even relaxed
atomics, but just plain old non-atomic accesses.

If instead you mean using acqrel memory ordering rather than seqcst for the
RMW operations: honestly, I think I understand acq and rel pretty well at
this point, but I don't understand acqrel well enough to be able to use it.

Paolo


David Hildenbrand March 6, 2023, 1:21 p.m. UTC | #3
On 03.03.23 18:19, Paolo Bonzini wrote:
> [...]
> This nasty difference between Linux and C11 read-modify-write operations
> has already caused issues in util/async.c and more are being found.
> Provide something similar to Linux smp_mb__before/after_atomic(); this
> has the double function of documenting clearly why there is a memory
> barrier, and avoiding a double barrier on x86 and s390x systems.
> 

Right, just like smp_mb__before_atomic()/smp_mb__after_atomic().


> [...]
> +      +--------------------------------+
> +      | QEMU (incorrect)               |

Just double-checking: shouldn't this be "QEMU (correct)" ?

Or am I confused? :)
David Hildenbrand March 6, 2023, 1:22 p.m. UTC | #4
On 06.03.23 14:21, David Hildenbrand wrote:
> On 03.03.23 18:19, Paolo Bonzini wrote:
>> [...]
>> +
>> +      +--------------------------------+
>> +      | QEMU (incorrect)               |
> 
> Just double-checking: shouldn't this be "QEMU (correct)" ?
> 
> Or am I confused? :)
> 

Oh, noticed Richard also pointed that out. With that fixed LGTM

Reviewed-by: David Hildenbrand <david@redhat.com>

Patch

diff --git a/docs/devel/atomics.rst b/docs/devel/atomics.rst
index 7957310071d9..898f5393c07a 100644
--- a/docs/devel/atomics.rst
+++ b/docs/devel/atomics.rst
@@ -27,7 +27,8 @@  provides macros that fall in three camps:
 
 - weak atomic access and manual memory barriers: ``qatomic_read()``,
   ``qatomic_set()``, ``smp_rmb()``, ``smp_wmb()``, ``smp_mb()``,
-  ``smp_mb_acquire()``, ``smp_mb_release()``, ``smp_read_barrier_depends()``;
+  ``smp_mb_acquire()``, ``smp_mb_release()``, ``smp_read_barrier_depends()``,
+  ``smp_mb__before_rmw()``, ``smp_mb__after_rmw()``;
 
 - sequentially consistent atomic access: everything else.
 
@@ -472,7 +473,7 @@  and memory barriers, and the equivalents in QEMU:
   sequential consistency.
 
 - in QEMU, ``qatomic_read()`` and ``qatomic_set()`` do not participate in
-  the total ordering enforced by sequentially-consistent operations.
+  the ordering enforced by read-modify-write operations.
   This is because QEMU uses the C11 memory model.  The following example
   is correct in Linux but not in QEMU:
 
@@ -488,9 +489,24 @@  and memory barriers, and the equivalents in QEMU:
   because the read of ``y`` can be moved (by either the processor or the
   compiler) before the write of ``x``.
 
-  Fixing this requires an ``smp_mb()`` memory barrier between the write
-  of ``x`` and the read of ``y``.  In the common case where only one thread
-  writes ``x``, it is also possible to write it like this:
+  Fixing this requires a full memory barrier between the write of ``x`` and
+  the read of ``y``.  QEMU provides ``smp_mb__before_rmw()`` and
+  ``smp_mb__after_rmw()``; they act both as an optimization,
+  avoiding the memory barrier on processors where it is unnecessary,
+  and as a clarification of this corner case of the C11 memory model:
+
+      +--------------------------------+
+      | QEMU (incorrect)               |
+      +================================+
+      | ::                             |
+      |                                |
+      |   a = qatomic_fetch_add(&x, 2);|
+      |   smp_mb__after_rmw();         |
+      |   b = qatomic_read(&y);        |
+      +--------------------------------+
+
+  In the common case where only one thread writes ``x``, it is also possible
+  to write it like this:
 
       +--------------------------------+
       | QEMU (correct)                 |
diff --git a/include/qemu/atomic.h b/include/qemu/atomic.h
index 874134fd19ae..f85834ee8b20 100644
--- a/include/qemu/atomic.h
+++ b/include/qemu/atomic.h
@@ -245,6 +245,20 @@ 
 #define smp_wmb()   smp_mb_release()
 #define smp_rmb()   smp_mb_acquire()
 
+/*
+ * SEQ_CST is weaker than the older __sync_* builtins and Linux
+ * kernel read-modify-write atomics.  Provide a macro to obtain
+ * the same semantics.
+ */
+#if !defined(QEMU_SANITIZE_THREAD) && \
+    (defined(__i386__) || defined(__x86_64__) || defined(__s390x__))
+# define smp_mb__before_rmw() signal_barrier()
+# define smp_mb__after_rmw() signal_barrier()
+#else
+# define smp_mb__before_rmw() smp_mb()
+# define smp_mb__after_rmw() smp_mb()
+#endif
+
 /* qatomic_mb_read/set semantics map Java volatile variables. They are
  * less expensive on some platforms (notably POWER) than fully
  * sequentially consistent operations.
@@ -259,7 +273,8 @@ 
 #if !defined(QEMU_SANITIZE_THREAD) && \
     (defined(__i386__) || defined(__x86_64__) || defined(__s390x__))
 /* This is more efficient than a store plus a fence.  */
-# define qatomic_mb_set(ptr, i)  ((void)qatomic_xchg(ptr, i))
+# define qatomic_mb_set(ptr, i) \
+    ({ (void)qatomic_xchg(ptr, i); smp_mb__after_rmw(); })
 #else
 # define qatomic_mb_set(ptr, i) \
    ({ qatomic_store_release(ptr, i); smp_mb(); })