
atomics: add volatile_read/volatile_set

Message ID 1468851450-9863-1-git-send-email-pbonzini@redhat.com
State New

Commit Message

Paolo Bonzini July 18, 2016, 2:17 p.m. UTC
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 docs/atomics.txt      | 19 ++++++++++++++++---
 include/qemu/atomic.h | 17 +++++++++++++++++
 2 files changed, 33 insertions(+), 3 deletions(-)

Comments

Sergey Fedorov July 18, 2016, 4:52 p.m. UTC | #1
So how are we going to use them?

Thanks,
Sergey

On 18/07/16 17:17, Paolo Bonzini wrote:
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  docs/atomics.txt      | 19 ++++++++++++++++---
>  include/qemu/atomic.h | 17 +++++++++++++++++
>  2 files changed, 33 insertions(+), 3 deletions(-)

Patch

diff --git a/docs/atomics.txt b/docs/atomics.txt
index c95950b..1f21d2e 100644
--- a/docs/atomics.txt
+++ b/docs/atomics.txt
@@ -123,6 +123,14 @@ to do so, because it tells readers which variables are shared with
 other threads, and which are local to the current thread or protected
 by other, more mundane means.
 
+atomic_read() and atomic_set() only support accesses as large as a
+pointer.  If you need to access variables larger than a pointer you
+can use volatile_read() and volatile_set(), but be careful: these always
+use volatile accesses, and 64-bit volatile accesses are not atomic on
+several 32-bit processors such as ARMv7.  In other words, volatile_read
+and volatile_set only provide "safe register" semantics when applied to
+64-bit variables.
+
 Memory barriers control the order of references to shared memory.
 They come in four kinds:
 
@@ -335,11 +343,16 @@ and memory barriers, and the equivalents in QEMU:
   Both semantics prevent the compiler from doing certain transformations;
   the difference is that atomic accesses are guaranteed to be atomic,
   while volatile accesses aren't. Thus, in the volatile case we just cross
-  our fingers hoping that the compiler will generate atomic accesses,
-  since we assume the variables passed are machine-word sized and
-  properly aligned.
+  our fingers hoping that the compiler and processor will provide atomic
+  accesses, since we assume the variables passed are machine-word sized
+  and properly aligned.
+
   No barriers are implied by atomic_read/set in either Linux or QEMU.
 
+- volatile_read and volatile_set are equivalent to ACCESS_ONCE in Linux.
+  No barriers are implied by volatile_read/set in QEMU, nor by
+  ACCESS_ONCE in Linux.
+
 - atomic read-modify-write operations in Linux are of three kinds:
 
          atomic_OP          returns void
diff --git a/include/qemu/atomic.h b/include/qemu/atomic.h
index 7e13fca..8409bdb 100644
--- a/include/qemu/atomic.h
+++ b/include/qemu/atomic.h
@@ -18,6 +18,12 @@ 
 /* Compiler barrier */
 #define barrier()   ({ asm volatile("" ::: "memory"); (void)0; })
 
+/* These will only be atomic if the processor does the fetch or store
+ * in a single issue memory operation
+ */
+#define volatile_read(ptr)       (*(__typeof__(*ptr) volatile*) (ptr))
+#define volatile_set(ptr, i)     ((*(__typeof__(*ptr) volatile*) (ptr)) = (i))
+
 #ifdef __ATOMIC_RELAXED
 /* For C11 atomic ops */
 
@@ -260,6 +266,17 @@ 
  */
 #define atomic_read(ptr)       (*(__typeof__(*ptr) volatile*) (ptr))
 #define atomic_set(ptr, i)     ((*(__typeof__(*ptr) volatile*) (ptr)) = (i))
+#define atomic_read(ptr)                              \
+    ({                                                \
+    QEMU_BUILD_BUG_ON(sizeof(*ptr) > sizeof(void *)); \
+    volatile_read(ptr);                               \
+    })
+
+#define atomic_set(ptr, i)  do {                      \
+    QEMU_BUILD_BUG_ON(sizeof(*ptr) > sizeof(void *)); \
+    volatile_set(ptr, i);                             \
+} while(0)
+
 
 /**
  * atomic_rcu_read - reads a RCU-protected pointer to a local variable
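
Usage sketch (not part of the patch): the two macros below are copied from the
include/qemu/atomic.h hunk above, while the bytes_transferred counter, the
writer thread and main() are hypothetical, added only to make the
docs/atomics.txt caveat concrete.  The volatile access stops the compiler from
caching the value or merging and eliminating accesses, but it does not make the
access atomic: on a 32-bit host such as ARMv7 a 64-bit load or store is split,
so a concurrent reader gets only "safe register" semantics and can observe a
torn value.

    #include <inttypes.h>
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Copied from the patch: plain volatile accesses, atomic only if the
     * processor performs the load or store as a single memory operation. */
    #define volatile_read(ptr)       (*(__typeof__(*ptr) volatile*) (ptr))
    #define volatile_set(ptr, i)     ((*(__typeof__(*ptr) volatile*) (ptr)) = (i))

    /* Hypothetical 64-bit statistic: larger than a pointer on 32-bit hosts,
     * so the patched atomic_read()/atomic_set() would reject it at build time
     * via QEMU_BUILD_BUG_ON there. */
    static uint64_t bytes_transferred;

    static void *writer(void *opaque)
    {
        (void)opaque;
        for (uint64_t i = 0; i < 1000000; i++) {
            /* Store is not cached or folded away by the compiler. */
            volatile_set(&bytes_transferred, i);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, writer, NULL);

        /* One read per source-level access; may still tear on 32-bit CPUs. */
        uint64_t snapshot = volatile_read(&bytes_transferred);
        printf("bytes so far: %" PRIu64 "\n", snapshot);

        pthread_join(t, NULL);
        return 0;
    }

This also illustrates why the patch adds the QEMU_BUILD_BUG_ON size check to
atomic_read()/atomic_set(): a 64-bit field like this would silently compile
with the old macros on a 32-bit host, whereas the patched macros fail the
build and force the caller to opt in explicitly to the weaker
volatile_read()/volatile_set().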