Patchwork [PATCH-RFC,1/3] qemu: add barriers.h header

Submitter Michael S. Tsirkin
Date Dec. 8, 2009, 4:18 p.m.
Message ID <20091208161825.GB32188@redhat.com>
Permalink /patch/40642/
State New

With qemu-kvm and with iothread, the guest might read in-memory structures
on one host CPU while they are being updated on another one, and vice
versa.  Therefore we must use memory barriers to ensure proper memory
read/write ordering.

The following implements memory barriers with an API matching
that of the Linux kernel, and an implementation based on
a header from the libibverbs InfiniBand library
(which happens to be pretty clean and which I am familiar with).

Since that library is dual-licensed GPLv2/BSD, I think it's nice to keep the
BSD license option around just for this header, so that the original can
borrow back from us.

It seems that work is ongoing to implement a portable atomics
library.  No option seems to be mature at the moment,
but if one becomes available, we'll be able to use it
and replace this header in qemu.  All of the upcoming
variants seem to be API-compatible with Linux,
so the switch will be easy.

Barriers aren't currently required with tcg, because tcg does not
implement atomics and so runs the iothread in lockstep
with the tcg thread.  However, it's better to keep things simple
and insert barriers unconditionally.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 hw/barriers.h |  131 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 131 insertions(+), 0 deletions(-)
 create mode 100644 hw/barriers.h

Patch

diff --git a/hw/barriers.h b/hw/barriers.h
new file mode 100644
index 0000000..e1dbb70
--- /dev/null
+++ b/hw/barriers.h
@@ -0,0 +1,131 @@
+/*
+ * Copyright (c) 2005 Topspin Communications.  All rights reserved.
+ * Copyright (c) 2009 Red Hat Inc.  All rights reserved.
+ *
+ * Author: Michael S. Tsirkin <mst@redhat.com>
+ * Based on libibverbs library by Roland Dreier <rolandd@cisco.com>
+ *
+ * Note: Please be nice and keep the BSD license option on this file around, so
+ * that libibverbs library can reuse this code if it wants to.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and/or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef QEMU_BARRIERS_H
+#define QEMU_BARRIERS_H
+
+/* Compiler barrier. Prevents compiler from re-ordering
+ * memory operations, but does not prevent CPU from doing this. */
+#define barrier() asm volatile("" ::: "memory")
+
+/*
+ * Host-architecture-specific defines.  Currently, an architecture is
+ * required to implement the following operations:
+ *
+ * mb() - memory barrier.  No loads or stores may be reordered across
+ *     this macro by either the compiler or the CPU.
+ * rmb() - read memory barrier.  No loads may be reordered across this
+ *     macro by either the compiler or the CPU.
+ * wmb() - write memory barrier.  No stores may be reordered across
+ *     this macro by either the compiler or the CPU.
+ */
+
+#if defined(__i386__)
+
+/* We use the GCC builtin if it's available, as that can use
+ * mfence on 32 bit as well, e.g. if built with -march=pentium-m.
+ * However, on i386, there seem to be known bugs as recently as 4.3. */
+#if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 4))
+#define mb() __sync_synchronize()
+#else
+#define mb() asm volatile("lock; addl $0,0(%%esp)" ::: "memory")
+#endif
+#define rmb() mb()
+/*
+ * We use a compiler barrier for wmb() because we don't care about
+ * ordering against non-temporal/string stores (for now at least).
+ */
+#define wmb() barrier()
+
+#elif defined(__x86_64__)
+
+/*
+ * We use a compiler barrier for wmb() because we don't care about
+ * ordering against non-temporal/string stores (for now at least).
+ */
+#define mb() asm volatile("mfence" ::: "memory")
+#define rmb() asm volatile("lfence" ::: "memory")
+#define wmb() barrier()
+
+#elif defined(__PPC64__)
+
+#define mb() asm volatile("sync" ::: "memory")
+#define rmb() asm volatile("lwsync" ::: "memory")
+#define wmb() mb()
+
+#elif defined(__ia64__)
+
+#define mb() asm volatile("mf" ::: "memory")
+#define rmb() mb()
+#define wmb() mb()
+
+#elif defined(__PPC__)
+
+#define mb() asm volatile("sync" ::: "memory")
+#define rmb() mb()
+#define wmb() asm volatile("eieio" ::: "memory")
+
+#elif defined(__sparc_v9__)
+
+#define mb() asm volatile("membar #LoadLoad | #LoadStore | " \
+                          "#StoreStore | #StoreLoad" ::: "memory")
+#define rmb() asm volatile("membar #LoadLoad" ::: "memory")
+#define wmb() asm volatile("membar #StoreStore" ::: "memory")
+
+#elif defined(__sparc__)
+
+#define mb() barrier()
+#define rmb() mb()
+#define wmb() mb()
+
+#else
+
+#warning No architecture specific defines found.  Using generic implementation.
+
+/* Use the GCC builtin if it's available. */
+#if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))
+#define mb() __sync_synchronize()
+#else
+#define mb() barrier()
+#endif
+#define rmb() mb()
+#define wmb() mb()
+
+#endif
+#endif /* QEMU_BARRIERS_H */