Patchwork [v3,2/4] timer: protect timers_state's clock with seqlock

Submitter pingfan liu
Date Aug. 27, 2013, 3:21 a.m.
Message ID <1377573663-16727-3-git-send-email-pingfank@linux.vnet.ibm.com>
Permalink /patch/270012/
State New
Headers show

Comments

pingfan liu - Aug. 27, 2013, 3:21 a.m.
The vm_clock may be read outside the BQL. This exposes timers_state,
the foundation of vm_clock, to race conditions.
Use a private lock to protect it.

Note that in TCG mode, vm_clock is still read inside the BQL, so
icount is left without the private lock's protection. As for
cpu_ticks_* in timers_state, they are still protected by the BQL.

Lock rule: the private lock is innermost, i.e. BQL -> "this lock"

Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
---
 cpus.c | 36 ++++++++++++++++++++++++++++++------
 1 file changed, 30 insertions(+), 6 deletions(-)
Alex Bligh - Aug. 27, 2013, 3:18 p.m.
On 27 Aug 2013, at 04:21, Liu Ping Fan wrote:

> Note in tcg mode, vm_clock still read inside BQL, so icount is

Should refer to QEMU_CLOCK_VIRTUAL if after my patches

> left without private lock's protection. As for cpu_ticks_* in
> timers_state, it is still protected by BQL.

I *think* what you are saying here is that after this patch,
reading QEMU_CLOCK_VIRTUAL is threadsafe unless use_icount
is true, in which case it is not thread safe as existing
callers rely on the BQL.

The commit message could be a bit more specific here, not least
because, if I read it right, that will need fixing before
QEMU_CLOCK_VIRTUAL is used at all in other threads.
pingfan liu - Aug. 27, 2013, 11:59 p.m.
On Tue, Aug 27, 2013 at 11:18 PM, Alex Bligh <alex@alex.org.uk> wrote:
>
> On 27 Aug 2013, at 04:21, Liu Ping Fan wrote:
>
>> Note in tcg mode, vm_clock still read inside BQL, so icount is
>
> Should refer to QEMU_CLOCK_VIRTUAL if after my patches
>
Will change the log.
>> left without private lock's protection. As for cpu_ticks_* in
>> timers_state, it is still protected by BQL.
>
> I *think* what you are saying here is that after this patch,
> reading QEMU_CLOCK_VIRTUAL is threadsafe unless use_icount
> is true, in which case it is not thread safe as existing
> callers rely on the BQL.
>
Yes.
> The commit could be a bit more specific here, not least as
> if I read it right, that will need fixing before
> QEMU_CLOCK_VIRTUAL is used at all in other threads.
>
Yes, this patch should be in place before we use timers on other threads.

Thx,
Pingfan
> --
> Alex Bligh
>
>
>
>

Patch

diff --git a/cpus.c b/cpus.c
index b9e5685..bcead3b 100644
--- a/cpus.c
+++ b/cpus.c
@@ -37,6 +37,7 @@ 
 #include "sysemu/qtest.h"
 #include "qemu/main-loop.h"
 #include "qemu/bitmap.h"
+#include "qemu/seqlock.h"
 
 #ifndef _WIN32
 #include "qemu/compatfd.h"
@@ -112,6 +113,13 @@  static int64_t qemu_icount;
 typedef struct TimersState {
     int64_t cpu_ticks_prev;
     int64_t cpu_ticks_offset;
+    /* cpu_clock_offset can be read outside the BQL, so protect it with this
+     * private lock. cpu_ticks_* are not yet read outside the BQL.
+     * Lock rule: innermost, i.e. BQL -> "this lock".
+     */
+    QemuSeqLock clock_seqlock;
+    /* mutex for seqlock */
+    QemuMutex mutex;
     int64_t cpu_clock_offset;
     int32_t cpu_ticks_enabled;
     int64_t dummy;
@@ -137,6 +145,7 @@  int64_t cpu_get_icount(void)
 }
 
 /* return the host CPU cycle counter and handle stop/restart */
+/* cpu_get_ticks() is safe to call only while holding the BQL */
 int64_t cpu_get_ticks(void)
 {
     if (use_icount) {
@@ -161,33 +170,46 @@  int64_t cpu_get_ticks(void)
 int64_t cpu_get_clock(void)
 {
     int64_t ti;
-    if (!timers_state.cpu_ticks_enabled) {
-        return timers_state.cpu_clock_offset;
-    } else {
-        ti = get_clock();
-        return ti + timers_state.cpu_clock_offset;
-    }
+    unsigned start;
+
+    do {
+        start = seqlock_read_begin(&timers_state.clock_seqlock);
+        if (!timers_state.cpu_ticks_enabled) {
+            ti = timers_state.cpu_clock_offset;
+        } else {
+            ti = get_clock();
+            ti += timers_state.cpu_clock_offset;
+        }
+    } while (seqlock_read_retry(&timers_state.clock_seqlock, start));
+
+    return ti;
 }
 
 /* enable cpu_get_ticks() */
 void cpu_enable_ticks(void)
 {
+    /* What the seqlock really protects here is cpu_clock_offset. */
+    seqlock_write_lock(&timers_state.clock_seqlock);
     if (!timers_state.cpu_ticks_enabled) {
         timers_state.cpu_ticks_offset -= cpu_get_real_ticks();
         timers_state.cpu_clock_offset -= get_clock();
         timers_state.cpu_ticks_enabled = 1;
     }
+    seqlock_write_unlock(&timers_state.clock_seqlock);
 }
 
 /* disable cpu_get_ticks() : the clock is stopped. You must not call
    cpu_get_ticks() after that.  */
 void cpu_disable_ticks(void)
 {
+    /* What the seqlock really protects here is cpu_clock_offset. */
+    seqlock_write_lock(&timers_state.clock_seqlock);
     if (timers_state.cpu_ticks_enabled) {
         timers_state.cpu_ticks_offset = cpu_get_ticks();
         timers_state.cpu_clock_offset = cpu_get_clock();
         timers_state.cpu_ticks_enabled = 0;
     }
+    seqlock_write_unlock(&timers_state.clock_seqlock);
 }
 
 /* Correlation between real and virtual time is always going to be
@@ -371,6 +393,8 @@  static const VMStateDescription vmstate_timers = {
 
 void configure_icount(const char *option)
 {
+    qemu_mutex_init(&timers_state.mutex);
+    seqlock_init(&timers_state.clock_seqlock, &timers_state.mutex);
     vmstate_register(NULL, 0, &vmstate_timers, &timers_state);
     if (!option) {
         return;