
[RFC,v1,03/11] tcg: comment on which functions have to be called with tb_lock held

Message ID: 1458317932-1875-4-git-send-email-alex.bennee@linaro.org
State: New

Commit Message

Alex Bennée March 18, 2016, 4:18 p.m. UTC
From: Paolo Bonzini <pbonzini@redhat.com>

softmmu requires more functions to be thread-safe, because translation
blocks can be invalidated from e.g. notdirty callbacks.  The same
probably holds for user-mode emulation; it is just that nobody has ever
tried to produce a coherent locking scheme there.

This patch will guide the introduction of more tb_lock and tb_unlock
calls for system emulation.

Note that after this patch some (most) of the mentioned functions are
still called outside tb_lock/tb_unlock.  The next one will rectify this.
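
As an illustration, the convention is the one already visible in the
cpu_restore_state hunk below: callers bracket the tb_* calls with
tb_lock()/tb_unlock().  A minimal sketch, mirroring that hunk:

    tb_lock();
    tb_phys_invalidate(tb, -1);
    tb_free(tb);
    tb_unlock();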

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>

---
v1(ajb):
  - just s-o-b
---
 exec.c                  |  1 +
 include/exec/exec-all.h |  1 +
 include/qom/cpu.h       |  3 +++
 tcg/tcg.h               |  2 ++
 translate-all.c         | 38 ++++++++++++++++++++++++++++++--------
 5 files changed, 37 insertions(+), 8 deletions(-)

Comments

Paolo Bonzini March 18, 2016, 4:59 p.m. UTC | #1
On 18/03/2016 17:18, Alex Bennée wrote:
> +
> +    /* Protected by tb_lock.  */

Only writes are protected by tb_lock.  Reads happen outside the lock.

Reads are not quite thread-safe yet, because of tb_flush.  To fix
that, we could either use the async_safe_run() mechanism from Fred or,
preferably, move the code generation buffer under RCU.

Because tb_flush is really rare, my suggestion is simply to allocate two
code generation buffers and do something like

static int which_buffer_is_in_use_bit_mask = 1;
...

   /* in tb_flush */
   assert(which_buffer_is_in_use_bit_mask != 3);
   if (which_buffer_is_in_use_bit_mask == 1) {
       which_buffer_is_in_use_bit_mask |= 2;
       call_rcu(/* function doing which_buffer_is_in_use_bit_mask &= ~1 */);
       /* point TCG to the second buffer */
   } else if (which_buffer_is_in_use_bit_mask == 2) {
       which_buffer_is_in_use_bit_mask |= 1;
       call_rcu(/* function doing which_buffer_is_in_use_bit_mask &= ~2 */);
       /* point TCG to the first buffer */
   }

Basically, we just assert that call_rcu makes at least one pass between
two tb_flushes.

All this is also a prerequisite for patch 1.
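
For concreteness, here is one way the sketch could be fleshed out with
QEMU's call_rcu1() from include/qemu/rcu.h; tcg_use_buffer() is a
hypothetical helper that repoints tcg_ctx at the chosen buffer, and
tb_flush_flip_buffer() is assumed to run with tb_lock held:

    #include "qemu/osdep.h"      /* assert() */
    #include "qemu/rcu.h"        /* struct rcu_head, call_rcu1() */

    static int which_buffer_is_in_use_bit_mask = 1;
    static struct rcu_head buffer_flip_rcu;

    static void release_first_buffer(struct rcu_head *head)
    {
        /* Runs once all pre-existing RCU readers are done, so nobody
         * can still be executing code from the first buffer. */
        which_buffer_is_in_use_bit_mask &= ~1;
    }

    static void release_second_buffer(struct rcu_head *head)
    {
        which_buffer_is_in_use_bit_mask &= ~2;
    }

    /* Called from tb_flush with tb_lock held. */
    static void tb_flush_flip_buffer(void)
    {
        /* call_rcu must have completed a pass between two tb_flushes,
         * otherwise both buffers are still in use. */
        assert(which_buffer_is_in_use_bit_mask != 3);
        if (which_buffer_is_in_use_bit_mask == 1) {
            which_buffer_is_in_use_bit_mask |= 2;
            call_rcu1(&buffer_flip_rcu, release_first_buffer);
            tcg_use_buffer(1);   /* hypothetical: point TCG at buffer 1 */
        } else {
            which_buffer_is_in_use_bit_mask |= 1;
            call_rcu1(&buffer_flip_rcu, release_second_buffer);
            tcg_use_buffer(0);   /* hypothetical: point TCG at buffer 0 */
        }
    }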

Paolo

>      struct TranslationBlock *tb_jmp_cache[TB_JMP_CACHE_SIZE];
> +
Emilio Cota March 21, 2016, 9:50 p.m. UTC | #2
On Fri, Mar 18, 2016 at 17:59:46 +0100, Paolo Bonzini wrote:
> On 18/03/2016 17:18, Alex Bennée wrote:
> > +
> > +    /* Protected by tb_lock.  */
> 
> Only writes are protected by tb_lock.  Reads happen outside the lock.
> 
> Reads are not quite thread-safe yet, because of tb_flush.  To fix
> that, we could either use the async_safe_run() mechanism from Fred or,
> preferably, move the code generation buffer under RCU.

A third approach (which I prefer) is to protect tb_jmp_cache with
a seqlock.  That way invalidations (via tlb_flush from other CPUs, or
via tb_flush) are picked up even when they race with concurrent reads.
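
The read side could look roughly like this, assuming QEMU's seqlock.h
read helpers (seqlock_read_begin()/seqlock_read_retry()) and a
hypothetical per-CPU tb_jmp_cache_lock seqlock; writers, already
serialized by tb_lock, would bump the sequence around each update:

    /* Lock-free lookup in tb_jmp_cache -- a sketch, not actual code. */
    static TranslationBlock *tb_jmp_cache_lookup(CPUState *cpu,
                                                 target_ulong pc)
    {
        TranslationBlock *tb;
        unsigned seq;

        do {
            seq = seqlock_read_begin(&cpu->tb_jmp_cache_lock);
            tb = cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)];
        } while (seqlock_read_retry(&cpu->tb_jmp_cache_lock, seq));
        return tb;
    }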

> Because tb_flush is really rare, my suggestion is simply to allocate two
> code generation buffers and do something like
> 
> static int which_buffer_is_in_use_bit_mask = 1;
> ...
> 
>    /* in tb_flush */
>    assert(which_buffer_is_in_use_bit_mask != 3);
>    if (which_buffer_is_in_use_bit_mask == 1) {
>        which_buffer_is_in_use_bit_mask |= 2;
>        call_rcu(/* function doing which_buffer_is_in_use_bit_mask &= ~1 */);
>        /* point TCG to the second buffer */
>    } else if (which_buffer_is_in_use_bit_mask == 2) {
>        which_buffer_is_in_use_bit_mask |= 1;
>        call_rcu(/* function doing which_buffer_is_in_use_bit_mask &= ~2 */);
>        /* point TCG to the first buffer */
>    }
> 
> Basically, we just assert that call_rcu makes at least one pass between
> two tb_flushes.
> 
> All this is also a prerequisite for patch 1.

The problem with this approach is that the "point TCG to second buffer"
step is not just a question of pointing code_gen_buffer at a new address;
we'd have to create a new tcg_ctx struct, since tcg_ctx has quite a few
fields that depend on code_gen_buffer (e.g. s->code_ptr, s->code_buf).
And that could leave readers seeing a partially updated (i.e. corrupt)
tcg_ctx.

I know you're not enthusiastic about it, but I think a mechanism to "stop
all CPUs and wait until they have indeed stopped" is in this case justified.
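
The generic shape of such a mechanism (plain pthreads here, not QEMU's
actual API; ncpus and the safe-point placement are assumptions):

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t pause_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t all_stopped = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t resume = PTHREAD_COND_INITIALIZER;
    static bool stop_requested;
    static int ncpus, ncpus_stopped;

    /* Each vCPU thread calls this at a safe point, e.g. between TBs;
     * vCPUs stuck in a loop of chained TBs would also need a kick. */
    static void cpu_pause_point(void)
    {
        pthread_mutex_lock(&pause_lock);
        if (stop_requested) {
            if (++ncpus_stopped == ncpus) {
                pthread_cond_signal(&all_stopped);  /* last one to park */
            }
            while (stop_requested) {
                pthread_cond_wait(&resume, &pause_lock);
            }
            ncpus_stopped--;
        }
        pthread_mutex_unlock(&pause_lock);
    }

    /* The requesting thread: stop everyone, do the work, resume. */
    static void run_with_all_cpus_stopped(void (*work)(void))
    {
        pthread_mutex_lock(&pause_lock);
        stop_requested = true;
        while (ncpus_stopped < ncpus) {
            pthread_cond_wait(&all_stopped, &pause_lock);
        }
        work();                  /* e.g. the actual tb_flush */
        stop_requested = false;
        pthread_cond_broadcast(&resume);
        pthread_mutex_unlock(&pause_lock);
    }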

I'm preparing an RFC with these two changes (seqlock and stop-all-CPUs
mechanism) on top of these base patches.

Thanks,

		Emilio
Paolo Bonzini March 21, 2016, 10:12 p.m. UTC | #3
On 21/03/2016 22:50, Emilio G. Cota wrote:
> The problem with this approach is that the "point TCG to second buffer"
> step is not just a question of pointing code_gen_buffer at a new address;
> we'd have to create a new tcg_ctx struct, since tcg_ctx has quite a few
> fields that depend on code_gen_buffer (e.g. s->code_ptr, s->code_buf).

Are these (or other fields similarly dependent on code_gen_buffer) ever
read outside tb_lock?  A quick "git grep -wl" suggests that they are
only used from tcg/, which should only run while tb_lock is held.

If not, it would be enough to call tcg_prologue_init from tb_flush.
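
That is, roughly (next_code_gen_buffer and its size are assumed names
for the spare buffer):

    /* in tb_flush, with tb_lock held -- sketch only */
    tcg_ctx.code_gen_buffer = next_code_gen_buffer;
    tcg_ctx.code_gen_buffer_size = next_code_gen_buffer_size;
    tcg_prologue_init(&tcg_ctx);   /* should re-derive code_buf/code_ptr
                                      for the new buffer */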

Paolo

Patch

diff --git a/exec.c b/exec.c
index f09dd4e..4f0e5ed 100644
--- a/exec.c
+++ b/exec.c
@@ -825,6 +825,7 @@  int cpu_breakpoint_insert(CPUState *cpu, vaddr pc, int flags,
 {
     CPUBreakpoint *bp;
 
+    /* TODO: locking (RCU?) */
     bp = g_malloc(sizeof(*bp));
 
     bp->pc = pc;
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index 0ef6ea5..c41d680 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -372,6 +372,7 @@  static inline void tb_set_jmp_target(TranslationBlock *tb,
 
 #endif
 
+/* Called with tb_lock held.  */
 static inline void tb_add_jump(TranslationBlock *tb, int n,
                                TranslationBlock *tb_next)
 {
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 4132108..9124b6d 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -310,7 +310,10 @@  struct CPUState {
 
     void *env_ptr; /* CPUArchState */
     struct TranslationBlock *current_tb;
+
+    /* Protected by tb_lock.  */
     struct TranslationBlock *tb_jmp_cache[TB_JMP_CACHE_SIZE];
+
     struct GDBRegisterState *gdb_regs;
     int gdb_num_regs;
     int gdb_num_g_regs;
diff --git a/tcg/tcg.h b/tcg/tcg.h
index aa4e123..cd7cde7 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -609,6 +609,7 @@  static inline bool tcg_op_buf_full(void)
 
 /* pool based memory allocation */
 
+/* tb_lock must be held for tcg_malloc_internal. */
 void *tcg_malloc_internal(TCGContext *s, int size);
 void tcg_pool_reset(TCGContext *s);
 void tcg_pool_delete(TCGContext *s);
@@ -618,6 +619,7 @@  void tb_unlock(void);
 bool tb_lock_recursive(void);
 void tb_lock_reset(void);
 
+/* Called with tb_lock held.  */
 static inline void *tcg_malloc(int size)
 {
     TCGContext *s = &tcg_ctx;
diff --git a/translate-all.c b/translate-all.c
index f68dcbc..1a02450 100644
--- a/translate-all.c
+++ b/translate-all.c
@@ -260,7 +260,9 @@  static int encode_search(TranslationBlock *tb, uint8_t *block)
     return p - block;
 }
 
-/* The cpu state corresponding to 'searched_pc' is restored.  */
+/* The cpu state corresponding to 'searched_pc' is restored.
+ * Called with tb_lock held.
+ */
 static int cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
                                      uintptr_t searched_pc)
 {
@@ -318,8 +320,10 @@  bool cpu_restore_state(CPUState *cpu, uintptr_t retaddr)
         if (tb->cflags & CF_NOCACHE) {
             /* one-shot translation, invalidate it immediately */
             cpu->current_tb = NULL;
+            tb_lock();
             tb_phys_invalidate(tb, -1);
             tb_free(tb);
+            tb_unlock();
         }
         return true;
     }
@@ -411,6 +415,7 @@  static void page_init(void)
 }
 
 /* If alloc=1:
+ * Called with tb_lock held for system emulation.
  * Called with mmap_lock held for user-mode emulation.
  */
 static PageDesc *page_find_alloc(tb_page_addr_t index, int alloc)
@@ -766,8 +771,12 @@  bool tcg_enabled(void)
     return tcg_ctx.code_gen_buffer != NULL;
 }
 
-/* Allocate a new translation block. Flush the translation buffer if
-   too many translation blocks or too much generated code. */
+/*
+ * Allocate a new translation block. Flush the translation buffer if
+ * too many translation blocks or too much generated code.
+ *
+ * Called with tb_lock held.
+ */
 static TranslationBlock *tb_alloc(target_ulong pc)
 {
     TranslationBlock *tb;
@@ -781,6 +790,7 @@  static TranslationBlock *tb_alloc(target_ulong pc)
     return tb;
 }
 
+/* Called with tb_lock held.  */
 void tb_free(TranslationBlock *tb)
 {
     /* In practice this is mostly used for single use temporary TB
@@ -888,7 +898,10 @@  static void tb_invalidate_check(target_ulong address)
     }
 }
 
-/* verify that all the pages have correct rights for code */
+/* verify that all the pages have correct rights for code
+ *
+ * Called with tb_lock held.
+ */
 static void tb_page_check(void)
 {
     TranslationBlock *tb;
@@ -976,7 +989,10 @@  static inline void tb_jmp_remove(TranslationBlock *tb, int n)
     }
 }
 
-/* invalidate one TB */
+/* invalidate one TB
+ *
+ * Called with tb_lock held.
+ */
 void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
 {
     PageDesc *p;
@@ -1067,7 +1083,7 @@  static void build_page_bitmap(PageDesc *p)
 }
 #endif
 
-/* Called with mmap_lock held for user mode emulation.  */
+/* Called with tb_lock held, and mmap_lock too for user mode emulation.  */
 TranslationBlock *tb_gen_code(CPUState *cpu,
                               target_ulong pc, target_ulong cs_base,
                               int flags, int cflags)
@@ -1333,7 +1349,9 @@  void tb_invalidate_phys_page_fast(tb_page_addr_t start, int len)
     }
     if (!p->code_bitmap &&
         ++p->code_write_count >= SMC_BITMAP_USE_THRESHOLD) {
-        /* build code bitmap */
+        /* build code bitmap.  FIXME: writes should be protected by
+         * tb_lock, reads by tb_lock or RCU.
+         */
         build_page_bitmap(p);
     }
     if (p->code_bitmap) {
@@ -1423,6 +1441,7 @@  static void tb_invalidate_phys_page(tb_page_addr_t addr,
 
 /* add the tb in the target page and protect it if necessary
  *
+ * Called with tb_lock held.
  * Called with mmap_lock held for user-mode emulation.
  */
 static inline void tb_alloc_page(TranslationBlock *tb,
@@ -1482,6 +1501,7 @@  static inline void tb_alloc_page(TranslationBlock *tb,
 /* add a new TB and link it to the physical page tables. phys_page2 is
  * (-1) to indicate that only one page contains the TB.
  *
+ * Called with tb_lock held.
  * Called with mmap_lock held for user-mode emulation.
  */
 static void tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
@@ -1528,7 +1548,8 @@  static void tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
 }
 
 /* find the TB 'tb' such that tb[0].tc_ptr <= tc_ptr <
-   tb[1].tc_ptr. Return NULL if not found */
+ * tb[1].tc_ptr. Return NULL if not found
+ */
 static TranslationBlock *tb_find_pc(uintptr_t tc_ptr)
 {
     int m_min, m_max, m;
@@ -1581,6 +1602,7 @@  void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr)
 }
 #endif /* !defined(CONFIG_USER_ONLY) */
 
+/* Called with tb_lock held.  */
 void tb_check_watchpoint(CPUState *cpu)
 {
     TranslationBlock *tb;