
linux-user: wrap fork() in a start/end exclusive section

Message ID 1512650481-1723-1-git-send-email-peter.maydell@linaro.org
State New
Series linux-user: wrap fork() in a start/end exclusive section

Commit Message

Peter Maydell Dec. 7, 2017, 12:41 p.m. UTC
When we do a fork() in usermode emulation, we need to be in
a start/end exclusive section, so that we can ensure that no
other thread is in an RCU section. Otherwise you can get this
deadlock:

- fork thread: has mmap_lock, waits for rcu_sync_lock
  (because rcu_init_lock() is registered as a pthread_atfork() hook)
- RCU thread: has rcu_sync_lock, waits for rcu_read_(un)lock
- another CPU thread: in RCU critical section, waits for mmap_lock

This can show up if you have a heavily multithreaded guest program
that does a fork().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reported-by: Stuart Monteith <stuart.monteith@linaro.org>
---
Based-on: <1512397331-15238-1-git-send-email-peter.maydell@linaro.org>
(this applies on top of 'linux-user: Fix locking order in fork_start()')

I think this should fix the deadlock that Stuart reports, but I
can't reproduce it, so testing welcome.

 linux-user/main.c | 5 +++++
 1 file changed, 5 insertions(+)

Comments

Laurent Vivier Jan. 19, 2018, 3:22 p.m. UTC | #1
On 07/12/2017 at 13:41, Peter Maydell wrote:
> When we do a fork() in usermode emulation, we need to be in
> a start/end exclusive section, so that we can ensure that no
> other thread is in an RCU section. Otherwise you can get this
> deadlock:
> 
> - fork thread: has mmap_lock, waits for rcu_sync_lock
>   (because rcu_init_lock() is registered as a pthread_atfork() hook)
> - RCU thread: has rcu_sync_lock, waits for rcu_read_(un)lock
> - another CPU thread: in RCU critical section, waits for mmap_lock
> 
> This can show up if you have a heavily multithreaded guest program
> that does a fork().
> 
> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
> Reported-by: Stuart Monteith <stuart.monteith@linaro.org>
> ---
> Based-on: <1512397331-15238-1-git-send-email-peter.maydell@linaro.org>
> (this applies on top of 'linux-user: Fix locking order in fork_start()')
> 
> I think this should fix the deadlock that Stuart reports, but I
> can't reproduce it, so testing welcome.
> 
>  linux-user/main.c | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/linux-user/main.c b/linux-user/main.c
> index 146ee3e..ff116fe 100644
> --- a/linux-user/main.c
> +++ b/linux-user/main.c
> @@ -128,6 +128,7 @@ int cpu_get_pic_interrupt(CPUX86State *env)
>  /* Make sure everything is in a consistent state for calling fork().  */
>  void fork_start(void)
>  {
> +    start_exclusive();
>      mmap_fork_start();
>      qemu_mutex_lock(&tb_ctx.tb_lock);
>      cpu_list_lock();
> @@ -148,9 +149,13 @@ void fork_end(int child)
>          qemu_mutex_init(&tb_ctx.tb_lock);
>          qemu_init_cpu_list();
>          gdbserver_fork(thread_cpu);
> +        /* qemu_init_cpu_list() takes care of reinitializing the
> +         * exclusive state, so we don't need to end_exclusive() here.
> +         */
>      } else {
>          qemu_mutex_unlock(&tb_ctx.tb_lock);
>          cpu_list_unlock();
> +        end_exclusive();
>      }
>  }
>  
> 

Applied to my linux-user branch.

Thanks,
Laurent

Patch

diff --git a/linux-user/main.c b/linux-user/main.c
index 146ee3e..ff116fe 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -128,6 +128,7 @@ int cpu_get_pic_interrupt(CPUX86State *env)
 /* Make sure everything is in a consistent state for calling fork().  */
 void fork_start(void)
 {
+    start_exclusive();
     mmap_fork_start();
     qemu_mutex_lock(&tb_ctx.tb_lock);
     cpu_list_lock();
@@ -148,9 +149,13 @@ void fork_end(int child)
         qemu_mutex_init(&tb_ctx.tb_lock);
         qemu_init_cpu_list();
         gdbserver_fork(thread_cpu);
+        /* qemu_init_cpu_list() takes care of reinitializing the
+         * exclusive state, so we don't need to end_exclusive() here.
+         */
     } else {
         qemu_mutex_unlock(&tb_ctx.tb_lock);
         cpu_list_unlock();
+        end_exclusive();
     }
 }