From patchwork Sun Jun 19 22:28:31 2016
X-Patchwork-Submitter: sergey.fedorov@linaro.org
X-Patchwork-Id: 637757
From: Sergey Fedorov
To: qemu-devel@nongnu.org
Date: Mon, 20 Jun 2016 01:28:31 +0300
Message-Id: <1466375313-7562-7-git-send-email-sergey.fedorov@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1466375313-7562-1-git-send-email-sergey.fedorov@linaro.org>
References: <1466375313-7562-1-git-send-email-sergey.fedorov@linaro.org>
Subject: [Qemu-devel] [RFC 6/8] linux-user: Support CPU work queue
Cc: Riku Voipio, Sergey Fedorov, Peter Crosthwaite, patches@linaro.org,
 Paolo Bonzini, Sergey Fedorov, Richard Henderson

From: Sergey Fedorov

Make CPU
work core functions common between system and user-mode emulation. User-mode
does not have BQL, so flush_queued_work() is protected by 'exclusive_lock'.

Signed-off-by: Sergey Fedorov
Signed-off-by: Sergey Fedorov
---
 cpu-exec-common.c       | 83 ++++++++++++++++++++++++++++++++++++++++++++++
 cpus.c                  | 87 ++-----------------------------------------------
 include/exec/exec-all.h |  4 +++
 linux-user/main.c       | 13 ++++++++
 4 files changed, 102 insertions(+), 85 deletions(-)

diff --git a/cpu-exec-common.c b/cpu-exec-common.c
index 0cb4ae60eff9..8184e0662cbd 100644
--- a/cpu-exec-common.c
+++ b/cpu-exec-common.c
@@ -77,3 +77,86 @@ void cpu_loop_exit_restore(CPUState *cpu, uintptr_t pc)
     }
     siglongjmp(cpu->jmp_env, 1);
 }
+
+static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
+{
+    qemu_mutex_lock(&cpu->work_mutex);
+    if (cpu->queued_work_first == NULL) {
+        cpu->queued_work_first = wi;
+    } else {
+        cpu->queued_work_last->next = wi;
+    }
+    cpu->queued_work_last = wi;
+    wi->next = NULL;
+    wi->done = false;
+    qemu_mutex_unlock(&cpu->work_mutex);
+
+    qemu_cpu_kick(cpu);
+}
+
+void run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
+{
+    struct qemu_work_item wi;
+
+    if (qemu_cpu_is_self(cpu)) {
+        func(cpu, data);
+        return;
+    }
+
+    wi.func = func;
+    wi.data = data;
+    wi.free = false;
+
+    queue_work_on_cpu(cpu, &wi);
+    while (!atomic_mb_read(&wi.done)) {
+        CPUState *self_cpu = current_cpu;
+
+        wait_cpu_work();
+        current_cpu = self_cpu;
+    }
+}
+
+void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
+{
+    struct qemu_work_item *wi;
+
+    if (qemu_cpu_is_self(cpu)) {
+        func(cpu, data);
+        return;
+    }
+
+    wi = g_malloc0(sizeof(struct qemu_work_item));
+    wi->func = func;
+    wi->data = data;
+    wi->free = true;
+
+    queue_work_on_cpu(cpu, wi);
+}
+
+void flush_queued_work(CPUState *cpu)
+{
+    struct qemu_work_item *wi;
+
+    if (cpu->queued_work_first == NULL) {
+        return;
+    }
+
+    qemu_mutex_lock(&cpu->work_mutex);
+    while (cpu->queued_work_first != NULL) {
+        wi = cpu->queued_work_first;
+        cpu->queued_work_first = wi->next;
+        if (!cpu->queued_work_first) {
+            cpu->queued_work_last = NULL;
+        }
+        qemu_mutex_unlock(&cpu->work_mutex);
+        wi->func(cpu, wi->data);
+        qemu_mutex_lock(&cpu->work_mutex);
+        if (wi->free) {
+            g_free(wi);
+        } else {
+            atomic_mb_set(&wi->done, true);
+        }
+    }
+    qemu_mutex_unlock(&cpu->work_mutex);
+    signal_cpu_work();
+}
diff --git a/cpus.c b/cpus.c
index f123eb707cc6..98f60f6f98f5 100644
--- a/cpus.c
+++ b/cpus.c
@@ -910,71 +910,16 @@ void qemu_init_cpu_loop(void)
     qemu_thread_get_self(&io_thread);
 }
 
-static void wait_cpu_work(void)
+void wait_cpu_work(void)
 {
     qemu_cond_wait(&qemu_work_cond, &qemu_global_mutex);
 }
 
-static void signal_cpu_work(void)
+void signal_cpu_work(void)
 {
     qemu_cond_broadcast(&qemu_work_cond);
 }
 
-static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
-{
-    qemu_mutex_lock(&cpu->work_mutex);
-    if (cpu->queued_work_first == NULL) {
-        cpu->queued_work_first = wi;
-    } else {
-        cpu->queued_work_last->next = wi;
-    }
-    cpu->queued_work_last = wi;
-    wi->next = NULL;
-    wi->done = false;
-    qemu_mutex_unlock(&cpu->work_mutex);
-
-    qemu_cpu_kick(cpu);
-}
-
-void run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
-{
-    struct qemu_work_item wi;
-
-    if (qemu_cpu_is_self(cpu)) {
-        func(cpu, data);
-        return;
-    }
-
-    wi.func = func;
-    wi.data = data;
-    wi.free = false;
-
-    queue_work_on_cpu(cpu, &wi);
-    while (!atomic_mb_read(&wi.done)) {
-        CPUState *self_cpu = current_cpu;
-
-        wait_cpu_work();
-        current_cpu = self_cpu;
-    }
-}
-
-void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
-{
-    struct qemu_work_item *wi;
-
-    if (qemu_cpu_is_self(cpu)) {
-        func(cpu, data);
-        return;
-    }
-
-    wi = g_malloc0(sizeof(struct qemu_work_item));
-    wi->func = func;
-    wi->data = data;
-    wi->free = true;
-
-    queue_work_on_cpu(cpu, wi);
-}
-
 static void qemu_kvm_destroy_vcpu(CPUState *cpu)
 {
     if (kvm_destroy_vcpu(cpu) < 0) {
@@ -987,34 +932,6 @@ static void qemu_tcg_destroy_vcpu(CPUState *cpu)
 {
 }
 
-static void flush_queued_work(CPUState *cpu)
-{
-    struct qemu_work_item *wi;
-
-    if (cpu->queued_work_first == NULL) {
-        return;
-    }
-
-    qemu_mutex_lock(&cpu->work_mutex);
-    while (cpu->queued_work_first != NULL) {
-        wi = cpu->queued_work_first;
-        cpu->queued_work_first = wi->next;
-        if (!cpu->queued_work_first) {
-            cpu->queued_work_last = NULL;
-        }
-        qemu_mutex_unlock(&cpu->work_mutex);
-        wi->func(cpu, wi->data);
-        qemu_mutex_lock(&cpu->work_mutex);
-        if (wi->free) {
-            g_free(wi);
-        } else {
-            atomic_mb_set(&wi->done, true);
-        }
-    }
-    qemu_mutex_unlock(&cpu->work_mutex);
-    signal_cpu_work();
-}
-
 static void qemu_wait_io_event_common(CPUState *cpu)
 {
     if (cpu->stop) {
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index c1f59fa59d2c..23b4b50e0a45 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -407,4 +407,8 @@ extern int singlestep;
 extern CPUState *tcg_current_cpu;
 extern bool exit_request;
 
+void wait_cpu_work(void);
+void signal_cpu_work(void);
+void flush_queued_work(CPUState *cpu);
+
 #endif
diff --git a/linux-user/main.c b/linux-user/main.c
index 0093a8008c8e..5a68651159c2 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -111,6 +111,7 @@
 static pthread_mutex_t cpu_list_mutex = PTHREAD_MUTEX_INITIALIZER;
 static pthread_mutex_t exclusive_lock = PTHREAD_MUTEX_INITIALIZER;
 static pthread_cond_t exclusive_cond = PTHREAD_COND_INITIALIZER;
 static pthread_cond_t exclusive_resume = PTHREAD_COND_INITIALIZER;
+static pthread_cond_t work_cond = PTHREAD_COND_INITIALIZER;
 static bool exclusive_pending;
 static int tcg_pending_cpus;
@@ -140,6 +141,7 @@ void fork_end(int child)
         pthread_mutex_init(&cpu_list_mutex, NULL);
         pthread_cond_init(&exclusive_cond, NULL);
         pthread_cond_init(&exclusive_resume, NULL);
+        pthread_cond_init(&work_cond, NULL);
         qemu_mutex_init(&tcg_ctx.tb_ctx.tb_lock);
         gdbserver_fork(thread_cpu);
     } else {
@@ -148,6 +150,16 @@
     }
 }
 
+void wait_cpu_work(void)
+{
+    pthread_cond_wait(&work_cond, &exclusive_lock);
+}
+
+void signal_cpu_work(void)
+{
+    pthread_cond_broadcast(&work_cond);
+}
+
 /* Wait for pending exclusive operations to complete.  The exclusive lock
    must be held.  */
 static inline void exclusive_idle(void)
@@ -206,6 +218,7 @@ static inline void cpu_exec_end(CPUState *cpu)
         pthread_cond_broadcast(&exclusive_cond);
     }
     exclusive_idle();
+    flush_queued_work(cpu);
     pthread_mutex_unlock(&exclusive_lock);
 }