From patchwork Fri Feb 25 19:33:56 2011
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 84567
Message-ID: <4D680424.8050305@redhat.com>
Date: Fri, 25 Feb 2011 20:33:56 +0100
From: Paolo Bonzini
To: Jan Kiszka
In-Reply-To: <4D64F208.3070506@siemens.com>
Cc: "Edgar E. Iglesias", qemu-devel@nongnu.org
Subject: [Qemu-devel] Re: [PATCH 0/4] Improve -icount, fix it with iothread
Iglesias" , qemu-devel@nongnu.org Subject: [Qemu-devel] Re: [PATCH 0/4] Improve -icount, fix it with iothread X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: qemu-devel.nongnu.org List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org Errors-To: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org On 02/23/2011 12:39 PM, Jan Kiszka wrote: > You should try to trace the event flow in qemu, either via strace, via > the built-in tracer (which likely requires a bit more tracepoints), or > via a system-level tracer (ftrace / kernelshark). The apparent problem is that 25% of cycles is spent in mutex locking and unlocking. But in fact, the real problem is that 90% of the time is spent doing something else than executing code. QEMU exits _a lot_ due to the vm_clock timers. The deadlines are rarely more than a few ms ahead, and at 1 MIPS that leaves room for executing a few thousand instructions for each context switch. The iothread overhead is what makes the situation so bad, because it takes a lot more time to execute those instructions. We do so many (almost) useless passes through cpu_exec_all that even microoptimization helps, for example this: is enough to cut all_cpu_threads_idle from 9 to 4.5% (not unexpected: the number of calls is halved). But it shouldn't be that high anyway, so I'm not proposing the patch formally. Additionally, the fact that the execution is 99.99% lockstep means you cannot really overlap any part of the I/O and VCPU threads. I found a couple of inaccuracies in my patches that already cut 50% of the time, though. > Did my patches contribute a bit to overhead reduction? They specifically > target the costly vcpu/iothread switches in TCG mode (caused by TCGs > excessive lock-holding times). Yes, they cut 15%. Paolo --- a/cpus.c +++ b/cpus.c @@ -767,10 +767,6 @@ static void qemu_wait_io_event_common(CPUState *env) { CPUState *env; - while (all_cpu_threads_idle()) { - qemu_cond_timedwait(tcg_halt_cond, &qemu_global_mutex, 1000); - } - qemu_mutex_unlock(&qemu_global_mutex); /* @@ -1110,7 +1111,15 @@ bool cpu_exec_all(void) } } exit_request = 0; + +#ifdef CONFIG_IOTHREAD + while (all_cpu_threads_idle()) { + qemu_cond_timedwait(tcg_halt_cond, &qemu_global_mutex, 1000); + } + return true; +#else return !all_cpu_threads_idle(); +#endif } void set_numa_modes(void)