From patchwork Thu Mar 4 15:05:13 2010
X-Patchwork-Submitter: Marcelo Tosatti
X-Patchwork-Id: 46915
From: Marcelo Tosatti
To: Anthony Liguori
Cc: Jan Kiszka, Marcelo Tosatti, qemu-devel@nongnu.org, kvm@vger.kernel.org
Date: Thu, 4 Mar 2010 12:05:13 -0300
Subject: [Qemu-devel] [PATCH 3/6] KVM: Rework of guest debug state writing

From: Jan Kiszka

So far we synchronized any dirty VCPU state back into the kernel before
updating the guest debug state. This was a tribute to a deficiency in x86
kernels before 2.6.33. But as this is an arch-dependent issue, it is better
handled in the x86 part of KVM, so the writeback point can be removed from
the generic code. This also avoids overwriting the flushed state later on if
user space decides to change some more registers before resuming the guest.
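To make the intended ordering concrete, here is a rough sketch (not part of
the patch) of a gdbstub-style caller after this change. The helper names are
taken from the diff below; the concrete register edits (some_new_pc,
some_flag) are hypothetical placeholders:

    cpu_synchronize_state(env);      /* fetch regs, set env->kvm_vcpu_dirty */
    env->eip = some_new_pc;          /* hypothetical user-space edit        */
    kvm_update_guest_debug(env, 0);  /* now only issues KVM_SET_GUEST_DEBUG;
                                        it no longer flushes dirty state    */
    env->eflags |= some_flag;        /* further edits remain possible...    */
    /* ...because the single writeback happens later, in
       kvm_arch_put_registers(), right before the guest is resumed. */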
We furthermore need to reinject guest exceptions via the appropriate
mechanism: KVM_SET_GUEST_DEBUG for older kernels and KVM_SET_VCPU_EVENTS for
recent ones. Using both mechanisms at the same time will cause state
corruption.

Signed-off-by: Jan Kiszka
Signed-off-by: Marcelo Tosatti
---
 kvm-all.c         |   24 ++++++++++++++++--------
 kvm.h             |    1 +
 target-i386/kvm.c |   47 +++++++++++++++++++++++++++++++++++++++++++----
 3 files changed, 60 insertions(+), 12 deletions(-)

diff --git a/kvm-all.c b/kvm-all.c
index 1a02076..2f7e33a 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -65,6 +65,7 @@ struct KVMState
     int broken_set_mem_region;
     int migration_log;
     int vcpu_events;
+    int robust_singlestep;
 #ifdef KVM_CAP_SET_GUEST_DEBUG
     struct kvm_sw_breakpoint_head kvm_sw_breakpoints;
 #endif
@@ -659,6 +660,12 @@ int kvm_init(int smp_cpus)
     s->vcpu_events = kvm_check_extension(s, KVM_CAP_VCPU_EVENTS);
 #endif
 
+    s->robust_singlestep = 0;
+#ifdef KVM_CAP_X86_ROBUST_SINGLESTEP
+    s->robust_singlestep =
+        kvm_check_extension(s, KVM_CAP_X86_ROBUST_SINGLESTEP);
+#endif
+
     ret = kvm_arch_init(s, smp_cpus);
     if (ret < 0)
         goto err;
@@ -917,6 +924,11 @@ int kvm_has_vcpu_events(void)
     return kvm_state->vcpu_events;
 }
 
+int kvm_has_robust_singlestep(void)
+{
+    return kvm_state->robust_singlestep;
+}
+
 void kvm_setup_guest_memory(void *start, size_t size)
 {
     if (!kvm_has_sync_mmu()) {
@@ -974,10 +986,6 @@ static void kvm_invoke_set_guest_debug(void *data)
     struct kvm_set_guest_debug_data *dbg_data = data;
     CPUState *env = dbg_data->env;
 
-    if (env->kvm_vcpu_dirty) {
-        kvm_arch_put_registers(env);
-        env->kvm_vcpu_dirty = 0;
-    }
     dbg_data->err = kvm_vcpu_ioctl(env, KVM_SET_GUEST_DEBUG, &dbg_data->dbg);
 }
 
@@ -985,12 +993,12 @@ int kvm_update_guest_debug(CPUState *env, unsigned long reinject_trap)
 {
     struct kvm_set_guest_debug_data data;
 
-    data.dbg.control = 0;
-    if (env->singlestep_enabled)
-        data.dbg.control = KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_SINGLESTEP;
+    data.dbg.control = reinject_trap;
+    if (env->singlestep_enabled) {
+        data.dbg.control |= KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_SINGLESTEP;
+    }
     kvm_arch_update_guest_debug(env, &data.dbg);
-    data.dbg.control |= reinject_trap;
 
     data.env = env;
 
     on_vcpu(env, kvm_invoke_set_guest_debug, &data);
diff --git a/kvm.h b/kvm.h
index a74dfcb..a602e45 100644
--- a/kvm.h
+++ b/kvm.h
@@ -40,6 +40,7 @@ int kvm_log_stop(target_phys_addr_t phys_addr, ram_addr_t size);
 
 int kvm_has_sync_mmu(void);
 int kvm_has_vcpu_events(void);
+int kvm_has_robust_singlestep(void);
 
 void kvm_setup_guest_memory(void *start, size_t size);
diff --git a/target-i386/kvm.c b/target-i386/kvm.c
index d2116a7..e0247ea 100644
--- a/target-i386/kvm.c
+++ b/target-i386/kvm.c
@@ -852,6 +852,37 @@ static int kvm_get_vcpu_events(CPUState *env)
     return 0;
 }
 
+static int kvm_guest_debug_workarounds(CPUState *env)
+{
+    int ret = 0;
+#ifdef KVM_CAP_SET_GUEST_DEBUG
+    unsigned long reinject_trap = 0;
+
+    if (!kvm_has_vcpu_events()) {
+        if (env->exception_injected == 1) {
+            reinject_trap = KVM_GUESTDBG_INJECT_DB;
+        } else if (env->exception_injected == 3) {
+            reinject_trap = KVM_GUESTDBG_INJECT_BP;
+        }
+        env->exception_injected = -1;
+    }
+
+    /*
+     * Kernels before KVM_CAP_X86_ROBUST_SINGLESTEP overwrote flags.TF
+     * injected via SET_GUEST_DEBUG while updating GP regs. Work around this
+     * by updating the debug state once again if single-stepping is on.
+     * Another reason to call kvm_update_guest_debug here is a pending debug
+     * trap raised by the guest. On kernels without SET_VCPU_EVENTS we have
+     * to reinject them via SET_GUEST_DEBUG.
+     */
+    if (reinject_trap ||
+        (!kvm_has_robust_singlestep() && env->singlestep_enabled)) {
+        ret = kvm_update_guest_debug(env, reinject_trap);
+    }
+#endif /* KVM_CAP_SET_GUEST_DEBUG */
+    return ret;
+}
+
 int kvm_arch_put_registers(CPUState *env)
 {
     int ret;
@@ -880,6 +911,11 @@ int kvm_arch_put_registers(CPUState *env)
     if (ret < 0)
         return ret;
 
+    /* must be last */
+    ret = kvm_guest_debug_workarounds(env);
+    if (ret < 0)
+        return ret;
+
     return 0;
 }
 
@@ -1123,10 +1159,13 @@ int kvm_arch_debug(struct kvm_debug_exit_arch *arch_info)
     } else if (kvm_find_sw_breakpoint(cpu_single_env, arch_info->pc))
         handle = 1;
 
-    if (!handle)
-        kvm_update_guest_debug(cpu_single_env,
-                               (arch_info->exception == 1) ?
-                               KVM_GUESTDBG_INJECT_DB : KVM_GUESTDBG_INJECT_BP);
+    if (!handle) {
+        cpu_synchronize_state(cpu_single_env);
+        assert(cpu_single_env->exception_injected == -1);
+
+        cpu_single_env->exception_injected = arch_info->exception;
+        cpu_single_env->has_error_code = 0;
+    }
 
     return handle;
 }
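
As a closing illustration (not part of the patch), this is how a #DB/#BP that
the guest raised, but that the gdbstub does not handle, now reaches the kernel
again. The snippet paraphrases kvm_arch_debug() and
kvm_guest_debug_workarounds() from the diff above; kvm_put_vcpu_events() is
assumed to be the existing writer counterpart of the kvm_get_vcpu_events()
shown in the context lines:

    /* kvm_arch_debug() no longer calls kvm_update_guest_debug() itself;
     * it only records the pending exception in the CPU state: */
    cpu_single_env->exception_injected = arch_info->exception; /* 1=#DB, 3=#BP */
    cpu_single_env->has_error_code = 0;

    /* The exception is then delivered exactly once, on the next register
     * writeback in kvm_arch_put_registers(): */
    if (kvm_has_vcpu_events()) {
        /* recent kernels: kvm_put_vcpu_events() forwards it via
         * KVM_SET_VCPU_EVENTS together with the rest of the event state */
    } else {
        /* older kernels: kvm_guest_debug_workarounds() maps it to
         * KVM_GUESTDBG_INJECT_DB / KVM_GUESTDBG_INJECT_BP and issues
         * KVM_SET_GUEST_DEBUG; mixing the two paths for the same exception
         * is what the commit message warns would corrupt state */
    }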