From patchwork Fri Apr 26 18:19:31 2013
X-Patchwork-Submitter: Alexander Graf
X-Patchwork-Id: 240035
From: Alexander Graf
To: qemu-ppc@nongnu.org
Cc: Blue Swirl, qemu-devel@nongnu.org, Aurelien Jarno, "Jason J. Herne"
Date: Fri, 26 Apr 2013 20:19:31 +0200
Message-Id: <1367000373-7972-23-git-send-email-agraf@suse.de>
In-Reply-To: <1367000373-7972-1-git-send-email-agraf@suse.de>
References: <1367000373-7972-1-git-send-email-agraf@suse.de>
Subject: [Qemu-devel] [PATCH 22/24] Allow selective runtime register synchronization

From: Jason J. Herne

We want to avoid expensive register synchronization ioctls on the hot path,
so a new kvm_s390_get_registers_partial() is introduced as a complement to
kvm_arch_get_registers(). The new function is called on the hot path;
kvm_arch_get_registers() is called when we need the complete runtime
register state.

kvm_arch_put_registers() is updated to sync only the partial runtime set
when we have dirtied only the partial runtime set. This avoids sending bad
data back to KVM when the runtime register set has only been partially
synced.
Signed-off-by: Jason J. Herne
Reviewed-by: Christian Borntraeger
Signed-off-by: Alexander Graf
---
 target-s390x/cpu.h |   17 +++++++++++++
 target-s390x/kvm.c |   67 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 84 insertions(+), 0 deletions(-)

diff --git a/target-s390x/cpu.h b/target-s390x/cpu.h
index e351005..0ce82cf 100644
--- a/target-s390x/cpu.h
+++ b/target-s390x/cpu.h
@@ -78,6 +78,11 @@ typedef struct MchkQueue {
     uint16_t type;
 } MchkQueue;
 
+/* Defined values for CPUS390XState.runtime_reg_dirty_mask */
+#define KVM_S390_RUNTIME_DIRTY_NONE     0
+#define KVM_S390_RUNTIME_DIRTY_PARTIAL  1
+#define KVM_S390_RUNTIME_DIRTY_FULL     2
+
 typedef struct CPUS390XState {
     uint64_t regs[16];     /* GP registers */
     CPU_DoubleU fregs[16]; /* FP registers */
@@ -121,6 +126,13 @@ typedef struct CPUS390XState {
     uint64_t cputm;
     uint32_t todpr;
 
+    /* on S390 the runtime register set has two dirty states:
+     * a partial dirty state in which only the registers that
+     * are needed all the time are fetched. And a fully dirty
+     * state in which all runtime registers are fetched.
+     */
+    uint32_t runtime_reg_dirty_mask;
+
     CPU_COMMON
 
     /* reset does memset(0) up to here */
@@ -1068,6 +1080,7 @@ void kvm_s390_io_interrupt(S390CPU *cpu, uint16_t subchannel_id,
                            uint32_t io_int_word);
 void kvm_s390_crw_mchk(S390CPU *cpu);
 void kvm_s390_enable_css_support(S390CPU *cpu);
+int kvm_s390_get_registers_partial(CPUState *cpu);
 #else
 static inline void kvm_s390_io_interrupt(S390CPU *cpu,
                                          uint16_t subchannel_id,
@@ -1082,6 +1095,10 @@ static inline void kvm_s390_crw_mchk(S390CPU *cpu)
 static inline void kvm_s390_enable_css_support(S390CPU *cpu)
 {
 }
+static inline int kvm_s390_get_registers_partial(CPUState *cpu)
+{
+    return -ENOSYS;
+}
 #endif
 
 static inline void s390_io_interrupt(S390CPU *cpu,
diff --git a/target-s390x/kvm.c b/target-s390x/kvm.c
index 644f484..02b2e39 100644
--- a/target-s390x/kvm.c
+++ b/target-s390x/kvm.c
@@ -123,6 +123,7 @@ int kvm_arch_put_registers(CPUState *cs, int level)
 {
     S390CPU *cpu = S390_CPU(cs);
     CPUS390XState *env = &cpu->env;
+    struct kvm_one_reg reg;
     struct kvm_sregs sregs;
     struct kvm_regs regs;
     int ret;
@@ -147,6 +148,30 @@ int kvm_arch_put_registers(CPUState *cs, int level)
         }
     }
+    if (env->runtime_reg_dirty_mask == KVM_S390_RUNTIME_DIRTY_FULL) {
+        reg.id = KVM_REG_S390_CPU_TIMER;
+        reg.addr = (__u64)&(env->cputm);
+        ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
+        if (ret < 0) {
+            return ret;
+        }
+
+        reg.id = KVM_REG_S390_CLOCK_COMP;
+        reg.addr = (__u64)&(env->ckc);
+        ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
+        if (ret < 0) {
+            return ret;
+        }
+
+        reg.id = KVM_REG_S390_TODPR;
+        reg.addr = (__u64)&(env->todpr);
+        ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
+        if (ret < 0) {
+            return ret;
+        }
+    }
+    env->runtime_reg_dirty_mask = KVM_S390_RUNTIME_DIRTY_NONE;
+
     /* Do we need to save more than that? */
     if (level == KVM_PUT_RUNTIME_STATE) {
         return 0;
     }
@@ -186,11 +211,52 @@ int kvm_arch_get_registers(CPUState *cs)
 {
     S390CPU *cpu = S390_CPU(cs);
     CPUS390XState *env = &cpu->env;
+    struct kvm_one_reg reg;
+    int r;
+
+    r = kvm_s390_get_registers_partial(cs);
+    if (r < 0) {
+        return r;
+    }
+
+    reg.id = KVM_REG_S390_CPU_TIMER;
+    reg.addr = (__u64)&(env->cputm);
+    r = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
+    if (r < 0) {
+        return r;
+    }
+
+    reg.id = KVM_REG_S390_CLOCK_COMP;
+    reg.addr = (__u64)&(env->ckc);
+    r = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
+    if (r < 0) {
+        return r;
+    }
+
+    reg.id = KVM_REG_S390_TODPR;
+    reg.addr = (__u64)&(env->todpr);
+    r = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
+    if (r < 0) {
+        return r;
+    }
+
+    env->runtime_reg_dirty_mask = KVM_S390_RUNTIME_DIRTY_FULL;
+    return 0;
+}
+
+int kvm_s390_get_registers_partial(CPUState *cs)
+{
+    S390CPU *cpu = S390_CPU(cs);
+    CPUS390XState *env = &cpu->env;
     struct kvm_sregs sregs;
     struct kvm_regs regs;
     int ret;
     int i;
 
+    if (env->runtime_reg_dirty_mask) {
+        return 0;
+    }
+
     /* get the PSW */
     env->psw.addr = cs->kvm_run->psw_addr;
     env->psw.mask = cs->kvm_run->psw_mask;
@@ -236,6 +302,7 @@ int kvm_arch_get_registers(CPUState *cs)
         /* no prefix without sync regs */
     }
 
+    env->runtime_reg_dirty_mask = KVM_S390_RUNTIME_DIRTY_PARTIAL;
     return 0;
 }
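
As a usage illustration (not part of this patch): a minimal sketch of how a
hot-path exit handler in target-s390x/kvm.c could consume the new helper.
The handler name handle_hot_path_exit() is hypothetical, and the surrounding
file's includes and types (S390CPU, CPUState, CPU()) are assumed.

/* Illustration only -- a hypothetical hot-path exit handler.  It needs just
 * the cheap runtime subset, so it calls kvm_s390_get_registers_partial()
 * instead of a full kvm_arch_get_registers() synchronization. */
static int handle_hot_path_exit(S390CPU *cpu)
{
    CPUState *cs = CPU(cpu);
    int r;

    /* Fetches PSW, GPRs, access/control registers and the prefix (via the
     * kvm_run sync regs where available) and marks the runtime set
     * KVM_S390_RUNTIME_DIRTY_PARTIAL, so a later
     * kvm_arch_put_registers(cs, KVM_PUT_RUNTIME_STATE) will not push
     * stale cputm/ckc/todpr values back into the kernel. */
    r = kvm_s390_get_registers_partial(cs);
    if (r < 0) {
        return r;
    }

    /* cpu->env.regs[] and cpu->env.psw are now valid and can be consumed
     * by the handler; nothing else is synchronized on this path. */
    return 0;
}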