From patchwork Mon Mar 18 15:11:31 2019
X-Patchwork-Submitter: "Yang, Weijiang" <weijiang.yang@intel.com>
X-Patchwork-Id: 1058265
From: Yang Weijiang <weijiang.yang@intel.com>
To: pbonzini@redhat.com, qemu-devel@nongnu.org, mst@redhat.com,
 cdupontd@redhat.com, rkrcmar@redhat.com, sean.j.christopherson@intel.com
Cc: Yang Weijiang <weijiang.yang@intel.com>
Date: Mon, 18 Mar 2019 23:11:31 +0800
Message-Id: <20190318151131.15649-6-weijiang.yang@intel.com>
In-Reply-To: <20190318151131.15649-1-weijiang.yang@intel.com>
References: <20190318151131.15649-1-weijiang.yang@intel.com>
Subject: [Qemu-devel] [RFC PATCH v4 5/5] Add CET MSR save/restore support for migration

To support features such as live migration, the CET runtime MSRs need to
be saved on the source machine and restored on the destination machine.
This patch saves and restores the CET_U, CET_S,
PL0_SSP/PL1_SSP/PL2_SSP/PL3_SSP and SSP_TABL_ADDR MSRs.

Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
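For context on the capability gate used in the kvm.c changes below: HAS_CET_CAP()
tests two raw masks against the CPUID.(EAX=7,ECX=0) feature words. The following is
a minimal sketch of what those masks encode; the constant names
CPUID_7_0_ECX_CET_SHSTK and CPUID_7_0_EDX_CET_IBT are illustrative assumptions,
not identifiers from this patch.

/*
 * Illustrative sketch only: 0x80 is bit 7 of CPUID.(EAX=7,ECX=0):ECX
 * (CET shadow stack) and 0x100000 is bit 20 of CPUID.(EAX=7,ECX=0):EDX
 * (CET indirect branch tracking). Constant names are assumed.
 */
#define CPUID_7_0_ECX_CET_SHSTK  (1U << 7)    /* 0x80 */
#define CPUID_7_0_EDX_CET_IBT    (1U << 20)   /* 0x100000 */

static bool cpu_has_cet(CPUX86State *env)
{
    return (env->features[FEAT_7_0_ECX] & CPUID_7_0_ECX_CET_SHSTK) ||
           (env->features[FEAT_7_0_EDX] & CPUID_7_0_EDX_CET_IBT);
}

This is equivalent in effect to the HAS_CET_CAP() macro added in
target/i386/kvm.c; the named constants only make the shadow-stack and IBT bits
explicit.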
 target/i386/cpu.h     |  16 +++++
 target/i386/kvm.c     |  53 ++++++++++++++++
 target/i386/machine.c | 141 ++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 210 insertions(+)

diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index 7a181cb95f..e12f33e829 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -460,6 +460,14 @@ typedef enum X86Seg {
 #define MSR_IA32_BNDCFGS                0x00000d90
 #define MSR_IA32_XSS                    0x00000da0
 
+#define MSR_IA32_U_CET                  0x6a0
+#define MSR_IA32_S_CET                  0x6a2
+#define MSR_IA32_PL0_SSP                0x6a4
+#define MSR_IA32_PL1_SSP                0x6a5
+#define MSR_IA32_PL2_SSP                0x6a6
+#define MSR_IA32_PL3_SSP                0x6a7
+#define MSR_IA32_INTR_SSP_TBL           0x6a8
+
 #define XSTATE_FP_BIT                   0
 #define XSTATE_SSE_BIT                  1
 #define XSTATE_YMM_BIT                  2
@@ -1322,6 +1330,14 @@ typedef struct CPUX86State {
 
     uintptr_t retaddr;
 
+    uint64_t u_cet;
+    uint64_t s_cet;
+    uint64_t pl0_ssp;
+    uint64_t pl1_ssp;
+    uint64_t pl2_ssp;
+    uint64_t pl3_ssp;
+    uint64_t ssp_tabl_addr;
+
     /* Fields up to this point are cleared by a CPU reset */
     struct {} end_reset_fields;
 
diff --git a/target/i386/kvm.c b/target/i386/kvm.c
index f524e7d929..597cec0aaa 100644
--- a/target/i386/kvm.c
+++ b/target/i386/kvm.c
@@ -63,6 +63,8 @@
 /* A 4096-byte buffer can hold the 8-byte kvm_msrs header, plus
  * 255 kvm_msr_entry structs */
 #define MSR_BUF_SIZE 4096
+#define HAS_CET_CAP(env)  (env->features[FEAT_7_0_ECX] & 0x80 || \
+                           env->features[FEAT_7_0_EDX] & 0x100000)
 
 const KVMCapabilityInfo kvm_arch_required_capabilities[] = {
     KVM_CAP_INFO(SET_TSS_ADDR),
@@ -2197,6 +2199,21 @@ static int kvm_put_msrs(X86CPU *cpu, int level)
         }
     }
 
+    if (HAS_CET_CAP(env)) {
+        /*
+         * DO NOT change below register sequence, the first 3 are
+         * written to guest fpu xsave area as a whole, the rest 2
+         * are written to vmcs guest fields.
+         */
+        kvm_msr_entry_add(cpu, MSR_IA32_U_CET, env->u_cet);
+        kvm_msr_entry_add(cpu, MSR_IA32_PL0_SSP, env->pl0_ssp);
+        kvm_msr_entry_add(cpu, MSR_IA32_PL1_SSP, env->pl1_ssp);
+        kvm_msr_entry_add(cpu, MSR_IA32_PL2_SSP, env->pl2_ssp);
+        kvm_msr_entry_add(cpu, MSR_IA32_PL3_SSP, env->pl3_ssp);
+        kvm_msr_entry_add(cpu, MSR_IA32_S_CET, env->s_cet);
+        kvm_msr_entry_add(cpu, MSR_IA32_INTR_SSP_TBL, env->ssp_tabl_addr);
+    }
+
     ret = kvm_vcpu_ioctl(CPU(cpu), KVM_SET_MSRS, cpu->kvm_msr_buf);
     if (ret < 0) {
         return ret;
@@ -2516,6 +2533,21 @@ static int kvm_get_msrs(X86CPU *cpu)
         }
     }
 
+    if (HAS_CET_CAP(env)) {
+        /*
+         * DO NOT change below register sequence, the first 3 are
+         * read from guest fpu xsave area as a whole, the rest 2
+         * are read from vmcs guest fields.
+         */
+        kvm_msr_entry_add(cpu, MSR_IA32_U_CET, 0);
+        kvm_msr_entry_add(cpu, MSR_IA32_PL0_SSP, 0);
+        kvm_msr_entry_add(cpu, MSR_IA32_PL1_SSP, 0);
+        kvm_msr_entry_add(cpu, MSR_IA32_PL2_SSP, 0);
+        kvm_msr_entry_add(cpu, MSR_IA32_PL3_SSP, 0);
+        kvm_msr_entry_add(cpu, MSR_IA32_S_CET, 0);
+        kvm_msr_entry_add(cpu, MSR_IA32_INTR_SSP_TBL, 0);
+    }
+
     ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_MSRS, cpu->kvm_msr_buf);
     if (ret < 0) {
         return ret;
@@ -2789,6 +2821,27 @@ static int kvm_get_msrs(X86CPU *cpu)
        case MSR_IA32_RTIT_ADDR0_A ... MSR_IA32_RTIT_ADDR3_B:
            env->msr_rtit_addrs[index - MSR_IA32_RTIT_ADDR0_A] = msrs[i].data;
            break;
+        case MSR_IA32_U_CET:
+            env->u_cet = msrs[i].data;
+            break;
+        case MSR_IA32_S_CET:
+            env->s_cet = msrs[i].data;
+            break;
+        case MSR_IA32_PL0_SSP:
+            env->pl0_ssp = msrs[i].data;
+            break;
+        case MSR_IA32_PL1_SSP:
+            env->pl1_ssp = msrs[i].data;
+            break;
+        case MSR_IA32_PL2_SSP:
+            env->pl2_ssp = msrs[i].data;
+            break;
+        case MSR_IA32_PL3_SSP:
+            env->pl3_ssp = msrs[i].data;
+            break;
+        case MSR_IA32_INTR_SSP_TBL:
+            env->ssp_tabl_addr = msrs[i].data;
+            break;
         }
     }
 
diff --git a/target/i386/machine.c b/target/i386/machine.c
index 225b5d433b..5f933dedfa 100644
--- a/target/i386/machine.c
+++ b/target/i386/machine.c
@@ -810,6 +810,140 @@ static const VMStateDescription vmstate_xss = {
     }
 };
 
+static bool u_cet_needed(void *opaque)
+{
+    X86CPU *cpu = opaque;
+    CPUX86State *env = &cpu->env;
+
+    return env->u_cet != 0;
+}
+
+static const VMStateDescription vmstate_u_cet = {
+    .name = "cpu/u_cet",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = u_cet_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT64(env.u_cet, X86CPU),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static bool s_cet_needed(void *opaque)
+{
+    X86CPU *cpu = opaque;
+    CPUX86State *env = &cpu->env;
+
+    return env->s_cet != 0;
+}
+
+static const VMStateDescription vmstate_s_cet = {
+    .name = "cpu/s_cet",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = s_cet_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT64(env.s_cet, X86CPU),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static bool pl0_ssp_needed(void *opaque)
+{
+    X86CPU *cpu = opaque;
+    CPUX86State *env = &cpu->env;
+
+    return env->pl0_ssp != 0;
+}
+
+static const VMStateDescription vmstate_pl0_ssp = {
+    .name = "cpu/pl0_ssp",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = pl0_ssp_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT64(env.pl0_ssp, X86CPU),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static bool pl1_ssp_needed(void *opaque)
+{
+    X86CPU *cpu = opaque;
+    CPUX86State *env = &cpu->env;
+
+    return env->pl1_ssp != 0;
+}
+
+static const VMStateDescription vmstate_pl1_ssp = {
+    .name = "cpu/pl1_ssp",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = pl1_ssp_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT64(env.pl1_ssp, X86CPU),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static bool pl2_ssp_needed(void *opaque)
+{
+    X86CPU *cpu = opaque;
+    CPUX86State *env = &cpu->env;
+
+    return env->pl2_ssp != 0;
+}
+
+static const VMStateDescription vmstate_pl2_ssp = {
+    .name = "cpu/pl2_ssp",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = pl2_ssp_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT64(env.pl2_ssp, X86CPU),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+
+static bool pl3_ssp_needed(void *opaque)
+{
+    X86CPU *cpu = opaque;
+    CPUX86State *env = &cpu->env;
+
+    return env->pl3_ssp != 0;
+}
+
+static const VMStateDescription vmstate_pl3_ssp = {
+    .name = "cpu/pl3_ssp",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = pl3_ssp_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT64(env.pl3_ssp, X86CPU),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static bool ssp_tabl_addr_needed(void *opaque)
+{
+    X86CPU *cpu = opaque;
+    CPUX86State *env = &cpu->env;
+
+    return env->ssp_tabl_addr != 0;
+}
+
+static const VMStateDescription vmstate_ssp_tabl_addr = {
+    .name = "cpu/ssp_tabl_addr",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = ssp_tabl_addr_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT64(env.ssp_tabl_addr, X86CPU),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
 #ifdef TARGET_X86_64
 static bool pkru_needed(void *opaque)
 {
@@ -1089,6 +1223,13 @@ VMStateDescription vmstate_x86_cpu = {
         &vmstate_msr_intel_pt,
         &vmstate_msr_virt_ssbd,
         &vmstate_svm_npt,
+        &vmstate_u_cet,
+        &vmstate_s_cet,
+        &vmstate_pl0_ssp,
+        &vmstate_pl1_ssp,
+        &vmstate_pl2_ssp,
+        &vmstate_pl3_ssp,
+        &vmstate_ssp_tabl_addr,
         NULL
     }
 };
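The seven machine.c subsections above all follow the same optional-subsection
shape: the .needed predicate returns false while the corresponding MSR is still
zero, so a guest that never enables CET produces the same migration stream as
before this patch. As a condensed reading aid, here is that shape for a single
hypothetical field; some_msr and vmstate_some_msr are placeholder names, not
identifiers from the patch.

static bool some_msr_needed(void *opaque)
{
    X86CPU *cpu = opaque;

    /* Subsection is omitted from the stream while the MSR value is 0. */
    return cpu->env.some_msr != 0;
}

static const VMStateDescription vmstate_some_msr = {
    .name = "cpu/some_msr",                    /* section name on the wire */
    .version_id = 1,
    .minimum_version_id = 1,
    .needed = some_msr_needed,                 /* decides whether to send it */
    .fields = (VMStateField[]) {
        VMSTATE_UINT64(env.some_msr, X86CPU),  /* one 64-bit MSR value */
        VMSTATE_END_OF_LIST()
    }
};

Each such description must also be listed in the .subsections array of
vmstate_x86_cpu, as the final hunk above does for the seven CET entries.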