
[Lucid,CVE-2014-3610] KVM: x86: Check non-canonical addresses upon WRMSR

Message ID 1417515403-15454-1-git-send-email-luis.henriques@canonical.com
State New

Commit Message

Luis Henriques Dec. 2, 2014, 10:16 a.m. UTC
From: Nadav Amit <namit@cs.technion.ac.il>

Upon WRMSR, the CPU should inject #GP if a non-canonical value (address) is
written to certain MSRs. The behavior is "almost" identical for AMD and Intel
(ignoring MSRs that are not implemented in either architecture, since they would
anyhow #GP). However, IA32_SYSENTER_ESP and IA32_SYSENTER_EIP cause #GP if a
non-canonical address is written on Intel, but not on AMD (which ignores the top
32 bits).
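
As a rough illustration of the architectural behavior being emulated, below is a
minimal, hypothetical guest-ring-0 sketch (not part of this patch; the wrmsr64()
helper is made up for the example) that writes a non-canonical value with WRMSR.
On Intel hardware this raises #GP, and the host must inject the same #GP into
the guest rather than let the value reach the real MSR at VM-entry time:

#include <stdint.h>

#define MSR_LSTAR 0xc0000082	/* IA32_LSTAR, the 64-bit SYSCALL target RIP */

/*
 * Plain WRMSR wrapper: ECX selects the MSR, EDX:EAX holds the 64-bit value.
 * Must run at CPL0, e.g. from a guest kernel module.
 */
static inline void wrmsr64(uint32_t msr, uint64_t val)
{
	asm volatile("wrmsr"
		     : /* no outputs */
		     : "c"(msr), "a"((uint32_t)val), "d"((uint32_t)(val >> 32))
		     : "memory");
}

static void write_noncanonical_lstar(void)
{
	/* Bit 47 set but bits 63:48 clear: non-canonical with 48-bit VAs */
	wrmsr64(MSR_LSTAR, 0x0000800000000000ULL);	/* expect #GP */
}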

Accordingly, this patch injects a #GP on the MSRs which behave identically on
Intel and AMD.  To eliminate the differences between the architectures, the
value written to IA32_SYSENTER_ESP and IA32_SYSENTER_EIP is turned into a
canonical value before writing, instead of injecting a #GP.
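
The canonicalisation mentioned above is a sign extension of bit 47 into bits
63:48 (the get_canonical() helper added below). A small stand-alone sketch of
the same arithmetic, written in a strictly well-defined way and assuming a
48-bit virtual address width, shows what actually lands in the MSR:

#include <stdint.h>
#include <stdio.h>

/* Behaves like the patch's get_canonical(): sign-extend bit 47 upward. */
static uint64_t canonicalize(uint64_t la)
{
	return (la & (1ULL << 47)) ? (la | 0xffff000000000000ULL)
				   : (la & 0x0000ffffffffffffULL);
}

int main(void)
{
	/* Non-canonical input becomes canonical ... */
	printf("%#llx -> %#llx\n", 0x0000800000000000ULL,
	       (unsigned long long)canonicalize(0x0000800000000000ULL));
	/* prints: 0x800000000000 -> 0xffff800000000000 */

	/* ... while already-canonical values pass through unchanged. */
	printf("%#llx -> %#llx\n", 0xffffffff81000000ULL,
	       (unsigned long long)canonicalize(0xffffffff81000000ULL));
	return 0;
}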

Some references from Intel and AMD manuals:

According to the Intel SDM description of the WRMSR instruction, #GP is expected
on WRMSR "If the source register contains a non-canonical address and ECX
specifies one of the following MSRs: IA32_DS_AREA, IA32_FS_BASE, IA32_GS_BASE,
IA32_KERNEL_GS_BASE, IA32_LSTAR, IA32_SYSENTER_EIP, IA32_SYSENTER_ESP."

According to the AMD instruction manual:
LSTAR/CSTAR (SYSCALL): "The WRMSR instruction loads the target RIP into the
LSTAR and CSTAR registers.  If an RIP written by WRMSR is not in canonical
form, a general-protection exception (#GP) occurs."
IA32_GS_BASE and IA32_FS_BASE (WRFSBASE/WRGSBASE): "The address written to the
base field must be in canonical form or a #GP fault will occur."
IA32_KERNEL_GS_BASE (SWAPGS): "The address stored in the KernelGSbase MSR must
be in canonical form."
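
For reference when reading the diff below, the architectural MSR names used in
the manual quotes correspond to the following Linux constants (values as in
arch/x86/include/asm/msr-index.h):

#define MSR_IA32_SYSENTER_ESP	0x00000175	/* IA32_SYSENTER_ESP */
#define MSR_IA32_SYSENTER_EIP	0x00000176	/* IA32_SYSENTER_EIP */
#define MSR_LSTAR		0xc0000082	/* IA32_LSTAR */
#define MSR_CSTAR		0xc0000083	/* IA32_CSTAR */
#define MSR_FS_BASE		0xc0000100	/* IA32_FS_BASE */
#define MSR_GS_BASE		0xc0000101	/* IA32_GS_BASE */
#define MSR_KERNEL_GS_BASE	0xc0000102	/* IA32_KERNEL_GS_BASE */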

This patch fixes CVE-2014-3610.

Cc: stable@vger.kernel.org
Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(backported from commit 854e8bb1aa06c578c2c9145fa6bfe3680ef63b23)
[ luis: based on upstream backport to 3.2 kernel ]
CVE-2014-3610
BugLink: http://bugs.launchpad.net/bugs/1384539
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
---
 arch/x86/include/asm/kvm_host.h | 14 ++++++++++++++
 arch/x86/kvm/svm.c              |  2 +-
 arch/x86/kvm/vmx.c              |  2 +-
 arch/x86/kvm/x86.c              | 27 ++++++++++++++++++++++++++-
 4 files changed, 42 insertions(+), 3 deletions(-)

Comments

Seth Forshee Dec. 2, 2014, 3:02 p.m. UTC | #1

Brad Figg Dec. 2, 2014, 4:55 p.m. UTC | #2
Brad Figg Dec. 2, 2014, 5:20 p.m. UTC | #3

Patch

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 08bc2ff8dcc8..38cc47489f4a 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -741,6 +741,20 @@  static inline void kvm_inject_gp(struct kvm_vcpu *vcpu, u32 error_code)
 	kvm_queue_exception_e(vcpu, GP_VECTOR, error_code);
 }
 
+static inline u64 get_canonical(u64 la)
+{
+	return ((int64_t)la << 16) >> 16;
+}
+
+static inline bool is_noncanonical_address(u64 la)
+{
+#ifdef CONFIG_X86_64
+	return get_canonical(la) != la;
+#else
+	return false;
+#endif
+}
+
 #define TSS_IOPB_BASE_OFFSET 0x66
 #define TSS_BASE_SIZE 0x68
 #define TSS_IOPB_SIZE (65536 / 8)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index b1539ca50528..fad6e3af97aa 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -2325,7 +2325,7 @@  static int wrmsr_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
 	trace_kvm_msr_write(ecx, data);
 
 	svm->next_rip = kvm_rip_read(&svm->vcpu) + 2;
-	if (svm_set_msr(&svm->vcpu, ecx, data))
+	if (kvm_set_msr(&svm->vcpu, ecx, data))
 		kvm_inject_gp(&svm->vcpu, 0);
 	else
 		skip_emulated_instruction(&svm->vcpu);
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 0f9cc4de6e97..8163478eb192 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3121,7 +3121,7 @@  static int handle_wrmsr(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
 
 	trace_kvm_msr_write(ecx, data);
 
-	if (vmx_set_msr(vcpu, ecx, data) != 0) {
+	if (kvm_set_msr(vcpu, ecx, data) != 0) {
 		kvm_inject_gp(vcpu, 0);
 		return 1;
 	}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 4043ce90b775..0f0323365515 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -528,7 +528,6 @@  void kvm_enable_efer_bits(u64 mask)
 }
 EXPORT_SYMBOL_GPL(kvm_enable_efer_bits);
 
-
 /*
  * Writes msr value into into the appropriate "register".
  * Returns 0 on success, non-0 otherwise.
@@ -536,8 +535,34 @@  EXPORT_SYMBOL_GPL(kvm_enable_efer_bits);
  */
 int kvm_set_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
 {
+	switch (msr_index) {
+	case MSR_FS_BASE:
+	case MSR_GS_BASE:
+	case MSR_KERNEL_GS_BASE:
+	case MSR_CSTAR:
+	case MSR_LSTAR:
+		if (is_noncanonical_address(data))
+			return 1;
+		break;
+	case MSR_IA32_SYSENTER_EIP:
+	case MSR_IA32_SYSENTER_ESP:
+		/*
+		 * IA32_SYSENTER_ESP and IA32_SYSENTER_EIP cause #GP if
+		 * non-canonical address is written on Intel but not on
+		 * AMD (which ignores the top 32-bits, because it does
+		 * not implement 64-bit SYSENTER).
+		 *
+		 * 64-bit code should hence be able to write a non-canonical
+		 * value on AMD.  Making the address canonical ensures that
+		 * vmentry does not fail on Intel after writing a non-canonical
+		 * value, and that something deterministic happens if the guest
+		 * invokes 64-bit SYSENTER.
+		 */
+		data = get_canonical(data);
+	}
 	return kvm_x86_ops->set_msr(vcpu, msr_index, data);
 }
+EXPORT_SYMBOL_GPL(kvm_set_msr);
 
 /*
  * Adapt set_msr() to msr_io()'s calling convention