From patchwork Thu Apr 26 00:14:17 2012
X-Patchwork-Submitter: Brad Figg
X-Patchwork-Id: 155141
From: Brad Figg
To: kernel-team@lists.ubuntu.com
Subject: [RESEND] [PATCH 1/1] [CVE-2012-1601] [HARDY] KVM: Ensure all vcpus are consistent with in-kernel irqchip settings
Date: Wed, 25 Apr 2012 17:14:17 -0700
Message-Id: <1335399257-927-2-git-send-email-brad.figg@canonical.com>
In-Reply-To: <1335399257-927-1-git-send-email-brad.figg@canonical.com>
References: <1335399257-927-1-git-send-email-brad.figg@canonical.com>
X-Mailer: git-send-email 1.7.9.5

From: Avi Kivity

CVE-2012-1601

BugLink: http://bugs.launchpad.net/bugs/971685

If some vcpus are created before KVM_CREATE_IRQCHIP, then irqchip_in_kernel()
and vcpu->arch.apic will be inconsistent, leading to potential NULL pointer
dereferences.

Fix by:
- ensuring that no vcpus are installed when KVM_CREATE_IRQCHIP is called
- ensuring that a vcpu has an apic if it is installed after KVM_CREATE_IRQCHIP

This is somewhat long-winded because vcpu->arch.apic is created without
kvm->lock held.

Based on an earlier patch by Michael Ellerman.
Signed-off-by: Michael Ellerman
Signed-off-by: Avi Kivity
(backported from commit 3e515705a1f46beb1c942bb8043c16f8ac7b1e9e upstream)
Signed-off-by: Brad Figg
---
 arch/x86/kvm/x86.c                                  |  9 +++++++++
 .../binary-custom.d/openvz/src/arch/x86/kvm/x86.c   |  9 +++++++++
 .../openvz/src/include/linux/kvm_host.h             |  2 ++
 .../binary-custom.d/openvz/src/virt/kvm/kvm_main.c  |  5 +++++
 debian/binary-custom.d/xen/src/arch/x86/kvm/x86.c   |  9 +++++++++
 .../xen/src/include/linux/kvm_host.h                |  2 ++
 debian/binary-custom.d/xen/src/virt/kvm/kvm_main.c  |  5 +++++
 include/linux/kvm_host.h                            |  2 ++
 virt/kvm/kvm_main.c                                 |  5 +++++
 9 files changed, 48 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2085040..f036054 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1582,6 +1582,9 @@ long kvm_arch_vm_ioctl(struct file *filp,
 		break;
 	}
 	case KVM_CREATE_IRQCHIP:
+		r = -EINVAL;
+		if (atomic_read(&kvm->online_vcpus))
+			goto out;
 		r = -ENOMEM;
 		kvm->arch.vpic = kvm_create_pic(kvm);
 		if (kvm->arch.vpic) {
@@ -3350,6 +3353,11 @@ void kvm_arch_check_processor_compat(void *rtn)
 	kvm_x86_ops->check_processor_compatibility(rtn);
 }
 
+bool kvm_vcpu_compatible(struct kvm_vcpu *vcpu)
+{
+	return irqchip_in_kernel(vcpu->kvm) == (vcpu->arch.apic != NULL);
+}
+
 int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
 {
 	struct page *page;
@@ -3435,6 +3443,7 @@ static void kvm_free_vcpus(struct kvm *kvm)
 		}
 	}
 
+	atomic_set(&kvm->online_vcpus, 0);
 }
 
 void kvm_arch_destroy_vm(struct kvm *kvm)
diff --git a/debian/binary-custom.d/openvz/src/arch/x86/kvm/x86.c b/debian/binary-custom.d/openvz/src/arch/x86/kvm/x86.c
index 2085040..f036054 100644
--- a/debian/binary-custom.d/openvz/src/arch/x86/kvm/x86.c
+++ b/debian/binary-custom.d/openvz/src/arch/x86/kvm/x86.c
@@ -1582,6 +1582,9 @@ long kvm_arch_vm_ioctl(struct file *filp,
 		break;
 	}
 	case KVM_CREATE_IRQCHIP:
+		r = -EINVAL;
+		if (atomic_read(&kvm->online_vcpus))
+			goto out;
 		r = -ENOMEM;
 		kvm->arch.vpic = kvm_create_pic(kvm);
 		if (kvm->arch.vpic) {
@@ -3350,6 +3353,11 @@ void kvm_arch_check_processor_compat(void *rtn)
 	kvm_x86_ops->check_processor_compatibility(rtn);
 }
 
+bool kvm_vcpu_compatible(struct kvm_vcpu *vcpu)
+{
+	return irqchip_in_kernel(vcpu->kvm) == (vcpu->arch.apic != NULL);
+}
+
 int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
 {
 	struct page *page;
@@ -3435,6 +3443,7 @@ static void kvm_free_vcpus(struct kvm *kvm)
 		}
 	}
 
+	atomic_set(&kvm->online_vcpus, 0);
 }
 
 void kvm_arch_destroy_vm(struct kvm *kvm)
diff --git a/debian/binary-custom.d/openvz/src/include/linux/kvm_host.h b/debian/binary-custom.d/openvz/src/include/linux/kvm_host.h
index 958e003..01055ae 100644
--- a/debian/binary-custom.d/openvz/src/include/linux/kvm_host.h
+++ b/debian/binary-custom.d/openvz/src/include/linux/kvm_host.h
@@ -121,6 +121,7 @@ struct kvm {
 	struct kvm_memory_slot memslots[KVM_MEMORY_SLOTS +
 					KVM_PRIVATE_MEM_SLOTS];
 	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
+	atomic_t online_vcpus;
 	struct list_head vm_list;
 	struct file *filp;
 	struct kvm_io_bus mmio_bus;
@@ -306,5 +307,6 @@ struct kvm_stats_debugfs_item {
 	struct dentry *dentry;
 };
 extern struct kvm_stats_debugfs_item debugfs_entries[];
+bool kvm_vcpu_compatible(struct kvm_vcpu *vcpu);
 
 #endif
diff --git a/debian/binary-custom.d/openvz/src/virt/kvm/kvm_main.c b/debian/binary-custom.d/openvz/src/virt/kvm/kvm_main.c
index 240156e..b128dcc 100644
--- a/debian/binary-custom.d/openvz/src/virt/kvm/kvm_main.c
+++ b/debian/binary-custom.d/openvz/src/virt/kvm/kvm_main.c
@@ -798,12 +798,17 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, int n)
 		goto vcpu_destroy;
 
 	mutex_lock(&kvm->lock);
+	if (!kvm_vcpu_compatible(vcpu)) {
+		r = -EINVAL;
+		goto vcpu_destroy;
+	}
 	if (kvm->vcpus[n]) {
 		r = -EEXIST;
 		mutex_unlock(&kvm->lock);
 		goto vcpu_destroy;
 	}
 	kvm->vcpus[n] = vcpu;
+	atomic_inc(&kvm->online_vcpus);
 	mutex_unlock(&kvm->lock);
 
 	/* Now it's all set up, let userspace reach it */
diff --git a/debian/binary-custom.d/xen/src/arch/x86/kvm/x86.c b/debian/binary-custom.d/xen/src/arch/x86/kvm/x86.c
index 2085040..f036054 100644
--- a/debian/binary-custom.d/xen/src/arch/x86/kvm/x86.c
+++ b/debian/binary-custom.d/xen/src/arch/x86/kvm/x86.c
@@ -1582,6 +1582,9 @@ long kvm_arch_vm_ioctl(struct file *filp,
 		break;
 	}
 	case KVM_CREATE_IRQCHIP:
+		r = -EINVAL;
+		if (atomic_read(&kvm->online_vcpus))
+			goto out;
 		r = -ENOMEM;
 		kvm->arch.vpic = kvm_create_pic(kvm);
 		if (kvm->arch.vpic) {
@@ -3350,6 +3353,11 @@ void kvm_arch_check_processor_compat(void *rtn)
 	kvm_x86_ops->check_processor_compatibility(rtn);
 }
 
+bool kvm_vcpu_compatible(struct kvm_vcpu *vcpu)
+{
+	return irqchip_in_kernel(vcpu->kvm) == (vcpu->arch.apic != NULL);
+}
+
 int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
 {
 	struct page *page;
@@ -3435,6 +3443,7 @@ static void kvm_free_vcpus(struct kvm *kvm)
 		}
 	}
 
+	atomic_set(&kvm->online_vcpus, 0);
 }
 
 void kvm_arch_destroy_vm(struct kvm *kvm)
diff --git a/debian/binary-custom.d/xen/src/include/linux/kvm_host.h b/debian/binary-custom.d/xen/src/include/linux/kvm_host.h
index 958e003..01055ae 100644
--- a/debian/binary-custom.d/xen/src/include/linux/kvm_host.h
+++ b/debian/binary-custom.d/xen/src/include/linux/kvm_host.h
@@ -121,6 +121,7 @@ struct kvm {
 	struct kvm_memory_slot memslots[KVM_MEMORY_SLOTS +
 					KVM_PRIVATE_MEM_SLOTS];
 	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
+	atomic_t online_vcpus;
 	struct list_head vm_list;
 	struct file *filp;
 	struct kvm_io_bus mmio_bus;
@@ -306,5 +307,6 @@ struct kvm_stats_debugfs_item {
 	struct dentry *dentry;
 };
 extern struct kvm_stats_debugfs_item debugfs_entries[];
+bool kvm_vcpu_compatible(struct kvm_vcpu *vcpu);
 
 #endif
diff --git a/debian/binary-custom.d/xen/src/virt/kvm/kvm_main.c b/debian/binary-custom.d/xen/src/virt/kvm/kvm_main.c
index 240156e..b128dcc 100644
--- a/debian/binary-custom.d/xen/src/virt/kvm/kvm_main.c
+++ b/debian/binary-custom.d/xen/src/virt/kvm/kvm_main.c
@@ -798,12 +798,17 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, int n)
 		goto vcpu_destroy;
 
 	mutex_lock(&kvm->lock);
+	if (!kvm_vcpu_compatible(vcpu)) {
+		r = -EINVAL;
+		goto vcpu_destroy;
+	}
 	if (kvm->vcpus[n]) {
 		r = -EEXIST;
 		mutex_unlock(&kvm->lock);
 		goto vcpu_destroy;
 	}
 	kvm->vcpus[n] = vcpu;
+	atomic_inc(&kvm->online_vcpus);
 	mutex_unlock(&kvm->lock);
 
 	/* Now it's all set up, let userspace reach it */
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 958e003..01055ae 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -121,6 +121,7 @@ struct kvm {
 	struct kvm_memory_slot memslots[KVM_MEMORY_SLOTS +
 					KVM_PRIVATE_MEM_SLOTS];
 	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
+	atomic_t online_vcpus;
 	struct list_head vm_list;
 	struct file *filp;
 	struct kvm_io_bus mmio_bus;
@@ -306,5 +307,6 @@ struct kvm_stats_debugfs_item {
 	struct dentry *dentry;
 };
 extern struct kvm_stats_debugfs_item debugfs_entries[];
+bool kvm_vcpu_compatible(struct kvm_vcpu *vcpu);
 
 #endif
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 240156e..b128dcc 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -798,12 +798,17 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, int n)
 		goto vcpu_destroy;
 
 	mutex_lock(&kvm->lock);
+	if (!kvm_vcpu_compatible(vcpu)) {
+		r = -EINVAL;
+		goto vcpu_destroy;
+	}
 	if (kvm->vcpus[n]) {
 		r = -EEXIST;
 		mutex_unlock(&kvm->lock);
 		goto vcpu_destroy;
 	}
 	kvm->vcpus[n] = vcpu;
+	atomic_inc(&kvm->online_vcpus);
 	mutex_unlock(&kvm->lock);
 
 	/* Now it's all set up, let userspace reach it */
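
With both halves of the fix applied, KVM_CREATE_IRQCHIP is rejected with -EINVAL
once any vcpu has been created, and KVM_CREATE_VCPU is rejected with -EINVAL when
the new vcpu's apic state would disagree with irqchip_in_kernel(). For
illustration only (not part of the patch), here is a minimal userspace sketch of
the ordering this guards against; error handling is abbreviated and the observed
result depends on whether the running kernel carries the fix:

/*
 * Hypothetical reproducer sketch: create a vcpu first, then ask for an
 * in-kernel irqchip.  On a patched kernel the second ioctl is expected to
 * fail with EINVAL instead of leaving vcpu->arch.apic == NULL while
 * irqchip_in_kernel() reports true.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm  = open("/dev/kvm", O_RDWR);        /* KVM device node   */
	int vm   = ioctl(kvm, KVM_CREATE_VM, 0);    /* VM file descriptor */
	int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);   /* vcpu 0, no irqchip yet */

	/* With the fix, this must be rejected: a vcpu already exists. */
	if (ioctl(vm, KVM_CREATE_IRQCHIP, 0) < 0)
		printf("KVM_CREATE_IRQCHIP rejected: %s (expected EINVAL)\n",
		       strerror(errno));
	else
		printf("KVM_CREATE_IRQCHIP succeeded (unpatched behaviour)\n");

	return kvm < 0 || vm < 0 || vcpu < 0;
}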