Patchwork KVM: fix backport of 3e51570 on hardy

Submitter Herton Ronaldo Krzesinski
Date July 11, 2012, 4:09 p.m.
Message ID <1342022973-6783-1-git-send-email-herton.krzesinski@canonical.com>
Permalink /patch/170488/
State New

Comments

Herton Ronaldo Krzesinski - July 11, 2012, 4:09 p.m.
CVE-2012-1601

BugLink: http://bugs.launchpad.net/bugs/971685

John Johansen reported that our backport of 3e51570 ("KVM: Ensure all
vcpus are consistent with in-kernel irqchip settings") has a bug, and
suggested possible fixes. We increment kvm->online_vcpus but do not
decrement it when create_vcpu_fd fails, which can leave the counter out
of sync if the VM is not destroyed afterwards. This is not a problem in
the upstream change, where the increment is done only after
create_vcpu_fd is called. The solution chosen here is to simply
decrement the counter on the failure path.

Reported-by: John Johansen <john.johansen@canonical.com>
Signed-off-by: Herton Ronaldo Krzesinski <herton.krzesinski@canonical.com>
---
 virt/kvm/kvm_main.c |    1 +
 1 file changed, 1 insertion(+)
Brad Figg - July 11, 2012, 4:37 p.m.
On 07/11/2012 09:09 AM, Herton Ronaldo Krzesinski wrote:
> [...]
Tim Gardner - July 11, 2012, 4:53 p.m.
On 07/11/2012 10:09 AM, Herton Ronaldo Krzesinski wrote:
> [...]

Isn't this needed in the other custom binary files?

./virt/kvm/kvm_main.c
./debian/binary-custom.d/xen/src/virt/kvm/kvm_main.c
./debian/binary-custom.d/openvz/src/virt/kvm/kvm_main.c

rtg
Herton Ronaldo Krzesinski - July 11, 2012, 5:28 p.m.
On Wed, Jul 11, 2012 at 01:09:33PM -0300, Herton Ronaldo Krzesinski wrote:
> [...]
> Reported-by: John Johansen <john.johansen@canonical.com>

Sorry, the attribution here was wrong; the issue was spotted by Sasha Levin:

Reported-by: Sasha Levin <sasha.levin@oracle.com>

> [...]
Stefan Bader - July 12, 2012, 8:55 a.m.
Looking at the previous change and the fix, it looks good. I'll try to fix up
the attribution and custom binary parts, though I doubt the Xen part is really
necessary.
Stefan Bader - July 12, 2012, 9:05 a.m.
... and pushed to hardy master-next

Patch

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d9a8ae0..61c18ba 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -823,6 +823,7 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, int n)
 unlink:
 	mutex_lock(&kvm->lock);
 	kvm->vcpus[n] = NULL;
+	atomic_dec(&kvm->online_vcpus);
 vcpu_destroy:
 	mutex_unlock(&kvm->lock);
 	kvm_arch_vcpu_destroy(vcpu);