
[Precise,CVE-2014-8369] kvm: fix excessive pages un-pinning in kvm_iommu_map error path.

Message ID 1417519612-25841-1-git-send-email-luis.henriques@canonical.com
State New

Commit Message

Luis Henriques Dec. 2, 2014, 11:26 a.m. UTC
From: Quentin Casasnovas <quentin.casasnovas@oracle.com>

The third parameter of kvm_unpin_pages() when called from
kvm_iommu_map_pages() is wrong, it should be the number of pages to un-pin
and not the page size.

This error was facilitated by an inconsistent API: kvm_pin_pages() takes
a size, but kvm_unpin_pages() takes a number of pages, so fix the problem
by making the two consistent.
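
To illustrate the unit mismatch, here is a minimal user-space sketch
(hypothetical code, not the kernel's: pin_pages()/unpin_pages() are
stand-ins for the kvm helpers, and PAGE_SHIFT is assumed to be 12,
i.e. 4 KiB base pages):

#include <stdio.h>

#define PAGE_SHIFT 12	/* assumed 4 KiB base pages */

/* Takes a size in bytes, like kvm_pin_pages() did before this patch. */
static void pin_pages(unsigned long size)
{
	printf("pin   %lu pages\n", size >> PAGE_SHIFT);
}

/* Takes a page count, like kvm_unpin_pages(). */
static void unpin_pages(unsigned long npages)
{
	printf("unpin %lu pages\n", npages);
}

int main(void)
{
	unsigned long page_size = 2UL << 20;	/* one 2 MiB huge page */

	pin_pages(page_size);			/* pins 512 pages */
	unpin_pages(page_size);			/* buggy call: reports 2097152 pages */
	unpin_pages(page_size >> PAGE_SHIFT);	/* fixed call: 512 pages */
	return 0;
}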

This was introduced by commit 350b8bd ("kvm: iommu: fix the third parameter
of kvm_iommu_put_pages (CVE-2014-3601)"), which fixed the missing
un-pinning of pages that were meant to be un-pinned (i.e. a memory leak)
but unfortunately potentially increased the number of pages we un-pin that
should have stayed pinned. As far as I understand, though, the same
practical mitigations apply.
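
To make the impact concrete, here is a toy user-space simulation of
pinned-page reference counting (hypothetical code, not the kernel's, which
goes through gfn_to_pfn_memslot()/kvm_release_pfn_clean()); passing the
byte size to the un-pin side walks far past the pages that were actually
pinned:

#include <assert.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define NR_HOST_PAGES	4096		/* toy pool of host pages */

static long refcount[NR_HOST_PAGES];

static void pin(unsigned long pfn, unsigned long npages)
{
	for (unsigned long i = 0; i < npages; i++)
		refcount[pfn + i]++;
}

static void unpin(unsigned long pfn, unsigned long npages)
{
	for (unsigned long i = 0; i < npages; i++) {
		/* Fails once i reaches 512: these pages were never pinned. */
		assert(pfn + i < NR_HOST_PAGES && refcount[pfn + i] > 0);
		refcount[pfn + i]--;
	}
}

int main(void)
{
	unsigned long page_size = 2UL << 20;	/* 2 MiB mapping */

	pin(0, page_size >> PAGE_SHIFT);	/* 512 pages pinned */
	unpin(0, page_size);			/* pre-fix error path: 2097152 iterations */
	printf("never reached\n");
	return 0;
}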

This issue was found during review of Red Hat 6.6 patches to prepare
Ksplice rebootless updates.

Thanks to Vegard for his time on a late Friday evening to help me in
understanding this code.

Fixes: 350b8bd ("kvm: iommu: fix the third parameter of... (CVE-2014-3601)")
Cc: stable@vger.kernel.org
Signed-off-by: Quentin Casasnovas <quentin.casasnovas@oracle.com>
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
Signed-off-by: Jamie Iles <jamie.iles@oracle.com>
Reviewed-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(backported from commit 3d32e4dbe71374a6780eaf51d719d76f9a9bf22f)
CVE-2014-8369
BugLink: http://bugs.launchpad.net/bugs/1386395
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
---
 virt/kvm/iommu.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

Comments

Seth Forshee Dec. 2, 2014, 3:41 p.m. UTC | #1

Brad Figg Dec. 2, 2014, 4:55 p.m. UTC | #2

Brad Figg Dec. 2, 2014, 5:26 p.m. UTC | #3

Patch

diff --git a/virt/kvm/iommu.c b/virt/kvm/iommu.c
index c946700927fa..e32c93ca6446 100644
--- a/virt/kvm/iommu.c
+++ b/virt/kvm/iommu.c
@@ -43,13 +43,13 @@  static void kvm_iommu_put_pages(struct kvm *kvm,
 				gfn_t base_gfn, unsigned long npages);
 
 static pfn_t kvm_pin_pages(struct kvm *kvm, struct kvm_memory_slot *slot,
-			   gfn_t gfn, unsigned long size)
+			   gfn_t gfn, unsigned long npages)
 {
 	gfn_t end_gfn;
 	pfn_t pfn;
 
 	pfn     = gfn_to_pfn_memslot(kvm, slot, gfn);
-	end_gfn = gfn + (size >> PAGE_SHIFT);
+	end_gfn = gfn + npages;
 	gfn    += 1;
 
 	if (is_error_pfn(pfn))
@@ -117,7 +117,7 @@  int kvm_iommu_map_pages(struct kvm *kvm, struct kvm_memory_slot *slot)
 		 * Pin all pages we are about to map in memory. This is
 		 * important because we unmap and unpin in 4kb steps later.
 		 */
-		pfn = kvm_pin_pages(kvm, slot, gfn, page_size);
+		pfn = kvm_pin_pages(kvm, slot, gfn, page_size >> PAGE_SHIFT);
 		if (is_error_pfn(pfn)) {
 			gfn += 1;
 			continue;
@@ -129,7 +129,7 @@  int kvm_iommu_map_pages(struct kvm *kvm, struct kvm_memory_slot *slot)
 		if (r) {
 			printk(KERN_ERR "kvm_iommu_map_address:"
 			       "iommu failed to map pfn=%llx\n", pfn);
-			kvm_unpin_pages(kvm, pfn, page_size);
+			kvm_unpin_pages(kvm, pfn, page_size >> PAGE_SHIFT);
 			goto unmap_pages;
 		}