
kvm: fix incorrect length in a loop over kvm dirty pages map

Message ID 1353375647-31268-1-git-send-email-aik@ozlabs.ru
State New

Commit Message

Alexey Kardashevskiy Nov. 20, 2012, 1:40 a.m. UTC
QEMU allocates a map large enough for 4K pages. However, the system page size
can be 64K (for example on POWER), so the host kernel uses only a small
part of the map: one bit stores a dirty flag for sixteen 4K pages.
The hpratio variable stores this ratio and
the kvm_get_dirty_pages_log_range function handles it correctly.

However, kvm_get_dirty_pages_log_range still iterates beyond the data
provided by the host kernel, which is not correct. It does not cause
errors at the moment because the whole bitmap is zeroed before the KVM ioctl.

The patch reduces the number of iterations over the map.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
 kvm-all.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Alexander Graf Nov. 20, 2012, 9:06 a.m. UTC | #1
On 20.11.2012, at 02:40, Alexey Kardashevskiy wrote:

> QEMU allocates a map large enough for 4K pages. However, the system page size
> can be 64K (for example on POWER), so the host kernel uses only a small
> part of the map: one bit stores a dirty flag for sixteen 4K pages.
> The hpratio variable stores this ratio and
> the kvm_get_dirty_pages_log_range function handles it correctly.
> 
> However, kvm_get_dirty_pages_log_range still iterates beyond the data
> provided by the host kernel, which is not correct. It does not cause
> errors at the moment because the whole bitmap is zeroed before the KVM ioctl.
> 
> The patch reduces the number of iterations over the map.
> 
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>

While at it, could you please also double-check whether the coalesced mmio code does the right thing? It also uses TARGET_PAGE_SIZE, which looks bogus to me. Since we don't support coalesced mmio (yet), it's not too big of a deal, but it'd be nice to get right.

Thanks, applied to ppc-next.


Alex

> ---
> kvm-all.c |    2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/kvm-all.c b/kvm-all.c
> index b6d0483..c99997f 100644
> --- a/kvm-all.c
> +++ b/kvm-all.c
> @@ -364,7 +364,7 @@ static int kvm_get_dirty_pages_log_range(MemoryRegionSection *section,
>     unsigned int i, j;
>     unsigned long page_number, c;
>     hwaddr addr, addr1;
> -    unsigned int len = ((section->size / TARGET_PAGE_SIZE) + HOST_LONG_BITS - 1) / HOST_LONG_BITS;
> +    unsigned int len = ((section->size / getpagesize()) + HOST_LONG_BITS - 1) / HOST_LONG_BITS;
>     unsigned long hpratio = getpagesize() / TARGET_PAGE_SIZE;
> 
>     /*
> -- 
> 1.7.10.4
> 
>
David Gibson Nov. 21, 2012, 12:51 a.m. UTC | #2
On Tue, Nov 20, 2012 at 10:06:02AM +0100, Alexander Graf wrote:
> 
> On 20.11.2012, at 02:40, Alexey Kardashevskiy wrote:
> 
> > QEMU allocates a map large enough for 4K pages. However, the system page size
> > can be 64K (for example on POWER), so the host kernel uses only a small
> > part of the map: one bit stores a dirty flag for sixteen 4K pages.
> > The hpratio variable stores this ratio and
> > the kvm_get_dirty_pages_log_range function handles it correctly.
> > 
> > However, kvm_get_dirty_pages_log_range still iterates beyond the data
> > provided by the host kernel, which is not correct. It does not cause
> > errors at the moment because the whole bitmap is zeroed before the KVM ioctl.
> > 
> > The patch reduces the number of iterations over the map.
> > 
> > Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> 
> While at it, could you please also double-check whether the
> coalesced mmio code does the right thing? It also uses
> TARGET_PAGE_SIZE, which looks bogus to me. Since we don't support
> coalesced mmio (yet), it's not too big of a deal, but it'd be nice
> to get right.

Hrm.  I'd really prefer to leave that until we do implement coalesced
mmio and so have something to test against.  Otherwise we're likely to
just make whatever's there more subtly wrong than it is now.

> Thanks, applied to ppc-next.

However, there is another change that should definitely go with this
one: the caller of kvm_get_dirty_pages_log_range() has the same error
when calculating the size of the bitmap to allocate.  In this case
it's harmless (it will always overallocate), but we should fix it too.

Patch

diff --git a/kvm-all.c b/kvm-all.c
index b6d0483..c99997f 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -364,7 +364,7 @@  static int kvm_get_dirty_pages_log_range(MemoryRegionSection *section,
     unsigned int i, j;
     unsigned long page_number, c;
     hwaddr addr, addr1;
-    unsigned int len = ((section->size / TARGET_PAGE_SIZE) + HOST_LONG_BITS - 1) / HOST_LONG_BITS;
+    unsigned int len = ((section->size / getpagesize()) + HOST_LONG_BITS - 1) / HOST_LONG_BITS;
     unsigned long hpratio = getpagesize() / TARGET_PAGE_SIZE;
 
     /*