Patchwork [v2,4/5] exec.c: refactor cpu_physical_memory_map

Submitter Stefano Stabellini
Date May 19, 2011, 5:35 p.m.
Message ID <1305826546-19059-4-git-send-email-stefano.stabellini@eu.citrix.com>
Download mbox | patch
Permalink /patch/96439/
State New
Headers show

Comments

Stefano Stabellini - May 19, 2011, 5:35 p.m.
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Introduce qemu_ram_ptr_length, which takes an address and a size as
parameters, rather than just an address.

Refactor cpu_physical_memory_map so that we call qemu_ram_ptr_length
only once, rather than calling qemu_get_ram_ptr once per page.
This is not only more efficient but also simplifies the logic of
the function.
We already rely on the fact that all the pages are mapped
contiguously in qemu's address space: we check that the virtual
address returned by qemu_get_ram_ptr from the second call on is
consecutive with the previous one. This change makes that assumption
explicit by replacing all the calls to qemu_get_ram_ptr with a single
call to qemu_ram_ptr_length, passing a size argument.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
CC: agraf@suse.de
CC: anthony@codemonkey.ws
---
 cpu-common.h |    1 +
 exec.c       |   51 ++++++++++++++++++++++++++++++++++-----------------
 2 files changed, 35 insertions(+), 17 deletions(-)
Jan Kiszka - July 11, 2011, 10:17 p.m.
On 2011-05-19 19:35, stefano.stabellini@eu.citrix.com wrote:
> From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> [...]

This breaks cpu_physical_memory_map for >4G addresses on PC.
Effectively, it doesn't account for the PCI gap, i.e. that the RAM block
is actually mapped into guest physical memory in two chunks. One
outcome is that QEMU aborts when we try to process an address that is
now "outside" RAM. This is simple to reproduce with a virtio NIC and 5G
of guest memory, even without KVM.

Please fix or revert.

Thanks,
Jan
Alexander Graf - July 12, 2011, 6:14 a.m.
On 12.07.2011, at 00:17, Jan Kiszka wrote:

> On 2011-05-19 19:35, stefano.stabellini@eu.citrix.com wrote:
>> [...]
> 
> This breaks cpu_physical_memory_map for >4G addresses on PC.
> Effectively, it doesn't account for the PCI gap, ie. that the RAM block
> is actually mapped in two chunks into the guest physical memory. One
> outcome is that QEMU aborts when we try to process an address that is
> now "outside" RAM. Simple to reproduce with a virtio NIC and 5G guest
> memory, even without KVM.

Do you have a reliable test case? I can't seem to reproduce the issue; it works just fine for me with -m 10G, a virtio NIC and disk, while polluting all of the guest page cache.


Alex
Alexander Graf - July 12, 2011, 6:21 a.m.
On 12.07.2011, at 00:17, Jan Kiszka wrote:

> On 2011-05-19 19:35, stefano.stabellini@eu.citrix.com wrote:
>> [...]
> 
> This breaks cpu_physical_memory_map for >4G addresses on PC.
> Effectively, it doesn't account for the PCI gap, ie. that the RAM block
> is actually mapped in two chunks into the guest physical memory. One
> outcome is that QEMU aborts when we try to process an address that is
> now "outside" RAM. Simple to reproduce with a virtio NIC and 5G guest
> memory, even without KVM.

Ah, I see what you mean now. It breaks on current HEAD, but not on my last xen-next branch, which already included this patch, so I'd assume it's something different that came in later.


Alex
Alexander Graf - July 12, 2011, 6:28 a.m.
On 12.07.2011, at 00:17, Jan Kiszka wrote:

> On 2011-05-19 19:35, stefano.stabellini@eu.citrix.com wrote:
>> [...]
> 
> This breaks cpu_physical_memory_map for >4G addresses on PC.
> Effectively, it doesn't account for the PCI gap, ie. that the RAM block
> is actually mapped in two chunks into the guest physical memory. One
> outcome is that QEMU aborts when we try to process an address that is
> now "outside" RAM. Simple to reproduce with a virtio NIC and 5G guest
> memory, even without KVM.

Yeah, that's what happens when you read mail too early in the morning :). The xen branch hasn't been pulled yet, so upstream is missing the following patch:

commit f221e5ac378feea71d9857ddaa40f829c511742f
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Mon Jun 27 18:26:06 2011 +0100

    qemu_ram_ptr_length: take ram_addr_t as arguments
    
    qemu_ram_ptr_length should take a ram_addr_t as its argument, rather than
    a target_phys_addr_t, because it does comparisons with RAMBlock addresses.
    
    cpu_physical_memory_map should create a ram_addr_t address to pass to
    qemu_ram_ptr_length from PhysPageDesc phys_offset.
    
    Remove code after abort() in qemu_ram_ptr_length.
    
    Changes in v2:
    
    - handle 0 size in qemu_ram_ptr_length;
    
    - rename addr1 to raddr;
    
    - initialize raddr to ULONG_MAX.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Alexander Graf <agraf@suse.de>


Anthony?

Alex
Jan Kiszka - July 12, 2011, 7:15 a.m.
On 2011-07-12 08:28, Alexander Graf wrote:
> 
> On 12.07.2011, at 00:17, Jan Kiszka wrote:
> 
>> [...]
> 
> Yeah, that's what happens when you read mails too early in the morning :). The xen branch didn't get pulled yet, so upstream is missing the following patch:
> 
> commit f221e5ac378feea71d9857ddaa40f829c511742f
> Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Date:   Mon Jun 27 18:26:06 2011 +0100
> 
>     qemu_ram_ptr_length: take ram_addr_t as arguments
>     [...]

Maybe the subject or changelog could reflect what this all fixes?

> 
> Anthony?

Am I the only one under the impression that too many patches are in
limbo ATM?

Jan
Paolo Bonzini - July 12, 2011, 7:18 a.m.
On 07/12/2011 09:15 AM, Jan Kiszka wrote:
> Am I the only one under the impression that too many patches are in
> limbo ATM?

No. :)

Paolo
Markus Armbruster - July 12, 2011, 7:48 a.m.
Paolo Bonzini <pbonzini@redhat.com> writes:

> On 07/12/2011 09:15 AM, Jan Kiszka wrote:
>> Am I the only one under the impression that too many patches are in
>> limbo ATM?
>
> No. :)

There's another place for patches to go?
Liu Yu-B13201 - July 22, 2011, 5:42 a.m.
> -----Original Message-----
> From: qemu-devel-bounces+yu.liu=freescale.com@nongnu.org
> [mailto:qemu-devel-bounces+yu.liu=freescale.com@nongnu.org]
> On Behalf Of stefano.stabellini@eu.citrix.com
> Sent: Friday, May 20, 2011 1:36 AM
> To: qemu-devel@nongnu.org
> Cc: xen-devel@lists.xensource.com; agraf@suse.de; Stefano Stabellini
> Subject: [Qemu-devel] [PATCH v2 4/5] exec.c: refactor cpu_physical_memory_map
> 
> [...]
> 
> -    unsigned long addr1;
> +    target_phys_addr_t addr1 = addr;
> 
> [...]
> 
> -    *plen = done;
> -    return ret;
> +    *plen = todo;
> +    return qemu_ram_ptr_length(addr1, plen);
> 
> [...]

Hello Stefano,

This commit breaks the case where guest memory doesn't start at 0.

The previous code,
	addr1 = (pd & TARGET_PAGE_MASK) + (addr & ~TARGET_PAGE_MASK);
translated the guest physical address into a QEMU ram_addr, so the RAM range check passed.

The current code,
	addr1 = addr
fails that range check.


Thanks,
Yu
Alexander Graf - July 22, 2011, 5:59 a.m.
On 22.07.2011, at 07:42, Liu Yu-B13201 wrote:

> [...]
> 
> Hello Stefano,
> 
> This commit breaks the case where guest memory doesn't start at 0.
> 
> [...]
> 
> The current code,
> 	addr1 = addr
> fails that range check.

Are you sure it's still broken with commit 8ab934f93b5ad3d0af4ad419d2531235a75d672c? If so, would you mind pinpointing where exactly?


Alex
Liu Yu-B13201 - July 22, 2011, 6:14 a.m.
> -----Original Message-----
> From: Alexander Graf [mailto:agraf@suse.de]
> Sent: Friday, July 22, 2011 2:00 PM
> To: Liu Yu-B13201
> Cc: stefano.stabellini@eu.citrix.com; qemu-devel@nongnu.org; xen-devel@lists.xensource.com; Yoder Stuart-B08248
> Subject: Re: [Qemu-devel] [PATCH v2 4/5] exec.c: refactor cpu_physical_memory_map
> 
> [...]
> 
> Are you sure it's still broken with commit 8ab934f93b5ad3d0af4ad419d2531235a75d672c? If so, mind to pinpoint where exactly?

Ah, I missed that one.
Yes, it works with commit 8ab934f93b5ad3d0af4ad419d2531235a75d672c.
Sorry, please ignore this.


Thanks,
Yu

Patch

diff --git a/cpu-common.h b/cpu-common.h
index 151c32c..085aacb 100644
--- a/cpu-common.h
+++ b/cpu-common.h
@@ -64,6 +64,7 @@  void qemu_ram_free(ram_addr_t addr);
 void qemu_ram_remap(ram_addr_t addr, ram_addr_t length);
 /* This should only be used for ram local to a device.  */
 void *qemu_get_ram_ptr(ram_addr_t addr);
+void *qemu_ram_ptr_length(target_phys_addr_t addr, target_phys_addr_t *size);
 /* Same but slower, to use for migration, where the order of
  * RAMBlocks must not change. */
 void *qemu_safe_ram_ptr(ram_addr_t addr);
diff --git a/exec.c b/exec.c
index 21f21f0..ff9c174 100644
--- a/exec.c
+++ b/exec.c
@@ -3111,6 +3111,31 @@  void *qemu_safe_ram_ptr(ram_addr_t addr)
     return NULL;
 }
 
+/* Return a host pointer to guest's ram. Similar to qemu_get_ram_ptr
+ * but takes a size argument */
+void *qemu_ram_ptr_length(target_phys_addr_t addr, target_phys_addr_t *size)
+{
+    if (xen_mapcache_enabled())
+        return qemu_map_cache(addr, *size, 1);
+    else {
+        RAMBlock *block;
+
+        QLIST_FOREACH(block, &ram_list.blocks, next) {
+            if (addr - block->offset < block->length) {
+                if (addr - block->offset + *size > block->length)
+                    *size = block->length - addr + block->offset;
+                return block->host + (addr - block->offset);
+            }
+        }
+
+        fprintf(stderr, "Bad ram offset %" PRIx64 "\n", (uint64_t)addr);
+        abort();
+
+        *size = 0;
+        return NULL;
+    }
+}
+
 void qemu_put_ram_ptr(void *addr)
 {
     trace_qemu_put_ram_ptr(addr);
@@ -3972,14 +3997,12 @@  void *cpu_physical_memory_map(target_phys_addr_t addr,
                               int is_write)
 {
     target_phys_addr_t len = *plen;
-    target_phys_addr_t done = 0;
+    target_phys_addr_t todo = 0;
     int l;
-    uint8_t *ret = NULL;
-    uint8_t *ptr;
     target_phys_addr_t page;
     unsigned long pd;
     PhysPageDesc *p;
-    unsigned long addr1;
+    target_phys_addr_t addr1 = addr;
 
     while (len > 0) {
         page = addr & TARGET_PAGE_MASK;
@@ -3994,7 +4017,7 @@  void *cpu_physical_memory_map(target_phys_addr_t addr,
         }
 
         if ((pd & ~TARGET_PAGE_MASK) != IO_MEM_RAM) {
-            if (done || bounce.buffer) {
+            if (todo || bounce.buffer) {
                 break;
             }
             bounce.buffer = qemu_memalign(TARGET_PAGE_SIZE, TARGET_PAGE_SIZE);
@@ -4003,23 +4026,17 @@  void *cpu_physical_memory_map(target_phys_addr_t addr,
             if (!is_write) {
                 cpu_physical_memory_read(addr, bounce.buffer, l);
             }
-            ptr = bounce.buffer;
-        } else {
-            addr1 = (pd & TARGET_PAGE_MASK) + (addr & ~TARGET_PAGE_MASK);
-            ptr = qemu_get_ram_ptr(addr1);
-        }
-        if (!done) {
-            ret = ptr;
-        } else if (ret + done != ptr) {
-            break;
+
+            *plen = l;
+            return bounce.buffer;
         }
 
         len -= l;
         addr += l;
-        done += l;
+        todo += l;
     }
-    *plen = done;
-    return ret;
+    *plen = todo;
+    return qemu_ram_ptr_length(addr1, plen);
 }
 
 /* Unmaps a memory region previously mapped by cpu_physical_memory_map().