[RFC,V4,10/14] Introduce qemu_ram_ptr_unlock.

Message ID: 1285686097-13036-11-git-send-email-anthony.perard@citrix.com
State: New

Commit Message

Anthony PERARD Sept. 28, 2010, 3:01 p.m. UTC
From: Anthony PERARD <anthony.perard@citrix.com>

This function allows unlocking a ram_ptr given by qemu_get_ram_ptr. After
a call to qemu_ram_ptr_unlock, the pointer may be unmapped from QEMU when
used with Xen.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 cpu-common.h   |    1 +
 exec.c         |   32 +++++++++++++++++++++++++++++---
 xen-mapcache.c |   34 ++++++++++++++++++++++++++++++++++
 3 files changed, 64 insertions(+), 3 deletions(-)
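
For illustration, a minimal sketch of the pairing this patch introduces
(the helper and its device context are hypothetical; qemu_get_ram_ptr,
ldl_p and qemu_ram_ptr_unlock are the real names from the tree and this
patch):

/* Hypothetical helper showing the intended pairing: every
 * qemu_get_ram_ptr() is balanced by a qemu_ram_ptr_unlock(), so that
 * under Xen the mapcache may unmap the page once the access is done.
 * Under TCG/KVM the unlock is a no-op. */
static uint32_t read_guest_word(ram_addr_t ram_addr)
{
    void *vaddr = qemu_get_ram_ptr(ram_addr); /* map (and lock, on Xen) */
    uint32_t val = ldl_p(vaddr);              /* access while mapped */
    qemu_ram_ptr_unlock(vaddr);               /* Xen: unlock; else no-op */
    return val;
}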

Comments

Anthony Liguori Sept. 28, 2010, 4:01 p.m. UTC | #1
On 09/28/2010 10:25 AM, Stefano Stabellini wrote:
> On Tue, 28 Sep 2010, Anthony Liguori wrote:
>    
>> On 09/28/2010 10:01 AM, anthony.perard@citrix.com wrote:
>>      
>>> From: Anthony PERARD<anthony.perard@citrix.com>
>>>
>>> This function allows unlocking a ram_ptr given by qemu_get_ram_ptr. After
>>> a call to qemu_ram_ptr_unlock, the pointer may be unmapped from QEMU when
>>> used with Xen.
>>>
>>> Signed-off-by: Anthony PERARD<anthony.perard@citrix.com>
>>>
>>>        
>> Why isn't hooking cpu_physical_memory_{map,unmap}() enough?  That's
>> really the intention of the API.
>>
>> You only really care about guest RAM, not device memory, correct?
>>      
> Yes, however at the moment all the callers of qemu_get_ram_ptr expect
> the mapping in the QEMU address space to remain valid for an unlimited
> amount of time.
>    

Yes, but qemu_get_ram_ptr() is not a general purpose API.  It really 
should only have one use--within exec.c to implement 
cpu_physical_memory_* functions.  There are a few uses in hw/* but 
they're all wrong and should be removed.  Fortunately, for the purposes 
of the Xen machine, almost none of them actually matter.

What I'm thinking is that RAM in Xen should not be backed at all from a 
RAMBlock.  Instead, cpu_physical_memory_* functions should call an 
explicit map/unmap() function that can be implemented as 
qemu_get_ram_ptr() and a nop in the TCG/KVM case and as explicit map 
cache operations in the Xen case.
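
Schematically, that split might look like this (a sketch under assumed
names: the wrappers and the CONFIG_XEN guard are hypothetical, and
qemu_map_cache's lock argument is inferred from the truncated prototype
in the diff below):

/* Hypothetical explicit map/unmap pair: a plain pointer fetch plus a
 * no-op for TCG/KVM, real mapcache operations for Xen. */
static inline void *guest_ram_map(ram_addr_t addr)
{
#ifdef CONFIG_XEN
    return qemu_map_cache(addr, 0, 1); /* on-demand, locked mapping */
#else
    return qemu_get_ram_ptr(addr);     /* RAM is permanently mapped */
#endif
}

static inline void guest_ram_unmap(void *ptr)
{
#ifdef CONFIG_XEN
    qemu_map_cache_unlock(ptr);        /* mapcache may recycle the entry */
#endif
    /* TCG/KVM: nothing to do */
}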

Regards,

Anthony Liguori

> While we can do that because the mapcache now allows "locking" a
> mapping, it would be nice if an explicit qemu_ram_ptr_unlock were
> provided. It is not required for Xen support though.
>
>
>
Stefano Stabellini Sept. 28, 2010, 6:04 p.m. UTC | #2
On Tue, 28 Sep 2010, Anthony Liguori wrote:
> On 09/28/2010 10:25 AM, Stefano Stabellini wrote:
> > On Tue, 28 Sep 2010, Anthony Liguori wrote:
> >    
> >> On 09/28/2010 10:01 AM, anthony.perard@citrix.com wrote:
> >>      
> >>> From: Anthony PERARD<anthony.perard@citrix.com>
> >>>
> >>> This function allows unlocking a ram_ptr given by qemu_get_ram_ptr. After
> >>> a call to qemu_ram_ptr_unlock, the pointer may be unmapped from QEMU when
> >>> used with Xen.
> >>>
> >>> Signed-off-by: Anthony PERARD<anthony.perard@citrix.com>
> >>>
> >>>        
> >> Why isn't hooking cpu_physical_memory_{map,unmap}() enough?  That's
> >> really the intention of the API.
> >>
> >> You only really care about guest RAM, not device memory, correct?
> >>      
> > Yes, however at the moment all the callers of qemu_get_ram_ptr expect
> > the mapping in the QEMU address space to remain valid for an unlimited
> > amount of time.
> >    
> 
> Yes, but qemu_get_ram_ptr() is not a general purpose API.  It really 
> should only have one use--within exec.c to implement 
> cpu_physical_memory_* functions.  There are a few uses in hw/* but 
> they're all wrong and should be removed.  Fortunately, for the purposes 
> of the Xen machine, almost none of them actually matter.
> 

If this is the case, it is even better :)
Can we replace the call to qemu_get_ram_ptr with cpu_physical_memory_map
in the vga code?
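
In outline such a conversion would follow the usual bounded map/unmap
pattern (a hypothetical sketch, not the actual vga code;
display_update_from is an invented consumer):

extern void display_update_from(void *fb, target_phys_addr_t len); /* invented */

/* Hypothetical device refresh using the map/unmap API instead of a
 * long-lived qemu_get_ram_ptr() pointer: map a bounded region, use
 * it, then release it so Xen can drop the mapping. */
static void refresh_display(target_phys_addr_t base, target_phys_addr_t size)
{
    target_phys_addr_t len = size;
    void *p = cpu_physical_memory_map(base, &len, 0 /* is_write */);
    if (p) {
        display_update_from(p, len);             /* access while mapped */
        cpu_physical_memory_unmap(p, len, 0, 0); /* nothing written back */
    }
}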


> What I'm thinking is that RAM in Xen should not be backed at all from a 
> RAMBlock.  Instead, cpu_physical_memory_* functions should call an 
> explicit map/unmap() function that can be implemented as 
> qemu_get_ram_ptr() and a nop in the TCG/KVM case and as explicit map 
> cache operations in the Xen case.
> 

Yes, we basically followed a very similar line of thought: in the current
implementation we have just one ramblock as a placeholder for the
guest's RAM; then we have three hooks, in qemu_ram_alloc_from_ptr,
qemu_get_ram_ptr and qemu_ram_free, for Xen-specific ways to allocate,
map and free memory, but we reuse everything else.

Let's take cpu_physical_memory_map as an example: we completely reuse the
generic implementation, which ends up calling either
qemu_get_ram_ptr or cpu_physical_memory_rw.
In the case of qemu_get_ram_ptr, we still reuse the generic code, but we
have a Xen-specific hook that calls the mapcache.
In the case of cpu_physical_memory_rw, we didn't need to change anything
to do the mapping, because it is implemented using qemu_get_ram_ptr (see
above); we just added a call to qemu_ram_ptr_unlock at the end of the
function to unlock the mapping (a no-op for TCG/KVM).
So qemu_get_ram_ptr and qemu_ram_ptr_unlock are basically the explicit
map/unmap() functions you are referring to.
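
The matching map-side hook lives in another patch of this series; its
assumed shape, mirroring the qemu_ram_ptr_unlock hunk below
(qemu_get_ram_ptr_generic is an invented stand-in for the RAMBlock
lookup):

/* Assumed shape of the Xen hook inside qemu_get_ram_ptr(): map on
 * demand through the mapcache, keeping the entry locked until
 * qemu_ram_ptr_unlock() releases it.  Not the verbatim code. */
void *qemu_get_ram_ptr(ram_addr_t addr)
{
    if (xen_mapcache_enabled()) {
        return qemu_map_cache(addr, 0, 1); /* locked mapcache entry */
    }
    return qemu_get_ram_ptr_generic(addr); /* invented stand-in for the
                                              RAMBlock lookup */
}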

We could probably remove the single ramblock we added and provide a
Xen-specific implementation of qemu_ram_alloc_from_ptr and qemu_ram_free
that doesn't iterate over the ramblock list, if you think that is better.
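
In outline (a hypothetical sketch; xen_populate_guest_memory is an
invented stand-in for the hypercall-based allocation):

/* Hypothetical Xen-specific allocation path that bypasses the
 * RAMBlock list entirely: the mapcache, not a RAMBlock, would own the
 * association between guest-physical addresses and QEMU-virtual
 * mappings. */
ram_addr_t xen_ram_alloc(ram_addr_t size)
{
    /* allocate guest pages via Xen; no RAMBlock is created, so
     * qemu_get_ram_ptr()/qemu_ram_free() take Xen-specific branches */
    return xen_populate_guest_memory(size); /* invented helper */
}
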
Gerd Hoffmann Sept. 29, 2010, 7:38 a.m. UTC | #3
On 09/28/10 17:14, Anthony Liguori wrote:
> On 09/28/2010 10:01 AM, anthony.perard@citrix.com wrote:
>> From: Anthony PERARD<anthony.perard@citrix.com>
>>
>> This function allows unlocking a ram_ptr given by qemu_get_ram_ptr. After
>> a call to qemu_ram_ptr_unlock, the pointer may be unmapped from QEMU when
>> used with Xen.
>>
>> Signed-off-by: Anthony PERARD<anthony.perard@citrix.com>
>
> Why isn't hooking cpu_physical_memory_{map,unmap}() enough? That's
> really the intention of the API.

I think quite a lot of stuff stops working then because it doesn't 
properly use the map/unmap API ...

cheers,
   Gerd
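
The failure mode is code like this (hypothetical device, not a specific
one in the tree): a pointer fetched once and cached across calls assumes
guest RAM stays mapped for the process lifetime, which holds for TCG/KVM
but not once Xen's mapcache may recycle mappings.

/* Hypothetical anti-pattern: holding a RAM pointer indefinitely
 * instead of doing a bounded map/unmap around each access. */
static uint8_t *vram_ptr;

static void bad_device_init(ram_addr_t vram_addr)
{
    vram_ptr = qemu_get_ram_ptr(vram_addr); /* held forever after init */
}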

Patch

diff --git a/cpu-common.h b/cpu-common.h
index 0426bc8..378eea8 100644
--- a/cpu-common.h
+++ b/cpu-common.h
@@ -46,6 +46,7 @@  ram_addr_t qemu_ram_alloc(DeviceState *dev, const char *name, ram_addr_t size);
 void qemu_ram_free(ram_addr_t addr);
 /* This should only be used for ram local to a device.  */
 void *qemu_get_ram_ptr(ram_addr_t addr);
+void qemu_ram_ptr_unlock(void *addr);
 /* This should not be used by devices.  */
 ram_addr_t qemu_ram_addr_from_host(void *ptr);
 
diff --git a/exec.c b/exec.c
index 0de9e32..0612ee4 100644
--- a/exec.c
+++ b/exec.c
@@ -2961,6 +2961,13 @@  void *qemu_get_ram_ptr(ram_addr_t addr)
     return NULL;
 }
 
+void qemu_ram_ptr_unlock(void *addr)
+{
+    if (xen_mapcache_enabled()) {
+        qemu_map_cache_unlock(addr);
+    }
+}
+
 /* Some of the softmmu routines need to translate from a host pointer
    (typically a TLB entry) back to a ram offset.  */
 ram_addr_t qemu_ram_addr_from_host(void *ptr)
@@ -3067,6 +3074,7 @@  static void notdirty_mem_writeb(void *opaque, target_phys_addr_t ram_addr,
                                 uint32_t val)
 {
     int dirty_flags;
+    void *vaddr;
     dirty_flags = cpu_physical_memory_get_dirty_flags(ram_addr);
     if (!(dirty_flags & CODE_DIRTY_FLAG)) {
 #if !defined(CONFIG_USER_ONLY)
@@ -3074,19 +3082,22 @@  static void notdirty_mem_writeb(void *opaque, target_phys_addr_t ram_addr,
         dirty_flags = cpu_physical_memory_get_dirty_flags(ram_addr);
 #endif
     }
-    stb_p(qemu_get_ram_ptr(ram_addr), val);
+    vaddr = qemu_get_ram_ptr(ram_addr);
+    stb_p(vaddr, val);
     dirty_flags |= (0xff & ~CODE_DIRTY_FLAG);
     cpu_physical_memory_set_dirty_flags(ram_addr, dirty_flags);
     /* we remove the notdirty callback only if the code has been
        flushed */
     if (dirty_flags == 0xff)
         tlb_set_dirty(cpu_single_env, cpu_single_env->mem_io_vaddr);
+    qemu_ram_ptr_unlock(vaddr);
 }
 
 static void notdirty_mem_writew(void *opaque, target_phys_addr_t ram_addr,
                                 uint32_t val)
 {
     int dirty_flags;
+    void *vaddr;
     dirty_flags = cpu_physical_memory_get_dirty_flags(ram_addr);
     if (!(dirty_flags & CODE_DIRTY_FLAG)) {
 #if !defined(CONFIG_USER_ONLY)
@@ -3094,19 +3105,22 @@  static void notdirty_mem_writew(void *opaque, target_phys_addr_t ram_addr,
         dirty_flags = cpu_physical_memory_get_dirty_flags(ram_addr);
 #endif
     }
-    stw_p(qemu_get_ram_ptr(ram_addr), val);
+    vaddr = qemu_get_ram_ptr(ram_addr);
+    stw_p(vaddr, val);
     dirty_flags |= (0xff & ~CODE_DIRTY_FLAG);
     cpu_physical_memory_set_dirty_flags(ram_addr, dirty_flags);
     /* we remove the notdirty callback only if the code has been
        flushed */
     if (dirty_flags == 0xff)
         tlb_set_dirty(cpu_single_env, cpu_single_env->mem_io_vaddr);
+    qemu_ram_ptr_unlock(vaddr);
 }
 
 static void notdirty_mem_writel(void *opaque, target_phys_addr_t ram_addr,
                                 uint32_t val)
 {
     int dirty_flags;
+    void *vaddr;
     dirty_flags = cpu_physical_memory_get_dirty_flags(ram_addr);
     if (!(dirty_flags & CODE_DIRTY_FLAG)) {
 #if !defined(CONFIG_USER_ONLY)
@@ -3114,13 +3128,15 @@  static void notdirty_mem_writel(void *opaque, target_phys_addr_t ram_addr,
         dirty_flags = cpu_physical_memory_get_dirty_flags(ram_addr);
 #endif
     }
-    stl_p(qemu_get_ram_ptr(ram_addr), val);
+    vaddr = qemu_get_ram_ptr(ram_addr);
+    stl_p(vaddr, val);
     dirty_flags |= (0xff & ~CODE_DIRTY_FLAG);
     cpu_physical_memory_set_dirty_flags(ram_addr, dirty_flags);
     /* we remove the notdirty callback only if the code has been
        flushed */
     if (dirty_flags == 0xff)
         tlb_set_dirty(cpu_single_env, cpu_single_env->mem_io_vaddr);
+    qemu_ram_ptr_unlock(vaddr);
 }
 
 static CPUReadMemoryFunc * const error_mem_read[3] = {
@@ -3540,6 +3556,7 @@  void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
                     cpu_physical_memory_set_dirty_flags(
                         addr1, (0xff & ~CODE_DIRTY_FLAG));
                 }
+                qemu_ram_ptr_unlock(ptr);
             }
         } else {
             if ((pd & ~TARGET_PAGE_MASK) > IO_MEM_ROM &&
@@ -3570,6 +3587,7 @@  void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
                 ptr = qemu_get_ram_ptr(pd & TARGET_PAGE_MASK) +
                     (addr & ~TARGET_PAGE_MASK);
                 memcpy(buf, ptr, l);
+                qemu_ram_ptr_unlock(ptr);
             }
         }
         len -= l;
@@ -3610,6 +3628,7 @@  void cpu_physical_memory_write_rom(target_phys_addr_t addr,
             /* ROM/RAM case */
             ptr = qemu_get_ram_ptr(addr1);
             memcpy(ptr, buf, l);
+            qemu_ram_ptr_unlock(ptr);
         }
         len -= l;
         buf += l;
@@ -3792,6 +3811,7 @@  uint32_t ldl_phys(target_phys_addr_t addr)
         ptr = qemu_get_ram_ptr(pd & TARGET_PAGE_MASK) +
             (addr & ~TARGET_PAGE_MASK);
         val = ldl_p(ptr);
+        qemu_ram_ptr_unlock(ptr);
     }
     return val;
 }
@@ -3830,6 +3850,7 @@  uint64_t ldq_phys(target_phys_addr_t addr)
         ptr = qemu_get_ram_ptr(pd & TARGET_PAGE_MASK) +
             (addr & ~TARGET_PAGE_MASK);
         val = ldq_p(ptr);
+        qemu_ram_ptr_unlock(ptr);
     }
     return val;
 }
@@ -3870,6 +3891,7 @@  uint32_t lduw_phys(target_phys_addr_t addr)
         ptr = qemu_get_ram_ptr(pd & TARGET_PAGE_MASK) +
             (addr & ~TARGET_PAGE_MASK);
         val = lduw_p(ptr);
+        qemu_ram_ptr_unlock(ptr);
     }
     return val;
 }
@@ -3900,6 +3922,7 @@  void stl_phys_notdirty(target_phys_addr_t addr, uint32_t val)
         unsigned long addr1 = (pd & TARGET_PAGE_MASK) + (addr & ~TARGET_PAGE_MASK);
         ptr = qemu_get_ram_ptr(addr1);
         stl_p(ptr, val);
+        qemu_ram_ptr_unlock(ptr);
 
         if (unlikely(in_migration)) {
             if (!cpu_physical_memory_is_dirty(addr1)) {
@@ -3942,6 +3965,7 @@  void stq_phys_notdirty(target_phys_addr_t addr, uint64_t val)
         ptr = qemu_get_ram_ptr(pd & TARGET_PAGE_MASK) +
             (addr & ~TARGET_PAGE_MASK);
         stq_p(ptr, val);
+        qemu_ram_ptr_unlock(ptr);
     }
 }
 
@@ -3971,6 +3995,7 @@  void stl_phys(target_phys_addr_t addr, uint32_t val)
         /* RAM case */
         ptr = qemu_get_ram_ptr(addr1);
         stl_p(ptr, val);
+        qemu_ram_ptr_unlock(ptr);
         if (!cpu_physical_memory_is_dirty(addr1)) {
             /* invalidate code */
             tb_invalidate_phys_page_range(addr1, addr1 + 4, 0);
@@ -4014,6 +4039,7 @@  void stw_phys(target_phys_addr_t addr, uint32_t val)
         /* RAM case */
         ptr = qemu_get_ram_ptr(addr1);
         stw_p(ptr, val);
+        qemu_ram_ptr_unlock(ptr);
         if (!cpu_physical_memory_is_dirty(addr1)) {
             /* invalidate code */
             tb_invalidate_phys_page_range(addr1, addr1 + 2, 0);
diff --git a/xen-mapcache.c b/xen-mapcache.c
index c7e69e6..e407949 100644
--- a/xen-mapcache.c
+++ b/xen-mapcache.c
@@ -187,6 +187,40 @@  uint8_t *qemu_map_cache(target_phys_addr_t phys_addr, target_phys_addr_t size, u
     return mapcache->last_address_vaddr + address_offset;
 }
 
+void qemu_map_cache_unlock(void *buffer)
+{
+    MapCacheEntry *entry = NULL, *pentry = NULL;
+    MapCacheRev *reventry;
+    target_phys_addr_t paddr_index;
+    int found = 0;
+
+    QTAILQ_FOREACH(reventry, &mapcache->locked_entries, next) {
+        if (reventry->vaddr_req == buffer) {
+            paddr_index = reventry->paddr_index;
+            found = 1;
+            break;
+        }
+    }
+    if (!found) {
+        return;
+    }
+    QTAILQ_REMOVE(&mapcache->locked_entries, reventry, next);
+    qemu_free(reventry);
+
+    entry = &mapcache->entry[paddr_index % mapcache->nr_buckets];
+    while (entry && entry->paddr_index != paddr_index) {
+        pentry = entry;
+        entry = entry->next;
+    }
+    if (!entry) {
+        return;
+    }
+    /* Drop one lock reference, guarding against underflow. */
+    if (entry->lock > 0) {
+        entry->lock--;
+    }
+}
+
 ram_addr_t qemu_ram_addr_from_mapcache(void *ptr)
 {
     MapCacheRev *reventry;