[1/2] memory: Replace has_coalesced_range with add/del flags

Message ID 20190817093237.27967-2-peterx@redhat.com
State New
Series [1/2] memory: Replace has_coalesced_range with add/del flags

Commit Message

Peter Xu Aug. 17, 2019, 9:32 a.m. UTC
The previous has_coalesced_range counter has a problem: it only works
for additions of coalesced mmio ranges, not for deletions.  The reason
is that the has_coalesced_range information can be lost when the
FlatView updates the topology again and the updated region does not
cover the coalesced regions.  When that happens, because
flatrange_equal() does not compare has_coalesced_range, the new
FlatRange will be seen as identical to the old one, and the new
instance (whose has_coalesced_range will be zero) will replace the old
instance (whose has_coalesced_range _could_ be non-zero).
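
[Editor's sketch, for illustration only: a simplified, self-contained
model (not the upstream code; the types and values below are made up)
of the field-by-field comparison that flatrange_equal() performs.  The
coalesced bookkeeping is not part of the comparison, which is exactly
why a freshly rendered instance can silently replace one that has
already delivered coalesced_io_add.]

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the QEMU structures; a model for
 * illustration, not the upstream definitions. */
typedef struct {
    uint64_t start;
    uint64_t size;
} AddrRange;

typedef struct {
    void *mr;                 /* owning MemoryRegion (opaque here) */
    AddrRange addr;
    uint64_t offset_in_region;
    bool romd_mode;
    bool readonly;
    bool nonvolatile;
    int has_coalesced_range;  /* bookkeeping, deliberately NOT compared */
} FlatRange;

/* The equality check only looks at what the range maps, not at whether
 * coalesced_io_add has already been delivered for it. */
static bool flatrange_equal(const FlatRange *a, const FlatRange *b)
{
    return a->mr == b->mr
        && a->addr.start == b->addr.start
        && a->addr.size == b->addr.size
        && a->offset_in_region == b->offset_in_region
        && a->romd_mode == b->romd_mode
        && a->readonly == b->readonly
        && a->nonvolatile == b->nonvolatile;
}

int main(void)
{
    static int dummy_mr;
    FlatRange frold = { &dummy_mr, { 0x1000, 0x1000 }, 0,
                        false, false, false, 1 /* counter already bumped */ };
    FlatRange frnew = frold;
    frnew.has_coalesced_range = 0;   /* freshly rendered range */

    /* Prints "equal": the counter is invisible to the comparison, so
     * the fresh instance replaces the old one and the count is lost. */
    printf("%s\n", flatrange_equal(&frold, &frnew) ? "equal" : "different");
    return 0;
}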

To fix it, we don't cache has_coalesced_range in the FlatRange at all.
Instead we introduce two flags to make sure coalesced_io_{add|del} is
only called once for every FlatRange instance.  This works even when
another FlatRange replaces the current one.
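
[Editor's sketch: the intent of the two guards in isolation, as a
minimal, self-contained model.  Illustrative only; the real functions
also check the region's coalesced list and notify the memory
listeners, as the patch below shows.]

#include <stdbool.h>
#include <stdio.h>

/* Minimal model of the guarded notifications; printf stands in for
 * the listener callbacks. */
typedef struct {
    bool coalesced_mmio_add_done;
    bool coalesced_mmio_del_done;
} FlatRangeModel;

static void flat_range_coalesced_io_add(FlatRangeModel *fr)
{
    if (fr->coalesced_mmio_add_done) {
        return;                 /* already delivered for this instance */
    }
    fr->coalesced_mmio_add_done = true;
    printf("coalesced_io_add delivered\n");
}

static void flat_range_coalesced_io_del(FlatRangeModel *fr)
{
    if (fr->coalesced_mmio_del_done) {
        return;                 /* already delivered for this instance */
    }
    fr->coalesced_mmio_del_done = true;
    printf("coalesced_io_del delivered\n");
}

int main(void)
{
    FlatRangeModel fr = { false, false };

    /* No matter how many times the update paths run for this instance,
     * each event is delivered at most once. */
    flat_range_coalesced_io_add(&fr);
    flat_range_coalesced_io_add(&fr);
    flat_range_coalesced_io_del(&fr);
    flat_range_coalesced_io_del(&fr);
    return 0;
}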

Without this patch, MemoryListener.coalesced_io_del is hardly ever
called, because has_coalesced_range is mostly zero in
flat_range_coalesced_io_del() when topologies change frequently for
the "memory" address space.

Fixes: 3ac7d43a6fbb5d4a3
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 memory.c | 21 ++++++++++++++++-----
 1 file changed, 16 insertions(+), 5 deletions(-)

Comments

Paolo Bonzini Aug. 19, 2019, 2:30 p.m. UTC | #1
On 17/08/19 11:32, Peter Xu wrote:
> The previous has_coalesced_range counter has a problem: it only works
> for additions of coalesced mmio ranges, not for deletions.  The reason
> is that the has_coalesced_range information can be lost when the
> FlatView updates the topology again and the updated region does not
> cover the coalesced regions.  When that happens, because
> flatrange_equal() does not compare has_coalesced_range, the new
> FlatRange will be seen as identical to the old one, and the new
> instance (whose has_coalesced_range will be zero) will replace the old
> instance (whose has_coalesced_range _could_ be non-zero).
> 
> To fix it, we don't cache has_coalesced_range in the FlatRange at all.
> Instead we introduce two flags to make sure coalesced_io_{add|del} is
> only called once for every FlatRange instance.  This works even when
> another FlatRange replaces the current one.

It's still a bit ugly that coalesced_mmio_add_done ends up not being set
on the new (but equal) FlatRange.

Would something like this work too?

diff --git a/memory.c b/memory.c
index edd0c13..fc91f06 100644
--- a/memory.c
+++ b/memory.c
@@ -939,6 +939,7 @@ static void address_space_update_topology_pass(AddressSpace *as,
             /* In both and unchanged (except logging may have changed) */
 
             if (adding) {
+                frnew->has_coalesced_range = frold->has_coalesced_range;
                 MEMORY_LISTENER_UPDATE_REGION(frnew, as, Forward, region_nop);
                 if (frnew->dirty_log_mask & ~frold->dirty_log_mask) {
                     MEMORY_LISTENER_UPDATE_REGION(frnew, as, Forward, log_start,

Thanks,

Paolo

> Without this patch, MemoryListener.coalesced_io_del is hardly ever
> called, because has_coalesced_range is mostly zero in
> flat_range_coalesced_io_del() when topologies change frequently for
> the "memory" address space.
Peter Xu Aug. 20, 2019, 4:52 a.m. UTC | #2
On Mon, Aug 19, 2019 at 04:30:45PM +0200, Paolo Bonzini wrote:
> On 17/08/19 11:32, Peter Xu wrote:
> > The previous has_coalesced_range counter has a problem: it only works
> > for additions of coalesced mmio ranges, not for deletions.  The reason
> > is that the has_coalesced_range information can be lost when the
> > FlatView updates the topology again and the updated region does not
> > cover the coalesced regions.  When that happens, because
> > flatrange_equal() does not compare has_coalesced_range, the new
> > FlatRange will be seen as identical to the old one, and the new
> > instance (whose has_coalesced_range will be zero) will replace the old
> > instance (whose has_coalesced_range _could_ be non-zero).
> > 
> > To fix it, we don't cache has_coalesced_range in the FlatRange at all.
> > Instead we introduce two flags to make sure coalesced_io_{add|del} is
> > only called once for every FlatRange instance.  This works even when
> > another FlatRange replaces the current one.
> 
> It's still a bit ugly that coalesced_mmio_add_done ends up not being set
> on the new (but equal) FlatRange.
> 
> Would something like this work too?
> 
> diff --git a/memory.c b/memory.c
> index edd0c13..fc91f06 100644
> --- a/memory.c
> +++ b/memory.c
> @@ -939,6 +939,7 @@ static void address_space_update_topology_pass(AddressSpace *as,
>              /* In both and unchanged (except logging may have changed) */
>  
>              if (adding) {
> +                frnew->has_coalesced_range = frold->has_coalesced_range;
>                  MEMORY_LISTENER_UPDATE_REGION(frnew, as, Forward, region_nop);
>                  if (frnew->dirty_log_mask & ~frold->dirty_log_mask) {
>                      MEMORY_LISTENER_UPDATE_REGION(frnew, as, Forward, log_start,

This seems to be a much better (and shorter) idea. :-)

I'll verify it and repost if it goes well.

Regards,

Patch

diff --git a/memory.c b/memory.c
index 8141486832..1a2b465a96 100644
--- a/memory.c
+++ b/memory.c
@@ -217,7 +217,13 @@  struct FlatRange {
     bool romd_mode;
     bool readonly;
     bool nonvolatile;
-    int has_coalesced_range;
+    /*
+     * Flags to show whether we have delivered the
+     * coalesced_io_{add|del} events to the listeners for this
+     * FlatRange.
+     */
+    bool coalesced_mmio_add_done;
+    bool coalesced_mmio_del_done;
 };
 
 #define FOR_EACH_FLAT_RANGE(var, view)          \
@@ -654,7 +660,8 @@  static void render_memory_region(FlatView *view,
     fr.romd_mode = mr->romd_mode;
     fr.readonly = readonly;
     fr.nonvolatile = nonvolatile;
-    fr.has_coalesced_range = 0;
+    fr.coalesced_mmio_add_done = false;
+    fr.coalesced_mmio_del_done = false;
 
     /* Render the region itself into any gaps left by the current view. */
     for (i = 0; i < view->nr && int128_nz(remain); ++i) {
@@ -857,14 +864,16 @@  static void address_space_update_ioeventfds(AddressSpace *as)
 
 static void flat_range_coalesced_io_del(FlatRange *fr, AddressSpace *as)
 {
-    if (!fr->has_coalesced_range) {
+    if (QTAILQ_EMPTY(&fr->mr->coalesced)) {
         return;
     }
 
-    if (--fr->has_coalesced_range > 0) {
+    if (fr->coalesced_mmio_del_done) {
         return;
     }
 
+    fr->coalesced_mmio_del_done = true;
+
     MEMORY_LISTENER_UPDATE_REGION(fr, as, Reverse, coalesced_io_del,
                                   int128_get64(fr->addr.start),
                                   int128_get64(fr->addr.size));
@@ -880,10 +889,12 @@  static void flat_range_coalesced_io_add(FlatRange *fr, AddressSpace *as)
         return;
     }
 
-    if (fr->has_coalesced_range++) {
+    if (fr->coalesced_mmio_add_done) {
         return;
     }
 
+    fr->coalesced_mmio_add_done = true;
+
     QTAILQ_FOREACH(cmr, &mr->coalesced, link) {
         tmp = addrrange_shift(cmr->addr,
                               int128_sub(fr->addr.start,