Message ID: 47c076c8519f27cdfbf3f1e55432a162e5334e02.1340607659.git.jan.kiszka@siemens.com
State: New
On 25.06.2012 09:01, Jan Kiszka wrote:
> Flush pending coalesced MMIO before performing mapping or state changes
> that could affect the event orderings or route the buffered requests to
> a wrong region.
>
> Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
>
> In addition, we also have to

Stray paste or missing addition to the above?

Andreas
On 06/25/2012 10:01 AM, Jan Kiszka wrote:
> Flush pending coalesced MMIO before performing mapping or state changes
> that could affect the event orderings or route the buffered requests to
> a wrong region.
>
> Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
>
> In addition, we also have to

Yes, we do.

>  void memory_region_transaction_begin(void)
>  {
> +    qemu_flush_coalesced_mmio_buffer();
>      ++memory_region_transaction_depth;
>  }

Why is this needed?

> @@ -1109,6 +1110,9 @@ void memory_region_sync_dirty_bitmap(MemoryRegion *mr)
>
>  void memory_region_set_readonly(MemoryRegion *mr, bool readonly)
>  {
> +    if (!QTAILQ_EMPTY(&mr->coalesced)) {
> +        qemu_flush_coalesced_mmio_buffer();
> +    }

The readonly attribute is inherited by subregions and alias targets, so
this check is insufficient. See render_memory_region(). We need to flush
unconditionally.

>      if (mr->readonly != readonly) {
>          mr->readonly = readonly;
>          memory_region_update_topology(mr);
> @@ -1117,6 +1121,9 @@ void memory_region_set_readonly(MemoryRegion *mr, bool readonly)
>
>  void memory_region_rom_device_set_readable(MemoryRegion *mr, bool readable)
>  {
> +    if (!QTAILQ_EMPTY(&mr->coalesced)) {
> +        qemu_flush_coalesced_mmio_buffer();
> +    }

This property is not inherited, but let's flush unconditionally just the
same, to reduce fragility.

> @@ -1219,6 +1228,9 @@ void memory_region_add_eventfd(MemoryRegion *mr,
>      };
>      unsigned i;
>
> +    if (!QTAILQ_EMPTY(&mr->coalesced)) {
> +        qemu_flush_coalesced_mmio_buffer();
> +    }

Ditto. It's possible that an eventfd overlays a subregion which has
coalescing enabled. It's not really defined what happens in this case,
and such code and its submitter should be perma-nacked, but let's play
it safe here since there isn't much to be gained by avoiding the flush.
This code is a very slow path anyway, including rcu and/or srcu
synchronization, and a rebuild of the dispatch radix tree (trees when we
dma-enable this code).

>      for (i = 0; i < mr->ioeventfd_nb; ++i) {
>          if (memory_region_ioeventfd_before(mrfd, mr->ioeventfds[i])) {
>              break;
> @@ -1249,6 +1261,9 @@ void memory_region_del_eventfd(MemoryRegion *mr,
>      };
>      unsigned i;
>
> +    if (!QTAILQ_EMPTY(&mr->coalesced)) {
> +        qemu_flush_coalesced_mmio_buffer();
> +    }

Same.

The repetitiveness of this code suggests a different way of doing this:
make every API call its own subtransaction and perform the flush in
memory_region_transaction_begin() (maybe that's the answer to my
question above).
On 2012-06-25 10:57, Avi Kivity wrote:
> On 06/25/2012 10:01 AM, Jan Kiszka wrote:
>> Flush pending coalesced MMIO before performing mapping or state changes
>> that could affect the event orderings or route the buffered requests to
>> a wrong region.
>>
>> Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
[...]
>> +    if (!QTAILQ_EMPTY(&mr->coalesced)) {
>> +        qemu_flush_coalesced_mmio_buffer();
>> +    }
>
> Same.

OK.

> The repetitiveness of this code suggests a different way of doing this:
> make every API call its own subtransaction and perform the flush in
> memory_region_transaction_begin() (maybe that's the answer to my
> question above).

So you want me to wrap the core of those services in
begin/commit_transaction instead? Just to be sure I got the idea.

Jan
On 2012-06-25 12:15, Jan Kiszka wrote:
> On 2012-06-25 10:57, Avi Kivity wrote:
>> The repetitiveness of this code suggests a different way of doing this:
>> make every API call its own subtransaction and perform the flush in
>> memory_region_transaction_begin() (maybe that's the answer to my
>> question above).
>
> So you want me to wrap the core of those services in
> begin/commit_transaction instead? Just to be sure I got the idea.

What we would lose this way (transaction_commit) is the ability to skip
updates on !mr->enabled.

Jan
On 06/25/2012 01:26 PM, Jan Kiszka wrote:
> On 2012-06-25 12:15, Jan Kiszka wrote:
>> On 2012-06-25 10:57, Avi Kivity wrote:
>>> The repetitiveness of this code suggests a different way of doing this:
>>> make every API call its own subtransaction and perform the flush in
>>> memory_region_transaction_begin() (maybe that's the answer to my
>>> question above).
>>
>> So you want me to wrap the core of those services in
>> begin/commit_transaction instead? Just to be sure I got the idea.
>
> What we would lose this way (transaction_commit) is the ability to skip
> updates on !mr->enabled.

We could have an internal memory_region_begin_transaction_mr() which
checks mr->enabled and notes it if it's clear (but anything else would
have to undo this). I don't think it's worth it; let's lose the
optimization and see if it shows up anywhere.
On 2012-06-25 13:01, Avi Kivity wrote:
> On 06/25/2012 01:26 PM, Jan Kiszka wrote:
>> On 2012-06-25 12:15, Jan Kiszka wrote:
>>> So you want me to wrap the core of those services in
>>> begin/commit_transaction instead? Just to be sure I got the idea.
>>
>> What we would lose this way (transaction_commit) is the ability to skip
>> updates on !mr->enabled.
>
> We could have an internal memory_region_begin_transaction_mr() which
> checks mr->enabled and notes it if it's clear (but anything else would
> have to undo this). I don't think it's worth it; let's lose the
> optimization and see if it shows up anywhere.

OK, will send a new series with a conversion of this included.

Jan
diff --git a/memory.c b/memory.c
index ba55b3e..141f92b 100644
--- a/memory.c
+++ b/memory.c
@@ -759,6 +759,7 @@ static void memory_region_update_topology(MemoryRegion *mr)
 
 void memory_region_transaction_begin(void)
 {
+    qemu_flush_coalesced_mmio_buffer();
     ++memory_region_transaction_depth;
 }
 
@@ -1109,6 +1110,9 @@ void memory_region_sync_dirty_bitmap(MemoryRegion *mr)
 
 void memory_region_set_readonly(MemoryRegion *mr, bool readonly)
 {
+    if (!QTAILQ_EMPTY(&mr->coalesced)) {
+        qemu_flush_coalesced_mmio_buffer();
+    }
     if (mr->readonly != readonly) {
         mr->readonly = readonly;
         memory_region_update_topology(mr);
@@ -1117,6 +1121,9 @@ void memory_region_set_readonly(MemoryRegion *mr, bool readonly)
 
 void memory_region_rom_device_set_readable(MemoryRegion *mr, bool readable)
 {
+    if (!QTAILQ_EMPTY(&mr->coalesced)) {
+        qemu_flush_coalesced_mmio_buffer();
+    }
     if (mr->readable != readable) {
         mr->readable = readable;
         memory_region_update_topology(mr);
@@ -1190,6 +1197,8 @@ void memory_region_clear_coalescing(MemoryRegion *mr)
 {
     CoalescedMemoryRange *cmr;
 
+    qemu_flush_coalesced_mmio_buffer();
+
     while (!QTAILQ_EMPTY(&mr->coalesced)) {
         cmr = QTAILQ_FIRST(&mr->coalesced);
         QTAILQ_REMOVE(&mr->coalesced, cmr, link);
@@ -1219,6 +1228,9 @@ void memory_region_add_eventfd(MemoryRegion *mr,
     };
     unsigned i;
 
+    if (!QTAILQ_EMPTY(&mr->coalesced)) {
+        qemu_flush_coalesced_mmio_buffer();
+    }
     for (i = 0; i < mr->ioeventfd_nb; ++i) {
         if (memory_region_ioeventfd_before(mrfd, mr->ioeventfds[i])) {
             break;
@@ -1249,6 +1261,9 @@ void memory_region_del_eventfd(MemoryRegion *mr,
     };
     unsigned i;
 
+    if (!QTAILQ_EMPTY(&mr->coalesced)) {
+        qemu_flush_coalesced_mmio_buffer();
+    }
     for (i = 0; i < mr->ioeventfd_nb; ++i) {
         if (memory_region_ioeventfd_equal(mrfd, mr->ioeventfds[i])) {
             break;
@@ -1269,6 +1284,8 @@ static void memory_region_add_subregion_common(MemoryRegion *mr,
 {
     MemoryRegion *other;
 
+    qemu_flush_coalesced_mmio_buffer();
+
     assert(!subregion->parent);
     subregion->parent = mr;
     subregion->addr = offset;
@@ -1327,6 +1344,8 @@ void memory_region_add_subregion_overlap(MemoryRegion *mr,
 void memory_region_del_subregion(MemoryRegion *mr,
                                  MemoryRegion *subregion)
 {
+    qemu_flush_coalesced_mmio_buffer();
+
     assert(subregion->parent == mr);
     subregion->parent = NULL;
     QTAILQ_REMOVE(&mr->subregions, subregion, subregions_link);
@@ -1335,6 +1354,8 @@ void memory_region_del_subregion(MemoryRegion *mr,
 
 void memory_region_set_enabled(MemoryRegion *mr, bool enabled)
 {
+    qemu_flush_coalesced_mmio_buffer();
+
     if (enabled == mr->enabled) {
         return;
     }
@@ -1367,6 +1388,8 @@ void memory_region_set_alias_offset(MemoryRegion *mr,
                                     target_phys_addr_t offset)
 {
     target_phys_addr_t old_offset = mr->alias_offset;
 
+    qemu_flush_coalesced_mmio_buffer();
+
     assert(mr->alias);
     mr->alias_offset = offset;
Flush pending coalesced MMIO before performing mapping or state changes
that could affect the event orderings or route the buffered requests to
a wrong region.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>

In addition, we also have to
---
 memory.c |   23 +++++++++++++++++++++++
 1 files changed, 23 insertions(+), 0 deletions(-)