
[2/2] thread-pool: use ThreadPool from the running thread

Message ID 20220609134452.1146309-3-eesposit@redhat.com
State New
Series AioContext removal: LinuxAioState and ThreadPool

Commit Message

Emanuele Giuseppe Esposito June 9, 2022, 1:44 p.m. UTC
Remove usage of aio_context_acquire by always submitting work items
to the current thread's ThreadPool.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
 block/file-posix.c    | 19 +++++++++----------
 block/file-win32.c    |  2 +-
 block/qcow2-threads.c |  2 +-
 util/thread-pool.c    |  6 +-----
 4 files changed, 12 insertions(+), 17 deletions(-)

Comments

Kevin Wolf Sept. 29, 2022, 3:30 p.m. UTC | #1
On 09.06.2022 at 15:44, Emanuele Giuseppe Esposito wrote:
> Remove usage of aio_context_acquire by always submitting work items
> to the current thread's ThreadPool.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>

The thread pool is used by things outside of the file-* block drivers,
too. Even outside the block layer. Not all of these seem to submit work
in the same thread.


For example:

postcopy_ram_listen_thread() -> qemu_loadvm_state_main() ->
qemu_loadvm_section_start_full() -> vmstate_load() ->
vmstate_load_state() -> spapr_nvdimm_flush_post_load(), which has:

ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());
...
thread_pool_submit_aio(pool, flush_worker_cb, state,
                       spapr_nvdimm_flush_completion_cb, state);

So it seems to me that we may be submitting work for the main thread
from a postcopy migration thread.

I believe the other direct callers of thread_pool_submit_aio() all
submit work for the main thread and also run in the main thread.


For thread_pool_submit_co(), pr_manager_execute() calls it with the pool
it gets passed as a parameter. This is still bdrv_get_aio_context(bs) in
hdev_co_ioctl() and should probably be changed the same way as for the
AIO call in file-posix, i.e. use qemu_get_current_aio_context().
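
Just to illustrate (a rough sketch only; I'm assuming pr_manager_execute()
keeps its current signature that takes an AioContext and derives the pool
from it), the call in hdev_co_ioctl() would then become something like:

    return pr_manager_execute(s->pr_mgr, qemu_get_current_aio_context(),
                              s->fd, io_hdr);

instead of passing bdrv_get_aio_context(bs).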


We could consider either asserting in thread_pool_submit_aio() that we
are really in the expected thread, or like I suggested for LinuxAio drop
the pool parameter and always get it from the current thread (obviously
this is only possible if migration could in fact schedule the work on
its current thread - if it schedules it on the main thread and then
exits the migration thread (which destroys the thread pool), that
wouldn't be good).
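
For the assertion variant, something as simple as this at the top of
thread_pool_submit_aio() is what I have in mind (untested sketch;
pool->ctx is the AioContext the pool was created for):

    BlockAIOCB *thread_pool_submit_aio(ThreadPool *pool, ThreadPoolFunc *func,
                                       void *arg, BlockCompletionFunc *cb,
                                       void *opaque)
    {
        /* Catch callers that submit from a thread other than the pool's own */
        assert(qemu_get_current_aio_context() == pool->ctx);
        ...
    }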

Kevin
Emanuele Giuseppe Esposito Sept. 30, 2022, 12:17 p.m. UTC | #2
On 29/09/2022 at 17:30, Kevin Wolf wrote:
> On 09.06.2022 at 15:44, Emanuele Giuseppe Esposito wrote:
>> Remove usage of aio_context_acquire by always submitting work items
>> to the current thread's ThreadPool.
>>
>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
> 
> The thread pool is used by things outside of the file-* block drivers,
> too. Even outside the block layer. Not all of these seem to submit work
> in the same thread.
> 
> 
> For example:
> 
> postcopy_ram_listen_thread() -> qemu_loadvm_state_main() ->
> qemu_loadvm_section_start_full() -> vmstate_load() ->
> vmstate_load_state() -> spapr_nvdimm_flush_post_load(), which has:
> 
> ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());
> ...
> thread_pool_submit_aio(pool, flush_worker_cb, state,
>                        spapr_nvdimm_flush_completion_cb, state);
> 
> So it seems to me that we may be submitting work for the main thread
> from a postcopy migration thread.
> 
> I believe the other direct callers of thread_pool_submit_aio() all
> submit work for the main thread and also run in the main thread.
> 
> 
> For thread_pool_submit_co(), pr_manager_execute() calls it with the pool
> it gets passed as a parameter. This is still bdrv_get_aio_context(bs) in
> hdev_co_ioctl() and should probably be changed the same way as for the
> AIO call in file-posix, i.e. use qemu_get_current_aio_context().
> 
> 
> We could consider either asserting in thread_pool_submit_aio() that we
> are really in the expected thread, or like I suggested for LinuxAio drop
> the pool parameter and always get it from the current thread (obviously
> this is only possible if migration could in fact schedule the work on
> its current thread - if it schedules it on the main thread and then
> exits the migration thread (which destroys the thread pool), that
> wouldn't be good).

Dumb question: why not extend the already-existing pool->lock to also
cover the necessary fields like pool->head that are accessed by other
threads (the only case I could find with thread_pool_submit_aio is the
one you pointed out above)?
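
Something along these lines is what I mean (only a rough sketch, not
tested; the list entry names are the ones from util/thread-pool.c):

    /* in thread_pool_submit_aio(): move the insertion into pool->head
     * under pool->lock, which already protects pool->request_list */
    qemu_mutex_lock(&pool->lock);
    QLIST_INSERT_HEAD(&pool->head, req, all);
    QTAILQ_INSERT_TAIL(&pool->request_list, req, reqs);
    qemu_mutex_unlock(&pool->lock);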

Thank you,
Emanuele
Emanuele Giuseppe Esposito Sept. 30, 2022, 2:46 p.m. UTC | #3
On 30/09/2022 at 14:17, Emanuele Giuseppe Esposito wrote:
> 
> 
> On 29/09/2022 at 17:30, Kevin Wolf wrote:
>> On 09.06.2022 at 15:44, Emanuele Giuseppe Esposito wrote:
>>> Remove usage of aio_context_acquire by always submitting work items
>>> to the current thread's ThreadPool.
>>>
>>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>>> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
>>
>> The thread pool is used by things outside of the file-* block drivers,
>> too. Even outside the block layer. Not all of these seem to submit work
>> in the same thread.
>>
>>
>> For example:
>>
>> postcopy_ram_listen_thread() -> qemu_loadvm_state_main() ->
>> qemu_loadvm_section_start_full() -> vmstate_load() ->
>> vmstate_load_state() -> spapr_nvdimm_flush_post_load(), which has:
>>
>> ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());
>> ...
>> thread_pool_submit_aio(pool, flush_worker_cb, state,
>>                        spapr_nvdimm_flush_completion_cb, state);
>>
>> So it seems to me that we may be submitting work for the main thread
>> from a postcopy migration thread.
>>
>> I believe the other direct callers of thread_pool_submit_aio() all
>> submit work for the main thread and also run in the main thread.
>>
>>
>> For thread_pool_submit_co(), pr_manager_execute() calls it with the pool
>> it gets passed as a parameter. This is still bdrv_get_aio_context(bs) in
>> hdev_co_ioctl() and should probably be changed the same way as for the
>> AIO call in file-posix, i.e. use qemu_get_current_aio_context().
>>
>>
>> We could consider either asserting in thread_pool_submit_aio() that we
>> are really in the expected thread, or like I suggested for LinuxAio drop
>> the pool parameter and always get it from the current thread (obviously
>> this is only possible if migration could in fact schedule the work on
>> its current thread - if it schedules it on the main thread and then
>> exits the migration thread (which destroys the thread pool), that
>> wouldn't be good).
> 
> Dumb question: why not extend the already-existing pool->lock to also
> cover the necessary fields like pool->head that are accessed by other
> threads (the only case I could find with thread_pool_submit_aio is the
> one you pointed out above)?
> 

That would be a good replacement for the aio_context lock in
thread_pool_completion_bh(), I think.
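
Something like this is what I have in mind (sketch only, following the
structure thread_pool_completion_bh() already has):

    static void thread_pool_completion_bh(void *opaque)
    {
        ThreadPool *pool = opaque;
        ThreadPoolElement *elem, *next;

        qemu_mutex_lock(&pool->lock);
    restart:
        QLIST_FOREACH_SAFE(elem, &pool->head, all, next) {
            if (elem->state != THREAD_DONE) {
                continue;
            }
            ...
            /* drop the lock around the callback, just like the AioContext
             * lock is dropped today */
            qemu_mutex_unlock(&pool->lock);
            elem->common.cb(elem->common.opaque, elem->ret);
            qemu_mutex_lock(&pool->lock);
            ...
        }
        qemu_mutex_unlock(&pool->lock);
    }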

> Thank you,
> Emanuele
>
Kevin Wolf Sept. 30, 2022, 3:45 p.m. UTC | #4
On 30.09.2022 at 14:17, Emanuele Giuseppe Esposito wrote:
> On 29/09/2022 at 17:30, Kevin Wolf wrote:
> > On 09.06.2022 at 15:44, Emanuele Giuseppe Esposito wrote:
> >> Remove usage of aio_context_acquire by always submitting work items
> >> to the current thread's ThreadPool.
> >>
> >> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> >> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
> > 
> > The thread pool is used by things outside of the file-* block drivers,
> > too. Even outside the block layer. Not all of these seem to submit work
> > in the same thread.
> > 
> > 
> > For example:
> > 
> > postcopy_ram_listen_thread() -> qemu_loadvm_state_main() ->
> > qemu_loadvm_section_start_full() -> vmstate_load() ->
> > vmstate_load_state() -> spapr_nvdimm_flush_post_load(), which has:
> > 
> > ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());
> > ...
> > thread_pool_submit_aio(pool, flush_worker_cb, state,
> >                        spapr_nvdimm_flush_completion_cb, state);
> > 
> > So it seems to me that we may be submitting work for the main thread
> > from a postcopy migration thread.
> > 
> > I believe the other direct callers of thread_pool_submit_aio() all
> > submit work for the main thread and also run in the main thread.
> > 
> > 
> > For thread_pool_submit_co(), pr_manager_execute() calls it with the pool
> > it gets passed as a parameter. This is still bdrv_get_aio_context(bs) in
> > hdev_co_ioctl() and should probably be changed the same way as for the
> > AIO call in file-posix, i.e. use qemu_get_current_aio_context().
> > 
> > 
> > We could consider either asserting in thread_pool_submit_aio() that we
> > are really in the expected thread, or like I suggested for LinuxAio drop
> > the pool parameter and always get it from the current thread (obviously
> > this is only possible if migration could in fact schedule the work on
> > its current thread - if it schedules it on the main thread and then
> > exits the migration thread (which destroys the thread pool), that
> > wouldn't be good).
> 
> Dumb question: why not extend the already-existing pool->lock to also
> cover the necessary fields like pool->head that are accessed by other
> threads (the only case I could find with thread_pool_submit_aio is the
> one you pointed out above)?

Other people are more familiar with this code, but I believe this could
have performance implications. I seem to remember that this code is
careful to avoid locking to synchronise between worker threads and the
main thread.

But looking at the patch again, I actually have a dumb question, too:
The locking you're removing is in thread_pool_completion_bh(). As this
is a BH, it's running in the ThreadPool's context either way, no matter
which thread called thread_pool_submit_aio().

I'm not sure what this aio_context_acquire/release pair is actually
supposed to protect. Paolo's commit 1919631e6b5 introduced it. Was it
just more careful than it needs to be?

Kevin
Emanuele Giuseppe Esposito Oct. 3, 2022, 8:52 a.m. UTC | #5
On 30/09/2022 at 17:45, Kevin Wolf wrote:
> On 30.09.2022 at 14:17, Emanuele Giuseppe Esposito wrote:
>> On 29/09/2022 at 17:30, Kevin Wolf wrote:
>>> On 09.06.2022 at 15:44, Emanuele Giuseppe Esposito wrote:
>>>> Remove usage of aio_context_acquire by always submitting work items
>>>> to the current thread's ThreadPool.
>>>>
>>>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>>>> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
>>>
>>> The thread pool is used by things outside of the file-* block drivers,
>>> too. Even outside the block layer. Not all of these seem to submit work
>>> in the same thread.
>>>
>>>
>>> For example:
>>>
>>> postcopy_ram_listen_thread() -> qemu_loadvm_state_main() ->
>>> qemu_loadvm_section_start_full() -> vmstate_load() ->
>>> vmstate_load_state() -> spapr_nvdimm_flush_post_load(), which has:
>>>
>>> ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());
>>> ...
>>> thread_pool_submit_aio(pool, flush_worker_cb, state,
>>>                        spapr_nvdimm_flush_completion_cb, state);
>>>
>>> So it seems to me that we may be submitting work for the main thread
>>> from a postcopy migration thread.
>>>
>>> I believe the other direct callers of thread_pool_submit_aio() all
>>> submit work for the main thread and also run in the main thread.
>>>
>>>
>>> For thread_pool_submit_co(), pr_manager_execute() calls it with the pool
>>> it gets passed as a parameter. This is still bdrv_get_aio_context(bs) in
>>> hdev_co_ioctl() and should probably be changed the same way as for the
>>> AIO call in file-posix, i.e. use qemu_get_current_aio_context().
>>>
>>>
>>> We could consider either asserting in thread_pool_submit_aio() that we
>>> are really in the expected thread, or like I suggested for LinuxAio drop
>>> the pool parameter and always get it from the current thread (obviously
>>> this is only possible if migration could in fact schedule the work on
>>> its current thread - if it schedules it on the main thread and then
>>> exits the migration thread (which destroys the thread pool), that
>>> wouldn't be good).
>>
>> Dumb question: why not extend the already-existing pool->lock to also
>> cover the necessary fields like pool->head that are accessed by other
>> threads (the only case I could find with thread_pool_submit_aio is the
>> one you pointed out above)?
> 
> Other people are more familiar with this code, but I believe this could
> have performance implications. I seem to remember that this code is
> careful to avoid locking to synchronise between worker threads and the
> main thread.
> 
> But looking at the patch again, I actually have a dumb question, too:
> The locking you're removing is in thread_pool_completion_bh(). As this
> is a BH, it's running in the ThreadPool's context either way, no matter
> which thread called thread_pool_submit_aio().
> 
> I'm not sure what this aio_context_acquire/release pair is actually
> supposed to protect. Paolo's commit 1919631e6b5 introduced it. Was it
> just more careful than it needs to be?
> 

I think the goal is still to protect pool->head, but if so the
aiocontext lock is put in the wrong place, because as you said the bh is
always run in the thread pool context. Otherwise it seems to make no sense.

On the other hand, thread_pool_submit_aio could be called by other
threads on behalf of the main loop, which means pool->head could be
modified (iothread calls thread_pool_submit_aio) while being read by the
main loop (another worker thread schedules thread_pool_completion_bh).

What are the performance implications? I mean, if the aiocontext lock in
the bh is actually useful and the bh really has to wait to take it, a
lock that is taken in many more places throughout the block layer won't
be better than extending pool->lock, I guess.

Thank you,
Emanuele
Stefan Hajnoczi Oct. 20, 2022, 3:39 p.m. UTC | #6
On Mon, Oct 03, 2022 at 10:52:33AM +0200, Emanuele Giuseppe Esposito wrote:
> 
> 
> On 30/09/2022 at 17:45, Kevin Wolf wrote:
> > On 30.09.2022 at 14:17, Emanuele Giuseppe Esposito wrote:
> >> On 29/09/2022 at 17:30, Kevin Wolf wrote:
> >>> On 09.06.2022 at 15:44, Emanuele Giuseppe Esposito wrote:
> >>>> Remove usage of aio_context_acquire by always submitting work items
> >>>> to the current thread's ThreadPool.
> >>>>
> >>>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> >>>> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
> >>>
> >>> The thread pool is used by things outside of the file-* block drivers,
> >>> too. Even outside the block layer. Not all of these seem to submit work
> >>> in the same thread.
> >>>
> >>>
> >>> For example:
> >>>
> >>> postcopy_ram_listen_thread() -> qemu_loadvm_state_main() ->
> >>> qemu_loadvm_section_start_full() -> vmstate_load() ->
> >>> vmstate_load_state() -> spapr_nvdimm_flush_post_load(), which has:
> >>>
> >>> ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());
                         ^^^^^^^^^^^^^^^^^^^

aio_get_thread_pool() isn't thread safe either:

  ThreadPool *aio_get_thread_pool(AioContext *ctx)
  {
      if (!ctx->thread_pool) {
          ctx->thread_pool = thread_pool_new(ctx);
	  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Two threads could race in aio_get_thread_pool().
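
(If we ever did want to allow that, a race-free lazy initialization could
look something like the sketch below - purely illustrative, using
qatomic_cmpxchg(), not a proposal for this series:)

  ThreadPool *aio_get_thread_pool(AioContext *ctx)
  {
      ThreadPool *pool = qatomic_read(&ctx->thread_pool);

      if (!pool) {
          ThreadPool *new_pool = thread_pool_new(ctx);
          ThreadPool *old = qatomic_cmpxchg(&ctx->thread_pool, NULL, new_pool);

          if (old) {
              /* Another thread won the race; use its pool and drop ours */
              thread_pool_free(new_pool);
              pool = old;
          } else {
              pool = new_pool;
          }
      }
      return pool;
  }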

I think post-copy is broken here: it's calling code that was only
designed to be called from the main loop thread.

I have CCed Juan and David.

> >>> ...
> >>> thread_pool_submit_aio(pool, flush_worker_cb, state,
> >>>                        spapr_nvdimm_flush_completion_cb, state);
> >>>
> >>> So it seems to me that we may be submitting work for the main thread
> >>> from a postcopy migration thread.
> >>>
> >>> I believe the other direct callers of thread_pool_submit_aio() all
> >>> submit work for the main thread and also run in the main thread.
> >>>
> >>>
> >>> For thread_pool_submit_co(), pr_manager_execute() calls it with the pool
> >>> it gets passed as a parameter. This is still bdrv_get_aio_context(bs) in
> >>> hdev_co_ioctl() and should probably be changed the same way as for the
> >>> AIO call in file-posix, i.e. use qemu_get_current_aio_context().
> >>>
> >>>
> >>> We could consider either asserting in thread_pool_submit_aio() that we
> >>> are really in the expected thread, or like I suggested for LinuxAio drop
> >>> the pool parameter and always get it from the current thread (obviously
> >>> this is only possible if migration could in fact schedule the work on
> >>> its current thread - if it schedules it on the main thread and then
> >>> exits the migration thread (which destroys the thread pool), that
> >>> wouldn't be good).
> >>
> >> Dumb question: why not extend the already-existing pool->lock to also
> >> cover the necessary fields like pool->head that are accessed by other
> >> threads (the only case I could find with thread_pool_submit_aio is the
> >> one you pointed out above)?
> > 
> > Other people are more familiar with this code, but I believe this could
> > have performance implications. I seem to remember that this code is
> > careful to avoid locking to synchronise between worker threads and the
> > main thread.
> > 
> > But looking at the patch again, I actually have a dumb question, too:
> > The locking you're removing is in thread_pool_completion_bh(). As this
> > is a BH, it's running in the ThreadPool's context either way, no matter
> > which thread called thread_pool_submit_aio().
> > 
> > I'm not sure what this aio_context_acquire/release pair is actually
> > supposed to protect. Paolo's commit 1919631e6b5 introduced it. Was it
> > just more careful than it needs to be?
> > 
> 
> I think the goal is still to protect pool->head, but if so the
> aiocontext lock is put in the wrong place, because as you said the bh is
> always run in the thread pool context. Otherwise it seems to make no sense.
> 
> On the other hand, thread_pool_submit_aio could be called by other
> threads on behalf of the main loop, which means pool->head could be
> modified (iothread calls thread_pool_submit_aio) while being read by the
> main loop (another worker thread schedules thread_pool_completion_bh).
> 
> What are the performance implications? I mean, if the aiocontext lock in
> the bh is actually useful and the bh really has to wait to take it, a
> lock that is taken in many more places throughout the block layer won't
> be better than extending pool->lock, I guess.

thread_pool_submit_aio() is missing documentation on how it is supposed
to be called.

Taking pool->lock is conservative and fine in the short-term.

In the longer term we need to clarify how thread_pool_submit_aio() is
supposed to be used and remove locking to protect pool->head if
possible.

A bunch of the event loop APIs are thread-safe (aio_set_fd_handler(),
qemu_bh_schedule(), etc.), so it's somewhat natural to make
thread_pool_submit_aio() thread-safe too. However, it would be nice to
avoid synchronization: existing callers mostly call it from the same
event loop thread that runs the BH, and we can avoid locking in that
case.
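
As a starting point, a comment on the prototype in
include/block/thread-pool.h along these lines would capture today's de
facto contract (wording is only a suggestion):

    /*
     * thread_pool_submit_aio:
     *
     * Schedule @func to run in a worker thread of @pool.  Must be called
     * from the thread that runs the pool's AioContext; the completion
     * callback @cb is invoked in that same AioContext.  Not thread-safe.
     */
    BlockAIOCB *thread_pool_submit_aio(ThreadPool *pool, ThreadPoolFunc *func,
                                       void *arg, BlockCompletionFunc *cb,
                                       void *opaque);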

Stefan
Dr. David Alan Gilbert Oct. 20, 2022, 4:22 p.m. UTC | #7
* Stefan Hajnoczi (stefanha@redhat.com) wrote:
> On Mon, Oct 03, 2022 at 10:52:33AM +0200, Emanuele Giuseppe Esposito wrote:
> > 
> > 
> > On 30/09/2022 at 17:45, Kevin Wolf wrote:
> > > On 30.09.2022 at 14:17, Emanuele Giuseppe Esposito wrote:
> > >> On 29/09/2022 at 17:30, Kevin Wolf wrote:
> > >>> On 09.06.2022 at 15:44, Emanuele Giuseppe Esposito wrote:
> > >>>> Remove usage of aio_context_acquire by always submitting work items
> > >>>> to the current thread's ThreadPool.
> > >>>>
> > >>>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> > >>>> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
> > >>>
> > >>> The thread pool is used by things outside of the file-* block drivers,
> > >>> too. Even outside the block layer. Not all of these seem to submit work
> > >>> in the same thread.
> > >>>
> > >>>
> > >>> For example:
> > >>>
> > >>> postcopy_ram_listen_thread() -> qemu_loadvm_state_main() ->
> > >>> qemu_loadvm_section_start_full() -> vmstate_load() ->
> > >>> vmstate_load_state() -> spapr_nvdimm_flush_post_load(), which has:
> > >>>
> > >>> ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());
>                          ^^^^^^^^^^^^^^^^^^^
> 
> aio_get_thread_pool() isn't thread safe either:
> 
>   ThreadPool *aio_get_thread_pool(AioContext *ctx)
>   {
>       if (!ctx->thread_pool) {
>           ctx->thread_pool = thread_pool_new(ctx);
> 	  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> 
> Two threads could race in aio_get_thread_pool().
> 
> I think post-copy is broken here: it's calling code that was only
> designed to be called from the main loop thread.
> 
> I have CCed Juan and David.

In theory the path that you describe there shouldn't happen - although
there is perhaps not enough protection on the load side to stop it
happening if presented with a bad stream.
This is documented in docs/devel/migration.rst under 'Destination
behaviour'; but to recap: during postcopy load we have the problem that
we need to be able to load incoming iterative (i.e. RAM) pages during
the loading of normal devices, because the loading of a device may
access RAM that has not yet been transferred.

To do that, the device state of all the non-iterative devices (which I
think includes your spapr_nvdimm) is serialised into a separate
migration stream and sent as a 'package'.

We read the package off the stream on the main thread, but don't process
it until we fire off the 'listen' thread - whose creation you spotted
above. The listen thread then takes over reading the migration stream to
process RAM pages, and since that stream is in the same format, it calls
qemu_loadvm_state_main() - but it doesn't expect any devices in that
stream other than the RAM devices; it's just expecting RAM.

In parallel with that, the main thread carries on loading the contents
of the 'package' - and that contains your spapr_nvdimm device (and any
other 'normal' devices); but that's OK because that's the main thread.

Now if something was very broken and sent a header for the spapr-nvdimm
down the main migration stream (which the listen thread reads) rather
than into the package then, yes, we'd trigger your case, but that
shouldn't happen.

Dave

> > >>> ...
> > >>> thread_pool_submit_aio(pool, flush_worker_cb, state,
> > >>>                        spapr_nvdimm_flush_completion_cb, state);
> > >>>
> > >>> So it seems to me that we may be submitting work for the main thread
> > >>> from a postcopy migration thread.
> > >>>
> > >>> I believe the other direct callers of thread_pool_submit_aio() all
> > >>> submit work for the main thread and also run in the main thread.
> > >>>
> > >>>
> > >>> For thread_pool_submit_co(), pr_manager_execute() calls it with the pool
> > >>> it gets passed as a parameter. This is still bdrv_get_aio_context(bs) in
> > >>> hdev_co_ioctl() and should probably be changed the same way as for the
> > >>> AIO call in file-posix, i.e. use qemu_get_current_aio_context().
> > >>>
> > >>>
> > >>> We could consider either asserting in thread_pool_submit_aio() that we
> > >>> are really in the expected thread, or like I suggested for LinuxAio drop
> > >>> the pool parameter and always get it from the current thread (obviously
> > >>> this is only possible if migration could in fact schedule the work on
> > >>> its current thread - if it schedules it on the main thread and then
> > >>> exits the migration thread (which destroys the thread pool), that
> > >>> wouldn't be good).
> > >>
> > >> Dumb question: why not extend the already-existing pool->lock to also
> > >> cover the necessary fields like pool->head that are accessed by other
> > >> threads (the only case I could find with thread_pool_submit_aio is the
> > >> one you pointed out above)?
> > > 
> > > Other people are more familiar with this code, but I believe this could
> > > have performance implications. I seem to remember that this code is
> > > careful to avoid locking to synchronise between worker threads and the
> > > main thread.
> > > 
> > > But looking at the patch again, I actually have a dumb question, too:
> > > The locking you're removing is in thread_pool_completion_bh(). As this
> > > is a BH, it's running in the ThreadPool's context either way, no matter
> > > which thread called thread_pool_submit_aio().
> > > 
> > > I'm not sure what this aio_context_acquire/release pair is actually
> > > supposed to protect. Paolo's commit 1919631e6b5 introduced it. Was it
> > > just more careful than it needs to be?
> > > 
> > 
> > I think the goal is still to protect pool->head, but if so the
> > aiocontext lock is put in the wrong place, because as you said the bh is
> > always run in the thread pool context. Otherwise it seems to make no sense.
> > 
> > On the other hand, thread_pool_submit_aio could be called by other
> > threads on behalf of the main loop, which means pool->head could be
> > modified (iothread calls thread_pool_submit_aio) while being read by the
> > main loop (another worker thread schedules thread_pool_completion_bh).
> > 
> > What are the performance implications? I mean, if the aiocontext lock in
> > the bh is actually useful and the bh really has to wait to take it, a
> > lock that is taken in many more places throughout the block layer won't
> > be better than extending pool->lock, I guess.
> 
> thread_pool_submit_aio() is missing documentation on how it is supposed
> to be called.
> 
> Taking pool->lock is conservative and fine in the short-term.
> 
> In the longer term we need to clarify how thread_pool_submit_aio() is
> supposed to be used and remove locking to protect pool->head if
> possible.
> 
> A bunch of the event loop APIs are thread-safe (aio_set_fd_handler(),
> qemu_schedule_bh(), etc) so it's somewhat natural to make
> thread_pool_submit_aio() thread-safe too. However, it would be nice to
> avoid synchronization and existing callers mostly call it from the same
> event loop thread that runs the BH and we can avoid locking in that
> case.
> 
> Stefan
Stefan Hajnoczi Oct. 24, 2022, 6:49 p.m. UTC | #8
On Thu, Oct 20, 2022 at 05:22:17PM +0100, Dr. David Alan Gilbert wrote:
> * Stefan Hajnoczi (stefanha@redhat.com) wrote:
> > On Mon, Oct 03, 2022 at 10:52:33AM +0200, Emanuele Giuseppe Esposito wrote:
> > > 
> > > 
> > > On 30/09/2022 at 17:45, Kevin Wolf wrote:
> > > > On 30.09.2022 at 14:17, Emanuele Giuseppe Esposito wrote:
> > > >> On 29/09/2022 at 17:30, Kevin Wolf wrote:
> > > >>> On 09.06.2022 at 15:44, Emanuele Giuseppe Esposito wrote:
> > > >>>> Remove usage of aio_context_acquire by always submitting work items
> > > >>>> to the current thread's ThreadPool.
> > > >>>>
> > > >>>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> > > >>>> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
> > > >>>
> > > >>> The thread pool is used by things outside of the file-* block drivers,
> > > >>> too. Even outside the block layer. Not all of these seem to submit work
> > > >>> in the same thread.
> > > >>>
> > > >>>
> > > >>> For example:
> > > >>>
> > > >>> postcopy_ram_listen_thread() -> qemu_loadvm_state_main() ->
> > > >>> qemu_loadvm_section_start_full() -> vmstate_load() ->
> > > >>> vmstate_load_state() -> spapr_nvdimm_flush_post_load(), which has:
> > > >>>
> > > >>> ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());
> >                          ^^^^^^^^^^^^^^^^^^^
> > 
> > aio_get_thread_pool() isn't thread safe either:
> > 
> >   ThreadPool *aio_get_thread_pool(AioContext *ctx)
> >   {
> >       if (!ctx->thread_pool) {
> >           ctx->thread_pool = thread_pool_new(ctx);
> > 	  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > 
> > Two threads could race in aio_get_thread_pool().
> > 
> > I think post-copy is broken here: it's calling code that was only
> > designed to be called from the main loop thread.
> > 
> > I have CCed Juan and David.
> 
> In theory the path that you describe there shouldn't happen - although
> there is perhaps not enough protection on the load side to stop it
> happening if presented with a bad stream.
> This is documented in docs/devel/migration.rst under 'Destination
> behaviour'; but to recap, during postcopy load we have a problem that we
> need to be able to load incoming iterative (ie. RAM) pages during the
> loading of normal devices, because the loading of a device may access
> RAM that's not yet been transferred.
> 
> To do that, the device state of all the non-iterative devices (which I
> think includes your spapr_nvdimm) is serialised into a separate
> migration stream and sent as a 'package'.
> 
> We read the package off the stream on the main thread, but don't process
> it until we fire off the 'listen' thread - which you spotted the
> creation of above; the listen thread now takes over reading the
> migration stream to process RAM pages, and since it's in the same
> format, it calls qemu_loadvm_state_main() - but it doesn't expect
> any devices in that other than the RAM devices; it's just expecting RAM.
> 
> In parallel with that, the main thread carries on loading the contents
> of the 'package' - and that contains your spapr_nvdimm device (and any
> other 'normal' devices); but that's OK because that's the main thread.
> 
> Now if something was very broken and sent a header for the spapr-nvdimm
> down the main thread rather than into the package then, yes, we'd
> trigger your case, but that shouldn't happen.

Thanks for explaining that. A way to restrict the listen thread to only
process RAM pages would be good, both as documentation and to prevent
invalid migration streams from causing problems.

For Emanuele and Kevin's original question about this code, it seems the
thread pool won't be called from the listen thread.

Stefan

Patch

diff --git a/block/file-posix.c b/block/file-posix.c
index 33f92f004a..15765453b3 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -2053,11 +2053,10 @@  out:
     return result;
 }
 
-static int coroutine_fn raw_thread_pool_submit(BlockDriverState *bs,
-                                               ThreadPoolFunc func, void *arg)
+static int coroutine_fn raw_thread_pool_submit(ThreadPoolFunc func, void *arg)
 {
     /* @bs can be NULL, bdrv_get_aio_context() returns the main context then */
-    ThreadPool *pool = aio_get_thread_pool(bdrv_get_aio_context(bs));
+    ThreadPool *pool = aio_get_thread_pool(qemu_get_current_aio_context());
     return thread_pool_submit_co(pool, func, arg);
 }
 
@@ -2107,7 +2106,7 @@  static int coroutine_fn raw_co_prw(BlockDriverState *bs, uint64_t offset,
     };
 
     assert(qiov->size == bytes);
-    return raw_thread_pool_submit(bs, handle_aiocb_rw, &acb);
+    return raw_thread_pool_submit(handle_aiocb_rw, &acb);
 }
 
 static int coroutine_fn raw_co_preadv(BlockDriverState *bs, int64_t offset,
@@ -2182,7 +2181,7 @@  static int raw_co_flush_to_disk(BlockDriverState *bs)
         return luring_co_submit(bs, aio, s->fd, 0, NULL, QEMU_AIO_FLUSH);
     }
 #endif
-    return raw_thread_pool_submit(bs, handle_aiocb_flush, &acb);
+    return raw_thread_pool_submit(handle_aiocb_flush, &acb);
 }
 
 static void raw_aio_attach_aio_context(BlockDriverState *bs,
@@ -2244,7 +2243,7 @@  raw_regular_truncate(BlockDriverState *bs, int fd, int64_t offset,
         },
     };
 
-    return raw_thread_pool_submit(bs, handle_aiocb_truncate, &acb);
+    return raw_thread_pool_submit(handle_aiocb_truncate, &acb);
 }
 
 static int coroutine_fn raw_co_truncate(BlockDriverState *bs, int64_t offset,
@@ -2994,7 +2993,7 @@  raw_do_pdiscard(BlockDriverState *bs, int64_t offset, int64_t bytes,
         acb.aio_type |= QEMU_AIO_BLKDEV;
     }
 
-    ret = raw_thread_pool_submit(bs, handle_aiocb_discard, &acb);
+    ret = raw_thread_pool_submit(handle_aiocb_discard, &acb);
     raw_account_discard(s, bytes, ret);
     return ret;
 }
@@ -3069,7 +3068,7 @@  raw_do_pwrite_zeroes(BlockDriverState *bs, int64_t offset, int64_t bytes,
         handler = handle_aiocb_write_zeroes;
     }
 
-    return raw_thread_pool_submit(bs, handler, &acb);
+    return raw_thread_pool_submit(handler, &acb);
 }
 
 static int coroutine_fn raw_co_pwrite_zeroes(
@@ -3280,7 +3279,7 @@  static int coroutine_fn raw_co_copy_range_to(BlockDriverState *bs,
         },
     };
 
-    return raw_thread_pool_submit(bs, handle_aiocb_copy_range, &acb);
+    return raw_thread_pool_submit(handle_aiocb_copy_range, &acb);
 }
 
 BlockDriver bdrv_file = {
@@ -3626,7 +3625,7 @@  hdev_co_ioctl(BlockDriverState *bs, unsigned long int req, void *buf)
         },
     };
 
-    return raw_thread_pool_submit(bs, handle_aiocb_ioctl, &acb);
+    return raw_thread_pool_submit(handle_aiocb_ioctl, &acb);
 }
 #endif /* linux */
 
diff --git a/block/file-win32.c b/block/file-win32.c
index ec9d64d0e4..3d7f59a592 100644
--- a/block/file-win32.c
+++ b/block/file-win32.c
@@ -167,7 +167,7 @@  static BlockAIOCB *paio_submit(BlockDriverState *bs, HANDLE hfile,
     acb->aio_offset = offset;
 
     trace_file_paio_submit(acb, opaque, offset, count, type);
-    pool = aio_get_thread_pool(bdrv_get_aio_context(bs));
+    pool = aio_get_thread_pool(qemu_get_current_aio_context());
     return thread_pool_submit_aio(pool, aio_worker, acb, cb, opaque);
 }
 
diff --git a/block/qcow2-threads.c b/block/qcow2-threads.c
index 1914baf456..9e370acbb3 100644
--- a/block/qcow2-threads.c
+++ b/block/qcow2-threads.c
@@ -42,7 +42,7 @@  qcow2_co_process(BlockDriverState *bs, ThreadPoolFunc *func, void *arg)
 {
     int ret;
     BDRVQcow2State *s = bs->opaque;
-    ThreadPool *pool = aio_get_thread_pool(bdrv_get_aio_context(bs));
+    ThreadPool *pool = aio_get_thread_pool(qemu_get_current_aio_context());
 
     qemu_co_mutex_lock(&s->lock);
     while (s->nb_threads >= QCOW2_MAX_THREADS) {
diff --git a/util/thread-pool.c b/util/thread-pool.c
index 31113b5860..74ce35f7a6 100644
--- a/util/thread-pool.c
+++ b/util/thread-pool.c
@@ -48,7 +48,7 @@  struct ThreadPoolElement {
     /* Access to this list is protected by lock.  */
     QTAILQ_ENTRY(ThreadPoolElement) reqs;
 
-    /* Access to this list is protected by the global mutex.  */
+    /* This list is only written by the thread pool's mother thread.  */
     QLIST_ENTRY(ThreadPoolElement) all;
 };
 
@@ -175,7 +175,6 @@  static void thread_pool_completion_bh(void *opaque)
     ThreadPool *pool = opaque;
     ThreadPoolElement *elem, *next;
 
-    aio_context_acquire(pool->ctx);
 restart:
     QLIST_FOREACH_SAFE(elem, &pool->head, all, next) {
         if (elem->state != THREAD_DONE) {
@@ -195,9 +194,7 @@  restart:
              */
             qemu_bh_schedule(pool->completion_bh);
 
-            aio_context_release(pool->ctx);
             elem->common.cb(elem->common.opaque, elem->ret);
-            aio_context_acquire(pool->ctx);
 
             /* We can safely cancel the completion_bh here regardless of someone
              * else having scheduled it meanwhile because we reenter the
@@ -211,7 +208,6 @@  restart:
             qemu_aio_unref(elem);
         }
     }
-    aio_context_release(pool->ctx);
 }
 
 static void thread_pool_cancel(BlockAIOCB *acb)