
[for-2.4] block: delete bottom halves before the AioContext is freed

Message ID 1438086628-13000-1-git-send-email-pbonzini@redhat.com
State New

Commit Message

Paolo Bonzini July 28, 2015, 12:30 p.m. UTC
Other uses of aio_bh_new are safe as long as all scheduled bottom
halves are run before an iothread is destroyed, which bdrv_drain will
ensure (a sketch of the self-deleting idiom follows the diffstat below):

- ctx->notify_dummy_bh: cleared by aio_ctx_finalize

- archipelago_finish_aiocb: BH deletes itself

- inject_error: BH deletes itself

- blkverify_aio_bh: BH deletes itself

- abort_aio_request: BH deletes itself

- curl_aio_readv: BH deletes itself

- gluster_finish_aiocb: BH deletes itself

- bdrv_aio_rw_vector: BH deletes itself

- bdrv_co_maybe_schedule_bh: BH deletes itself

- iscsi_schedule_bh, iscsi_co_generic_cb: BH deletes itself

- laio_attach_aio_context: deleted in laio_detach_aio_context,
called through bdrv_detach_aio_context before deleting the iothread

- nfs_co_generic_cb: BH deletes itself

- null_aio_common: BH deletes itself

- qed_aio_complete: BH deletes itself

- rbd_finish_aiocb: BH deletes itself

- dma_blk_cb: BH deletes itself

- virtio_blk_dma_restart_cb: BH deletes itself

- qemu_bh_new: main loop AioContext is never destroyed

- test-aio.c: bh_delete_cb deletes itself, otherwise deleted in
the same function that calls aio_bh_new

Reported-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 async.c                         | 2 +-
 hw/block/dataplane/virtio-blk.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
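
The "BH deletes itself" pattern above is the common completion idiom: the
callback's first action is to delete its own QEMUBH, so by the time the
AioContext is finalized no scheduled BH is left behind.  A minimal sketch
(hypothetical MyAIOCB type and field names, not code from any of the
drivers listed):

    static void my_aio_bh_cb(void *opaque)
    {
        MyAIOCB *acb = opaque;

        qemu_bh_delete(acb->bh);                      /* BH deletes itself */
        acb->common.cb(acb->common.opaque, acb->ret); /* complete request  */
        qemu_aio_unref(acb);
    }

    /* Completion side: create the BH on the BDS's context and schedule it. */
    acb->bh = aio_bh_new(bdrv_get_aio_context(bs), my_aio_bh_cb, acb);
    qemu_bh_schedule(acb->bh);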

Comments

Cornelia Huck July 28, 2015, 1:11 p.m. UTC | #1
On Tue, 28 Jul 2015 14:30:28 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:

> diff --git a/async.c b/async.c
> index 9ca7095..efce14b 100644
> --- a/async.c
> +++ b/async.c
> @@ -233,6 +233,7 @@ aio_ctx_finalize(GSource     *source)
>      AioContext *ctx = (AioContext *) source;
> 
>      qemu_bh_delete(ctx->notify_dummy_bh);
> +    thread_pool_free(ctx->thread_pool);
> 
>      qemu_mutex_lock(&ctx->bh_lock);
>      while (ctx->first_bh) {
> @@ -246,7 +247,6 @@ aio_ctx_finalize(GSource     *source)
>      }
>      qemu_mutex_unlock(&ctx->bh_lock);
> 
> -    thread_pool_free(ctx->thread_pool);
>      aio_set_event_notifier(ctx, &ctx->notifier, NULL);
>      event_notifier_cleanup(&ctx->notifier);
>      rfifolock_destroy(&ctx->lock);
> diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
> index 3db139b..6106e46 100644
> --- a/hw/block/dataplane/virtio-blk.c
> +++ b/hw/block/dataplane/virtio-blk.c
> @@ -223,8 +223,8 @@ void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
>      virtio_blk_data_plane_stop(s);
>      blk_op_unblock_all(s->conf->conf.blk, s->blocker);
>      error_free(s->blocker);
> -    object_unref(OBJECT(s->iothread));
>      qemu_bh_delete(s->bh);
> +    object_unref(OBJECT(s->iothread));
>      g_free(s);
>  }
> 

With this applied on top of Stefan's AioContext patches, both
managedsave and device_del with dataplane devices work for me.

Tested-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Stefan Hajnoczi July 28, 2015, 2:06 p.m. UTC | #2
On Tue, Jul 28, 2015 at 2:11 PM, Cornelia Huck <cornelia.huck@de.ibm.com> wrote:
> On Tue, 28 Jul 2015 14:30:28 +0200
> Paolo Bonzini <pbonzini@redhat.com> wrote:
>
>> diff --git a/async.c b/async.c
>> index 9ca7095..efce14b 100644
>> --- a/async.c
>> +++ b/async.c
>> @@ -233,6 +233,7 @@ aio_ctx_finalize(GSource     *source)
>>      AioContext *ctx = (AioContext *) source;
>>
>>      qemu_bh_delete(ctx->notify_dummy_bh);
>> +    thread_pool_free(ctx->thread_pool);
>>
>>      qemu_mutex_lock(&ctx->bh_lock);
>>      while (ctx->first_bh) {
>> @@ -246,7 +247,6 @@ aio_ctx_finalize(GSource     *source)
>>      }
>>      qemu_mutex_unlock(&ctx->bh_lock);
>>
>> -    thread_pool_free(ctx->thread_pool);
>>      aio_set_event_notifier(ctx, &ctx->notifier, NULL);
>>      event_notifier_cleanup(&ctx->notifier);
>>      rfifolock_destroy(&ctx->lock);

My v2 patch includes something similar so that thread_pool_free() is
called before freeing BHs.

We can take this patch on top of my v1.
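
The ordering matters because aio_ctx_finalize()'s BH-freeing loop (in the
series this applies on top of) expects every BH to have been deleted
already, and thread_pool_free() is what deletes the thread pool's
completion BH.  The resulting teardown sequence, roughly:

    qemu_bh_delete(ctx->notify_dummy_bh);  /* delete our own BHs first...  */
    thread_pool_free(ctx->thread_pool);    /* ...including the pool's
                                              completion BH, before...     */
    /* ...the ctx->first_bh loop frees the (now all deleted) BH structs.   */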

>> diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
>> index 3db139b..6106e46 100644
>> --- a/hw/block/dataplane/virtio-blk.c
>> +++ b/hw/block/dataplane/virtio-blk.c
>> @@ -223,8 +223,8 @@ void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
>>      virtio_blk_data_plane_stop(s);
>>      blk_op_unblock_all(s->conf->conf.blk, s->blocker);
>>      error_free(s->blocker);
>> -    object_unref(OBJECT(s->iothread));
>>      qemu_bh_delete(s->bh);
>> +    object_unref(OBJECT(s->iothread));
>>      g_free(s);
>>  }
>>
>
> With this applied on top of Stefan's AioContext patches, both
> managedsave and device_del with dataplane devices work for me.
>
> Tested-by: Cornelia Huck <cornelia.huck@de.ibm.com>

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>

Patch

diff --git a/async.c b/async.c
index 9ca7095..efce14b 100644
--- a/async.c
+++ b/async.c
@@ -233,6 +233,7 @@  aio_ctx_finalize(GSource     *source)
     AioContext *ctx = (AioContext *) source;
 
     qemu_bh_delete(ctx->notify_dummy_bh);
+    thread_pool_free(ctx->thread_pool);
 
     qemu_mutex_lock(&ctx->bh_lock);
     while (ctx->first_bh) {
@@ -246,7 +247,6 @@  aio_ctx_finalize(GSource     *source)
     }
     qemu_mutex_unlock(&ctx->bh_lock);
 
-    thread_pool_free(ctx->thread_pool);
     aio_set_event_notifier(ctx, &ctx->notifier, NULL);
     event_notifier_cleanup(&ctx->notifier);
     rfifolock_destroy(&ctx->lock);
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 3db139b..6106e46 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -223,8 +223,8 @@  void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
     virtio_blk_data_plane_stop(s);
     blk_op_unblock_all(s->conf->conf.blk, s->blocker);
     error_free(s->blocker);
-    object_unref(OBJECT(s->iothread));
     qemu_bh_delete(s->bh);
+    object_unref(OBJECT(s->iothread));
     g_free(s);
 }
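
The virtio-blk hunk applies the same rule from the device side: a BH must
be deleted while the AioContext that owns it is still alive.  Since s->bh
was created on the IOThread's context, and dropping the last reference to
s->iothread can finalize that context, the delete has to come first
(annotated sketch of the fixed ordering, assuming s->bh lives on the
IOThread's AioContext):

    qemu_bh_delete(s->bh);              /* BH gone while its context lives */
    object_unref(OBJECT(s->iothread));  /* may finalize the AioContext     */
    g_free(s);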