| Message ID | 1476380062-18001-13-git-send-email-pbonzini@redhat.com |
|---|---|
| State | New |
On Thu, 10/13 19:34, Paolo Bonzini wrote:
> Soon bdrv_drain will not call aio_poll itself on iothreads. If block
> devices are left hanging off the iothread's AioContext, there will be no
> one to do I/O for those poor devices.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  iothread.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
>
> diff --git a/iothread.c b/iothread.c
> index 62c8796..8153e21 100644
> --- a/iothread.c
> +++ b/iothread.c
> @@ -16,6 +16,7 @@
>  #include "qom/object_interfaces.h"
>  #include "qemu/module.h"
>  #include "block/aio.h"
> +#include "block/block.h"
>  #include "sysemu/iothread.h"
>  #include "qmp-commands.h"
>  #include "qemu/error-report.h"
> @@ -199,6 +200,15 @@ IOThreadInfoList *qmp_query_iothreads(Error **errp)
>  void iothread_stop_all(void)
>  {
>      Object *container = object_get_objects_root();
> +    BlockDriverState *bs;
> +    BdrvNextIterator it;
> +
> +    for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
> +        AioContext *ctx = bdrv_get_aio_context(bs);

I have a strong feeling that we should 'continue' if ctx ==
qemu_get_aio_context() - otherwise a lot of unnecessary (and somewhat
complicated) code will always run, even if the user has no iothread.

Fam

> +        aio_context_acquire(ctx);
> +        bdrv_set_aio_context(bs, qemu_get_aio_context());
> +        aio_context_release(ctx);
> +    }
>
>      object_child_foreach(container, iothread_stop, NULL);
>  }
> --
> 2.7.4
On 14/10/2016 16:50, Fam Zheng wrote:
>> +    BdrvNextIterator it;
>> +
>> +    for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
>> +        AioContext *ctx = bdrv_get_aio_context(bs);
>
> I have a strong feeling that we should 'continue' if ctx ==
> qemu_get_aio_context() - otherwise a lot of unnecessary (and somewhat
> complicated) code will always run, even if the user has no iothread.
>
> Fam
>
>> +        aio_context_acquire(ctx);
>> +        bdrv_set_aio_context(bs, qemu_get_aio_context());
>> +        aio_context_release(ctx);
>> +    }

Sounds good.

Paolo
diff --git a/iothread.c b/iothread.c
index 62c8796..8153e21 100644
--- a/iothread.c
+++ b/iothread.c
@@ -16,6 +16,7 @@
 #include "qom/object_interfaces.h"
 #include "qemu/module.h"
 #include "block/aio.h"
+#include "block/block.h"
 #include "sysemu/iothread.h"
 #include "qmp-commands.h"
 #include "qemu/error-report.h"
@@ -199,6 +200,15 @@ IOThreadInfoList *qmp_query_iothreads(Error **errp)
 void iothread_stop_all(void)
 {
     Object *container = object_get_objects_root();
+    BlockDriverState *bs;
+    BdrvNextIterator it;
+
+    for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
+        AioContext *ctx = bdrv_get_aio_context(bs);
+        aio_context_acquire(ctx);
+        bdrv_set_aio_context(bs, qemu_get_aio_context());
+        aio_context_release(ctx);
+    }

     object_child_foreach(container, iothread_stop, NULL);
 }
Soon bdrv_drain will not call aio_poll itself on iothreads. If block
devices are left hanging off the iothread's AioContext, there will be no
one to do I/O for those poor devices.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 iothread.c | 10 ++++++++++
 1 file changed, 10 insertions(+)