Message ID | 1399559698-31900-19-git-send-email-stefanha@redhat.com |
---|---|
State | New |
On 05/08/2014 07:34 AM, Stefan Hajnoczi wrote:
> Drop the assumption that we're using the main AioContext. Convert
> qemu_bh_new() to aio_bh_new() and qemu_aio_wait() to aio_poll().
>
> The .bdrv_detach_aio_context() and .bdrv_attach_aio_context() interfaces
> are not needed since no fd handlers, timers, or BHs stay registered when
> requests have been drained.
>
> Cc: Josh Durgin <josh.durgin@inktank.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  block/rbd.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/block/rbd.c b/block/rbd.c
> index dbc79f4..41f7bdc 100644
> --- a/block/rbd.c
> +++ b/block/rbd.c
> @@ -548,7 +548,7 @@ static void qemu_rbd_aio_cancel(BlockDriverAIOCB *blockacb)
>      acb->cancelled = 1;
>
>      while (acb->status == -EINPROGRESS) {
> -        qemu_aio_wait();
> +        aio_poll(bdrv_get_aio_context(acb->common.bs), true);
>      }
>
>      qemu_aio_release(acb);
> @@ -581,7 +581,8 @@ static void rbd_finish_aiocb(rbd_completion_t c, RADOSCB *rcb)
>      rcb->ret = rbd_aio_get_return_value(c);
>      rbd_aio_release(c);
>
> -    acb->bh = qemu_bh_new(rbd_finish_bh, rcb);
> +    acb->bh = aio_bh_new(bdrv_get_aio_context(acb->common.bs),
> +                         rbd_finish_bh, rcb);
>      qemu_bh_schedule(acb->bh);
>  }

Assuming bdrv_get_aio_context() continues to be safe to call from a
non-qemu thread:

Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
On 09/05/2014 02:04, Josh Durgin wrote:
> On 05/08/2014 07:34 AM, Stefan Hajnoczi wrote:
>> Drop the assumption that we're using the main AioContext. Convert
>> qemu_bh_new() to aio_bh_new() and qemu_aio_wait() to aio_poll().
>>
>> The .bdrv_detach_aio_context() and .bdrv_attach_aio_context() interfaces
>> are not needed since no fd handlers, timers, or BHs stay registered when
>> requests have been drained.
>>
>> Cc: Josh Durgin <josh.durgin@inktank.com>
>> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>> ---
>>  block/rbd.c | 5 +++--
>>  1 file changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/block/rbd.c b/block/rbd.c
>> index dbc79f4..41f7bdc 100644
>> --- a/block/rbd.c
>> +++ b/block/rbd.c
>> @@ -548,7 +548,7 @@ static void qemu_rbd_aio_cancel(BlockDriverAIOCB *blockacb)
>>      acb->cancelled = 1;
>>
>>      while (acb->status == -EINPROGRESS) {
>> -        qemu_aio_wait();
>> +        aio_poll(bdrv_get_aio_context(acb->common.bs), true);
>>      }
>>
>>      qemu_aio_release(acb);
>> @@ -581,7 +581,8 @@ static void rbd_finish_aiocb(rbd_completion_t c, RADOSCB *rcb)
>>      rcb->ret = rbd_aio_get_return_value(c);
>>      rbd_aio_release(c);
>>
>> -    acb->bh = qemu_bh_new(rbd_finish_bh, rcb);
>> +    acb->bh = aio_bh_new(bdrv_get_aio_context(acb->common.bs),
>> +                         rbd_finish_bh, rcb);
>>      qemu_bh_schedule(acb->bh);
>>  }
>
> Assuming bdrv_get_aio_context() continues to be safe to call from a
> non-qemu thread:
>
> Reviewed-by: Josh Durgin <josh.durgin@inktank.com>

bdrv_get_aio_context()/bdrv_set_aio_context() are safe if called from
the AioContext's own iothread. This is the case here; in fact,
aio_bh_new() has the same requirement (qemu_bh_schedule() instead does
not).

Paolo
On Thu, May 08, 2014 at 05:04:16PM -0700, Josh Durgin wrote:
> On 05/08/2014 07:34 AM, Stefan Hajnoczi wrote:
> >Drop the assumption that we're using the main AioContext. Convert
> >qemu_bh_new() to aio_bh_new() and qemu_aio_wait() to aio_poll().
> >
> >The .bdrv_detach_aio_context() and .bdrv_attach_aio_context() interfaces
> >are not needed since no fd handlers, timers, or BHs stay registered when
> >requests have been drained.
> >
> >Cc: Josh Durgin <josh.durgin@inktank.com>
> >Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> >---
> > block/rbd.c | 5 +++--
> > 1 file changed, 3 insertions(+), 2 deletions(-)
> >
> >diff --git a/block/rbd.c b/block/rbd.c
> >index dbc79f4..41f7bdc 100644
> >--- a/block/rbd.c
> >+++ b/block/rbd.c
> >@@ -548,7 +548,7 @@ static void qemu_rbd_aio_cancel(BlockDriverAIOCB *blockacb)
> >     acb->cancelled = 1;
> >
> >     while (acb->status == -EINPROGRESS) {
> >-        qemu_aio_wait();
> >+        aio_poll(bdrv_get_aio_context(acb->common.bs), true);
> >     }
> >
> >     qemu_aio_release(acb);
> >@@ -581,7 +581,8 @@ static void rbd_finish_aiocb(rbd_completion_t c, RADOSCB *rcb)
> >     rcb->ret = rbd_aio_get_return_value(c);
> >     rbd_aio_release(c);
> >
> >-    acb->bh = qemu_bh_new(rbd_finish_bh, rcb);
> >+    acb->bh = aio_bh_new(bdrv_get_aio_context(acb->common.bs),
> >+                         rbd_finish_bh, rcb);
> >     qemu_bh_schedule(acb->bh);
> > }
>
> Assuming bdrv_get_aio_context() continues to be safe to call from a
> non-qemu thread:
>
> Reviewed-by: Josh Durgin <josh.durgin@inktank.com>

Yes, we're running in the AioContext associated with this
BlockDriverState. This effectively means we have a lock on the
BlockDriverState.
diff --git a/block/rbd.c b/block/rbd.c
index dbc79f4..41f7bdc 100644
--- a/block/rbd.c
+++ b/block/rbd.c
@@ -548,7 +548,7 @@ static void qemu_rbd_aio_cancel(BlockDriverAIOCB *blockacb)
     acb->cancelled = 1;
 
     while (acb->status == -EINPROGRESS) {
-        qemu_aio_wait();
+        aio_poll(bdrv_get_aio_context(acb->common.bs), true);
     }
 
     qemu_aio_release(acb);
@@ -581,7 +581,8 @@ static void rbd_finish_aiocb(rbd_completion_t c, RADOSCB *rcb)
     rcb->ret = rbd_aio_get_return_value(c);
     rbd_aio_release(c);
 
-    acb->bh = qemu_bh_new(rbd_finish_bh, rcb);
+    acb->bh = aio_bh_new(bdrv_get_aio_context(acb->common.bs),
+                         rbd_finish_bh, rcb);
     qemu_bh_schedule(acb->bh);
 }
Drop the assumption that we're using the main AioContext. Convert
qemu_bh_new() to aio_bh_new() and qemu_aio_wait() to aio_poll().

The .bdrv_detach_aio_context() and .bdrv_attach_aio_context() interfaces
are not needed since no fd handlers, timers, or BHs stay registered when
requests have been drained.

Cc: Josh Durgin <josh.durgin@inktank.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/rbd.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)