
block: fix bdrv_ioctl called from coroutine

Message ID 1450290827-30508-5-git-send-email-pbonzini@redhat.com
State New

Commit Message

Paolo Bonzini Dec. 16, 2015, 6:33 p.m. UTC
When called from a coroutine, bdrv_ioctl must be asynchronous just like
e.g. bdrv_flush.  The code was incorrectly making it synchronous, fix
it.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
        Fam, any reason why you did it this way?  I don't see
        any coroutine caller, but it doesn't make much sense. :)

 block/io.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

Comments

Fam Zheng Dec. 17, 2015, 12:59 a.m. UTC | #1
On Wed, 12/16 19:33, Paolo Bonzini wrote:
> When called from a coroutine, bdrv_ioctl must be asynchronous just like
> e.g. bdrv_flush.  The code was incorrectly making it synchronous, fix
> it.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>         Fam, any reason why you did it this way?  I don't see
>         any coroutine caller, but it doesn't make much sense. :)

That is a surprising question!  From a coroutine, the path is bdrv_flush ->
bdrv_flush_co_entry -> bdrv_co_flush, which I think is always synchronous,
especially considering the code around the call to bs->drv->bdrv_aio_flush:

        acb = bs->drv->bdrv_aio_flush(bs, bdrv_co_io_em_complete, &co);
        if (acb == NULL) {
            ret = -EIO;
        } else {
            qemu_coroutine_yield();
            ret = co.ret;
        }

Am I missing something?

Fam

> 
>  block/io.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/block/io.c b/block/io.c
> index e00fb5d..841f5b5 100644
> --- a/block/io.c
> +++ b/block/io.c
> @@ -2614,10 +2614,11 @@ int bdrv_ioctl(BlockDriverState *bs, unsigned long int req, void *buf)
>          bdrv_co_ioctl_entry(&data);
>      } else {
>          Coroutine *co = qemu_coroutine_create(bdrv_co_ioctl_entry);
> +
>          qemu_coroutine_enter(co, &data);
> -    }
> -    while (data.ret == -EINPROGRESS) {
> -        aio_poll(bdrv_get_aio_context(bs), true);
> +        while (data.ret == -EINPROGRESS) {
> +            aio_poll(bdrv_get_aio_context(bs), true);
> +        }
>      }
>      return data.ret;
>  }
> -- 
> 2.5.0
>
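
A note on the snippet above: the completion callback it passes to
bs->drv->bdrv_aio_flush is what re-enters the coroutine after the yield.
Reconstructed from memory of block/io.c at that time (the struct and field
names may not match exactly), it looks roughly like this:

    typedef struct CoroutineIOCompletion {
        Coroutine *coroutine;
        int ret;
    } CoroutineIOCompletion;

    static void bdrv_co_io_em_complete(void *opaque, int ret)
    {
        CoroutineIOCompletion *co = opaque;

        /* Record the result and wake up the coroutine that yielded
         * after submitting the AIO request. */
        co->ret = ret;
        qemu_coroutine_enter(co->coroutine, NULL);
    }
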
Paolo Bonzini Dec. 17, 2015, 8:44 a.m. UTC | #2
On 17/12/2015 01:59, Fam Zheng wrote:
> On Wed, 12/16 19:33, Paolo Bonzini wrote:
>> When called from a coroutine, bdrv_ioctl must be asynchronous just like
>> e.g. bdrv_flush.  The code was incorrectly making it synchronous, fix
>> it.
>>
>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>> ---
>>         Fam, any reason why you did it this way?  I don't see
>>         any coroutine caller, but it doesn't make much sense. :)
> 
> That is a surprising question!  From a coroutine, the path is bdrv_flush ->
> bdrv_flush_co_entry -> bdrv_co_flush, which I think is always synchronous,
> especially considering the code around the call to bs->drv->bdrv_aio_flush:
> 
>         acb = bs->drv->bdrv_aio_flush(bs, bdrv_co_io_em_complete, &co);
>         if (acb == NULL) {
>             ret = -EIO;
>         } else {
>             qemu_coroutine_yield();
>             ret = co.ret;
>         }
> 
> Am I missing something?

In the coroutine case, the yield is hidden in the drivers, and it may or
may not happen.  For example, qcow2_co_flush_to_os starts with

    qemu_co_mutex_lock(&s->lock);

which can yield.

Paolo

> Fam
> 
>>
>>  block/io.c | 7 ++++---
>>  1 file changed, 4 insertions(+), 3 deletions(-)
>>
>> diff --git a/block/io.c b/block/io.c
>> index e00fb5d..841f5b5 100644
>> --- a/block/io.c
>> +++ b/block/io.c
>> @@ -2614,10 +2614,11 @@ int bdrv_ioctl(BlockDriverState *bs, unsigned long int req, void *buf)
>>          bdrv_co_ioctl_entry(&data);
>>      } else {
>>          Coroutine *co = qemu_coroutine_create(bdrv_co_ioctl_entry);
>> +
>>          qemu_coroutine_enter(co, &data);
>> -    }
>> -    while (data.ret == -EINPROGRESS) {
>> -        aio_poll(bdrv_get_aio_context(bs), true);
>> +        while (data.ret == -EINPROGRESS) {
>> +            aio_poll(bdrv_get_aio_context(bs), true);
>> +        }
>>      }
>>      return data.ret;
>>  }
>> -- 
>> 2.5.0
>>
> 
>
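
To illustrate the qemu_co_mutex_lock point above: a simplified sketch of the
coroutine mutex of that era (tracing and other details omitted), showing where
the hidden yield happens when the lock is contended:

    void coroutine_fn qemu_co_mutex_lock(CoMutex *mutex)
    {
        while (mutex->locked) {
            /* Another coroutine holds the lock: queue ourselves and
             * yield until qemu_co_mutex_unlock() wakes us up. */
            qemu_co_queue_wait(&mutex->queue);
        }
        mutex->locked = true;
    }
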
Fam Zheng Dec. 17, 2015, 12:33 p.m. UTC | #3
On Thu, 12/17 09:44, Paolo Bonzini wrote:
> 
> 
> On 17/12/2015 01:59, Fam Zheng wrote:
> > On Wed, 12/16 19:33, Paolo Bonzini wrote:
> >> When called from a coroutine, bdrv_ioctl must be asynchronous just like
> >> e.g. bdrv_flush.  The code was incorrectly making it synchronous, fix
> >> it.
> >>
> >> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> >> ---
> >>         Fam, any reason why you did it this way?  I don't see
> >>         any coroutine caller, but it doesn't make much sense. :)
> > 
> > That is a surprising question!  From a coroutine, the path is bdrv_flush ->
> > bdrv_flush_co_entry -> bdrv_co_flush, which I think is always synchronous,
> > especially considering the code around the call to bs->drv->bdrv_aio_flush:
> > 
> >         acb = bs->drv->bdrv_aio_flush(bs, bdrv_co_io_em_complete, &co);
> >         if (acb == NULL) {
> >             ret = -EIO;
> >         } else {
> >             qemu_coroutine_yield();
> >             ret = co.ret;
> >         }
> > 
> > Am I missing something?
> 
> In the coroutine case, the yield is hidden in the drivers, and it may or
> may not happen.  For example, qcow2_co_flush_to_os starts with
> 
>     qemu_co_mutex_lock(&s->lock);
> 
> which can yield.

bdrv_ioctl, on the contrary, is emulated with .bdrv_aio_ioctl, so it always
yields (unless it returns -ENOTSUP); that's why I think aio_poll() is necessary
in both branches.

Fam
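
The emulation Fam refers to has roughly the following shape (request tracking
omitted; treat the exact names as a reconstruction of block/io.c rather than
the verbatim source).  Unless the driver provides no .bdrv_aio_ioctl callback,
the coroutine always yields and is later re-entered by bdrv_co_io_em_complete():

    static int coroutine_fn bdrv_co_do_ioctl(BlockDriverState *bs,
                                             unsigned long int req, void *buf)
    {
        CoroutineIOCompletion co = {
            .coroutine = qemu_coroutine_self(),
        };
        BlockAIOCB *acb;

        if (!bs->drv || !bs->drv->bdrv_aio_ioctl) {
            return -ENOTSUP;
        }

        acb = bs->drv->bdrv_aio_ioctl(bs, req, buf, bdrv_co_io_em_complete, &co);
        if (!acb) {
            return -ENOTSUP;
        }
        /* Yield until the AIO completion callback re-enters us with the
         * result. */
        qemu_coroutine_yield();
        return co.ret;
    }
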
Paolo Bonzini Dec. 17, 2015, 12:51 p.m. UTC | #4
On 17/12/2015 13:33, Fam Zheng wrote:
> > In the coroutine case, the yield is hidden in the drivers, and it may or
> > may not happen.  For example, qcow2_co_flush_to_os starts with
> > 
> >     qemu_co_mutex_lock(&s->lock);
> > 
> > which can yield.
> 
> bdrv_ioctl, on the contrary, is emulated with .bdrv_aio_ioctl, so it always
> yields (unless it returns -ENOTSUP); that's why I think aio_poll() is necessary
> in both branches.

But why should it yield, if called in coroutine context?  bdrv_flush
doesn't yield if emulated with .bdrv_aio_flush *and* called in coroutine
context.  bdrv_ioctl should do the same, I think.

Paolo
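
For comparison, the bdrv_flush wrapper Paolo points to has this shape (sketched
from the block/io.c of the time; the struct name is a guess).  In coroutine
context it calls the entry function directly and never calls aio_poll(), because
any yield inside bdrv_co_flush is resumed by the driver's completion callback
rather than by the wrapper:

    int bdrv_flush(BlockDriverState *bs)
    {
        Coroutine *co;
        RwCo rwco = {
            .bs = bs,
            .ret = NOT_DONE,
        };

        if (qemu_in_coroutine()) {
            /* Fast-path if already in coroutine context */
            bdrv_flush_co_entry(&rwco);
        } else {
            AioContext *aio_context = bdrv_get_aio_context(bs);

            co = qemu_coroutine_create(bdrv_flush_co_entry);
            qemu_coroutine_enter(co, &rwco);
            while (rwco.ret == NOT_DONE) {
                aio_poll(aio_context, true);
            }
        }

        return rwco.ret;
    }
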
Fam Zheng Dec. 18, 2015, 1:17 a.m. UTC | #5
On Thu, 12/17 13:51, Paolo Bonzini wrote:
> 
> 
> On 17/12/2015 13:33, Fam Zheng wrote:
> > > In the coroutine case, the yield is hidden in the drivers, and it may or
> > > may not happen.  For example, qcow2_co_flush_to_os starts with
> > > 
> > >     qemu_co_mutex_lock(&s->lock);
> > > 
> > > which can yield.
> > 
> > bdrv_ioctl, on the contrary, is emulated with .bdrv_aio_ioctl, so it always
> > yields (unless it returns -ENOTSUP); that's why I think aio_poll() is necessary
> > in both branches.
> 
> But why should it yield, if called in coroutine context?  bdrv_flush
> doesn't yield if emulated with .bdrv_aio_flush *and* called in coroutine
> context.  bdrv_ioctl should do the same, I think.
> 

Looking at bdrv_flush again I see what you mean. Thanks for explaining!

Reviewed-by: Fam Zheng <famz@redhat.com>
Kevin Wolf Dec. 18, 2015, 12:16 p.m. UTC | #6
[ Cc: qemu-block ]

On 16.12.2015 at 19:33, Paolo Bonzini wrote:
> When called from a coroutine, bdrv_ioctl must be asynchronous just like
> e.g. bdrv_flush.  The code was incorrectly making it synchronous, fix
> it.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Thanks, applied to the block branch.

Kevin

Patch

diff --git a/block/io.c b/block/io.c
index e00fb5d..841f5b5 100644
--- a/block/io.c
+++ b/block/io.c
@@ -2614,10 +2614,11 @@ int bdrv_ioctl(BlockDriverState *bs, unsigned long int req, void *buf)
         bdrv_co_ioctl_entry(&data);
     } else {
         Coroutine *co = qemu_coroutine_create(bdrv_co_ioctl_entry);
+
         qemu_coroutine_enter(co, &data);
-    }
-    while (data.ret == -EINPROGRESS) {
-        aio_poll(bdrv_get_aio_context(bs), true);
+        while (data.ret == -EINPROGRESS) {
+            aio_poll(bdrv_get_aio_context(bs), true);
+        }
     }
     return data.ret;
 }
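
Putting the hunk in context, the whole wrapper after this patch looks roughly
as follows.  The completion data struct and the entry function are not part of
the hunk, so their names and fields here are a reconstruction rather than the
exact source:

    typedef struct {
        BlockDriverState *bs;
        unsigned long int req;
        void *buf;
        int ret;
    } BdrvIoctlCoData;

    static void coroutine_fn bdrv_co_ioctl_entry(void *opaque)
    {
        BdrvIoctlCoData *data = opaque;

        data->ret = bdrv_co_do_ioctl(data->bs, data->req, data->buf);
    }

    int bdrv_ioctl(BlockDriverState *bs, unsigned long int req, void *buf)
    {
        BdrvIoctlCoData data = {
            .bs = bs,
            .req = req,
            .buf = buf,
            .ret = -EINPROGRESS,
        };

        if (qemu_in_coroutine()) {
            /* Fast path: already in coroutine context.  The yield inside
             * bdrv_co_do_ioctl is resumed by the AIO completion callback,
             * so no polling is needed here. */
            bdrv_co_ioctl_entry(&data);
        } else {
            Coroutine *co = qemu_coroutine_create(bdrv_co_ioctl_entry);

            qemu_coroutine_enter(co, &data);
            while (data.ret == -EINPROGRESS) {
                aio_poll(bdrv_get_aio_context(bs), true);
            }
        }
        return data.ret;
    }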