[v3,01/17] block: stop relying on io_flush() in bdrv_drain_all()

Message ID 20130610143842.GB4838@stefanha-thinkpad.redhat.com
State: New

Commit Message

Stefan Hajnoczi June 10, 2013, 2:38 p.m. UTC
On Mon, Jun 10, 2013 at 02:25:57PM +0200, Stefan Hajnoczi wrote:
> @@ -1427,26 +1456,18 @@ void bdrv_close_all(void)
>  void bdrv_drain_all(void)
>  {
>      BlockDriverState *bs;
> -    bool busy;
> -
> -    do {
> -        busy = qemu_aio_wait();
>  
> +    while (bdrv_requests_pending_all()) {
>          /* FIXME: We do not have timer support here, so this is effectively
>           * a busy wait.
>           */
>          QTAILQ_FOREACH(bs, &bdrv_states, list) {
>              if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
>                  qemu_co_queue_restart_all(&bs->throttled_reqs);
> -                busy = true;
>              }
>          }
> -    } while (busy);
>  
> -    /* If requests are still pending there is a bug somewhere */
> -    QTAILQ_FOREACH(bs, &bdrv_states, list) {
> -        assert(QLIST_EMPTY(&bs->tracked_requests));
> -        assert(qemu_co_queue_empty(&bs->throttled_reqs));
> +        qemu_aio_wait();
>      }
>  }

tests/ide-test found an issue here: block.c invokes completion callbacks
from a BH, so a request may not yet have completed when this loop
terminates.

Kevin: can you fold in this patch?
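
For context, the bdrv_aio_* emulation in block.c defers the caller's
completion callback to a bottom half, roughly along these lines (a
simplified sketch, not the exact code):

    static void bdrv_co_em_bh(void *opaque)
    {
        BlockDriverAIOCBCoroutine *acb = opaque;

        /* The caller's completion callback only runs here, in the BH */
        acb->common.cb(acb->common.opaque, acb->req.error);
        qemu_bh_delete(acb->bh);
        qemu_aio_release(acb);
    }

    /* ...and the request coroutine finishes by scheduling that BH: */
    acb->bh = qemu_bh_new(bdrv_co_em_bh, acb);
    qemu_bh_schedule(acb->bh);

The tracked request is already finished by the time the BH is scheduled,
so bdrv_requests_pending_all() can return false while a completion
callback is still queued.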

Comments

Paolo Bonzini June 13, 2013, 2:33 p.m. UTC | #1
On 10/06/2013 10:38, Stefan Hajnoczi wrote:
> [...]
> 
> diff --git a/block.c b/block.c
> index 31f7231..e176215 100644
> --- a/block.c
> +++ b/block.c
> @@ -1469,6 +1469,9 @@ void bdrv_drain_all(void)
>  
>          qemu_aio_wait();
>      }
> +
> +    /* Process pending completion BHs */
> +    aio_poll(qemu_get_aio_context(), false);
>  }

aio_poll() could require multiple iterations; this is why the old code
started with "busy = qemu_aio_wait()".  You can add a while loop around
the new call, or do something closer to the old code, along the lines
of:

    do {
        QTAILQ_FOREACH(...) ...;    /* restart throttled requests */
        busy = bdrv_requests_pending_all();
        busy |= aio_poll(qemu_get_aio_context(), busy);
    } while (busy);
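
Spelled out as a whole function, that suggestion would look roughly like
this (a sketch against the APIs used in this series, not necessarily the
final committed code):

    void bdrv_drain_all(void)
    {
        /* Start with busy = true so at least one aio_poll() pass runs
         * even when no requests are pending, flushing any queued
         * completion BHs. */
        bool busy = true;
        BlockDriverState *bs;

        while (busy) {
            /* Restart throttled requests; without timer support this
             * is effectively a busy wait. */
            QTAILQ_FOREACH(bs, &bdrv_states, list) {
                if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
                    qemu_co_queue_restart_all(&bs->throttled_reqs);
                }
            }

            busy = bdrv_requests_pending_all();
            busy |= aio_poll(qemu_get_aio_context(), busy);
        }
    }
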
Stefan Hajnoczi June 14, 2013, 10:58 a.m. UTC | #2
On Thu, Jun 13, 2013 at 10:33:58AM -0400, Paolo Bonzini wrote:
> [...]
> 
> aio_poll() could require multiple iterations; this is why the old code
> started with "busy = qemu_aio_wait()". [...]

Good idea; the trick is to use busy as the blocking flag.
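
That is, each iteration would do something like (sketch):

    busy = bdrv_requests_pending_all();              /* requests in flight? */
    busy |= aio_poll(qemu_get_aio_context(), busy);  /* block only if so */

aio_poll() returns true when it made progress (it ran a BH or a
handler), so the loop only exits once a final non-blocking pass finds
nothing left to do, i.e. after the pending completion BHs have run.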

I will resend with the fix to avoid making this email thread too messy.

Stefan

Patch

diff --git a/block.c b/block.c
index 31f7231..e176215 100644
--- a/block.c
+++ b/block.c
@@ -1469,6 +1469,9 @@ void bdrv_drain_all(void)
 
         qemu_aio_wait();
     }
+
+    /* Process pending completion BHs */
+    aio_poll(qemu_get_aio_context(), false);
 }
 
 /* make a BlockDriverState anonymous by removing from bdrv_state list.