
[1/2] block: add the support for draining the throttled request queue

Message ID 1329713430-9209-1-git-send-email-zwu.kernel@gmail.com
State New

Commit Message

Zhiyong Wu Feb. 20, 2012, 4:50 a.m. UTC
From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>

If a guest has multiple disks with I/O throttling enabled separately, some requests may still be sitting in the throttled queues when draining is performed, so we need to restart them first.

Moreover, when only one disk needs to be drained, such as on hot unplug, requests still sitting in another disk's throttled queue should not be affected.

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
---
 block.c     |   29 ++++++++++++++++++++++-------
 block_int.h |    1 +
 2 files changed, 23 insertions(+), 7 deletions(-)

Comments

Kevin Wolf Feb. 20, 2012, 9:26 a.m. UTC | #1
Am 20.02.2012 05:50, schrieb zwu.kernel@gmail.com:
> From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
> 
> [...]
> +void bdrv_drain_request(BlockDriverState *throttled_bs)
>  {
>      BlockDriverState *bs;
>  
> +    QTAILQ_FOREACH(bs, &bdrv_states, list) {
> +        if (throttled_bs && throttled_bs != bs) {
> +            continue;
> +        }
> +        qemu_co_queue_restart_all(&bs->throttled_reqs);
> +    }
> +
>      qemu_aio_flush();

Why doesn't qemu_aio_flush() invoke whatever is needed? I think this is
the real bug that should be fixed.

Kevin
Zhiyong Wu Feb. 20, 2012, 9:29 a.m. UTC | #2
On Mon, Feb 20, 2012 at 5:26 PM, Kevin Wolf <kwolf@redhat.com> wrote:
> Am 20.02.2012 05:50, schrieb zwu.kernel@gmail.com:
>> From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
>>
>> [...]
>> +void bdrv_drain_request(BlockDriverState *throttled_bs)
>>  {
>>      BlockDriverState *bs;
>>
>> +    QTAILQ_FOREACH(bs, &bdrv_states, list) {
>> +        if (throttled_bs && throttled_bs != bs) {
>> +            continue;
>> +        }
>> +        qemu_co_queue_restart_all(&bs->throttled_reqs);
>> +    }
>> +
>>      qemu_aio_flush();
>
> Why doesn't qemu_aio_flush() invoke whatever is needed? I think this is
> the real bug that should be fixed.
Do you mean why qemu_aio_flush() doesn't invoke those lines of code
above it? As I said in the commit message, when one VM has multiple
disks with I/O throttling enabled, if only one disk needs to be
flushed, only that disk's throttled requests need to be drained; the
other disks' throttled requests don't.

>
> Kevin
Zhiyong Wu Feb. 20, 2012, 9:34 a.m. UTC | #3
For example, when one disk of a guest is hot unplugged, its other
disks should not be affected if they also have I/O throttling enabled.
But if the whole VM needs to be stopped, the throttled requests of all
disks need to be drained, as the hypothetical call sites below show.
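
A sketch of the two call patterns just described (hypothetical call
sites, for illustration only; bdrv_drain_request() is the function
added by this patch):

    /* hot unplug of one disk: drain only that disk's throttled queue */
    bdrv_drain_request(bs);

    /* stopping the whole VM: drain the throttled queues of all disks */
    bdrv_drain_request(NULL);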

On Mon, Feb 20, 2012 at 5:26 PM, Kevin Wolf <kwolf@redhat.com> wrote:
> Am 20.02.2012 05:50, schrieb zwu.kernel@gmail.com:
>> From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
>>
>> [...]
>> +void bdrv_drain_request(BlockDriverState *throttled_bs)
>>  {
>>      BlockDriverState *bs;
>>
>> +    QTAILQ_FOREACH(bs, &bdrv_states, list) {
>> +        if (throttled_bs && throttled_bs != bs) {
>> +            continue;
>> +        }
>> +        qemu_co_queue_restart_all(&bs->throttled_reqs);
>> +    }
>> +
>>      qemu_aio_flush();
>
> Why doesn't qemu_aio_flush() invoke whatever is needed? I think this is
> the real bug that should be fixed.
>
> Kevin
Kevin Wolf Feb. 20, 2012, 9:39 a.m. UTC | #4
Am 20.02.2012 10:29, schrieb Zhi Yong Wu:
> On Mon, Feb 20, 2012 at 5:26 PM, Kevin Wolf <kwolf@redhat.com> wrote:
>> Am 20.02.2012 05:50, schrieb zwu.kernel@gmail.com:
>>> From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
>>>
>>> [...]
>>> +void bdrv_drain_request(BlockDriverState *throttled_bs)
>>>  {
>>>      BlockDriverState *bs;
>>>
>>> +    QTAILQ_FOREACH(bs, &bdrv_states, list) {
>>> +        if (throttled_bs && throttled_bs != bs) {
>>> +            continue;
>>> +        }
>>> +        qemu_co_queue_restart_all(&bs->throttled_reqs);
>>> +    }
>>> +
>>>      qemu_aio_flush();
>>
>> Why doesn't qemu_aio_flush() invoke whatever is needed? I think this is
>> the real bug that should be fixed.
> Do you mean why qemu_aio_flush() doesn't invoke those lines of code
> above it? As I said in the commit message, when one VM has multiple
> disks with I/O throttling enabled, if only one disk needs to be
> flushed, only that disk's throttled requests need to be drained; the
> other disks' throttled requests don't.

So this is an optimisation rather than a fix? Would the right
optimisation be a qemu_aio_flush_disk() that flushes the requests for a
single disk?

Kevin
Zhiyong Wu Feb. 20, 2012, 9:46 a.m. UTC | #5
On Mon, Feb 20, 2012 at 5:39 PM, Kevin Wolf <kwolf@redhat.com> wrote:
> Am 20.02.2012 10:29, schrieb Zhi Yong Wu:
>> On Mon, Feb 20, 2012 at 5:26 PM, Kevin Wolf <kwolf@redhat.com> wrote:
>>> Am 20.02.2012 05:50, schrieb zwu.kernel@gmail.com:
>>>> From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
>>>>
>>>> [...]
>>>> +void bdrv_drain_request(BlockDriverState *throttled_bs)
>>>>  {
>>>>      BlockDriverState *bs;
>>>>
>>>> +    QTAILQ_FOREACH(bs, &bdrv_states, list) {
>>>> +        if (throttled_bs && throttled_bs != bs) {
>>>> +            continue;
>>>> +        }
>>>> +        qemu_co_queue_restart_all(&bs->throttled_reqs);
>>>> +    }
>>>> +
>>>>      qemu_aio_flush();
>>>
>>> Why doesn't qemu_aio_flush() invoke whatever is needed? I think this is
>>> the real bug that should be fixed.
>> Do you mean why qemu_aio_flush() doesn't invoke those lines of code
>> above it? As I said in the commit message, when one VM has multiple
>> disks with I/O throttling enabled, if only one disk needs to be
>> flushed, only that disk's throttled requests need to be drained; the
>> other disks' throttled requests don't.
>
> So this is an optimisation rather than a fix? Would the right
It is a fix. I think the current handling is wrong when I/O
throttling is enabled.
> optimisation be a qemu_aio_flush_disk() that flushes the requests for a
> single disk?
No. If bdrv_drain_request() is invoked with a specific bs, it flushes
all the throttled requests in that bs's throttled queue, plus the
pending emulated AIO of all disks (in the AIO engine). But if
bdrv_drain_request(NULL) is invoked, it flushes the throttled requests
in the throttled queues of all disks, plus the pending emulated AIO of
all disks.

>
> Kevin
Stefan Hajnoczi Feb. 24, 2012, 8:49 a.m. UTC | #6
On Mon, Feb 20, 2012 at 12:50:30PM +0800, zwu.kernel@gmail.com wrote:
> From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
> 
> [...]
> +void bdrv_drain_request(BlockDriverState *throttled_bs)
>  {
>      BlockDriverState *bs;
>  
> +    QTAILQ_FOREACH(bs, &bdrv_states, list) {
> +        if (throttled_bs && throttled_bs != bs) {
> +            continue;
> +        }
> +        qemu_co_queue_restart_all(&bs->throttled_reqs);
> +    }
> +
>      qemu_aio_flush();

Since I/O throttling is still enabled, the restarted requests could
enqueue again if they exceed the limit.  We could still hit the assert.

If the semantics of bdrv_drain_request() are that no requests are
pending when it returns then we need a loop here.

BTW bdrv_drain() would be a shorter name for this function.

Stefan
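
A minimal sketch of the loop Stefan suggests, keeping the patch's
function name and fields (an untested illustration, not the final
implementation): restart and flush repeatedly, so that requests which
re-enqueued under the I/O limits are also drained before returning:

    void bdrv_drain_request(BlockDriverState *throttled_bs)
    {
        BlockDriverState *bs;
        bool busy;

        do {
            busy = false;
            QTAILQ_FOREACH(bs, &bdrv_states, list) {
                if (throttled_bs && throttled_bs != bs) {
                    continue;
                }
                qemu_co_queue_restart_all(&bs->throttled_reqs);
            }
            qemu_aio_flush();
            /* restarted requests may have been throttled again; re-check */
            QTAILQ_FOREACH(bs, &bdrv_states, list) {
                if (throttled_bs && throttled_bs != bs) {
                    continue;
                }
                if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
                    busy = true;
                }
            }
        } while (busy);
    }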
Zhiyong Wu Feb. 24, 2012, 9:20 a.m. UTC | #7
On Fri, Feb 24, 2012 at 4:49 PM, Stefan Hajnoczi <stefanha@gmail.com> wrote:
> On Mon, Feb 20, 2012 at 12:50:30PM +0800, zwu.kernel@gmail.com wrote:
>> From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
>>
>> [...]
>> +void bdrv_drain_request(BlockDriverState *throttled_bs)
>>  {
>>      BlockDriverState *bs;
>>
>> +    QTAILQ_FOREACH(bs, &bdrv_states, list) {
>> +        if (throttled_bs && throttled_bs != bs) {
>> +            continue;
>> +        }
>> +        qemu_co_queue_restart_all(&bs->throttled_reqs);
>> +    }
>> +
>>      qemu_aio_flush();
>
> Since I/O throttling is still enabled, the restarted requests could
> enqueue again if they exceed the limit.  We could still hit the assert.
Good catch.
>
> If the semantics of bdrv_drain_request() are that no requests are
> pending when it returns then we need a loop here.
OK.
>
> BTW bdrv_drain() would be a shorter name for this function.
OK.
>
> Stefan
Zhiyong Wu Feb. 24, 2012, 9:25 a.m. UTC | #8
On Fri, Feb 24, 2012 at 4:49 PM, Stefan Hajnoczi <stefanha@gmail.com> wrote:
> On Mon, Feb 20, 2012 at 12:50:30PM +0800, zwu.kernel@gmail.com wrote:
>> From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
>>
>> [...]
>> +void bdrv_drain_request(BlockDriverState *throttled_bs)
>>  {
>>      BlockDriverState *bs;
>>
>> +    QTAILQ_FOREACH(bs, &bdrv_states, list) {
>> +        if (throttled_bs && throttled_bs != bs) {
>> +            continue;
>> +        }
>> +        qemu_co_queue_restart_all(&bs->throttled_reqs);
>> +    }
>> +
>>      qemu_aio_flush();
>
> Since I/O throttling is still enabled, the restarted requests could
> enqueue again if they exceed the limit.  We could still hit the assert.
>
> If the semantics of bdrv_drain_request() are that no requests are
> pending when it returns then we need a loop here.
>
> BTW bdrv_drain() would be a shorter name for this function.
I have some concerns about this function's semantics. Is it meant to
drain the requests of one single disk, or of all disks of a guest?
Which is more suitable?

>
> Stefan
Stefan Hajnoczi Feb. 24, 2012, 11:18 a.m. UTC | #9
On Fri, Feb 24, 2012 at 05:25:08PM +0800, Zhi Yong Wu wrote:
> On Fri, Feb 24, 2012 at 4:49 PM, Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > On Mon, Feb 20, 2012 at 12:50:30PM +0800, zwu.kernel@gmail.com wrote:
> >> From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
> >>
> >> [...]
> >> +void bdrv_drain_request(BlockDriverState *throttled_bs)
> >>  {
> >>      BlockDriverState *bs;
> >>
> >> +    QTAILQ_FOREACH(bs, &bdrv_states, list) {
> >> +        if (throttled_bs && throttled_bs != bs) {
> >> +            continue;
> >> +        }
> >> +        qemu_co_queue_restart_all(&bs->throttled_reqs);
> >> +    }
> >> +
> >>      qemu_aio_flush();
> >
> > Since I/O throttling is still enabled, the restarted requests could
> > enqueue again if they exceed the limit.  We could still hit the assert.
> >
> > If the semantics of bdrv_drain_request() are that no requests are
> > pending when it returns then we need a loop here.
> >
> > BTW bdrv_drain() would be a shorter name for this function.
> I have some concerns about this function's semantics. Is it meant to
> drain the requests of one single disk, or of all disks of a guest?
> Which is more suitable?

Both are useful:

/**
 * Complete all pending requests for a block device
 */
void bdrv_drain(BlockDriverState *bs);

/**
 * Complete pending requests for all block devices
 */
void bdrv_drain_all(void);

Stefan
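
Under those semantics, one way the pair could be structured (a sketch
only; bdrv_drain() is the proposed name, not existing code, and the
loop guards against requests re-enqueueing under throttling):

    /* Complete all pending requests for a block device (sketch). */
    void bdrv_drain(BlockDriverState *bs)
    {
        do {
            qemu_co_queue_restart_all(&bs->throttled_reqs);
            qemu_aio_flush();
        } while (!qemu_co_queue_empty(&bs->throttled_reqs));
    }

    /* Complete pending requests for all block devices (sketch). */
    void bdrv_drain_all(void)
    {
        BlockDriverState *bs;

        QTAILQ_FOREACH(bs, &bdrv_states, list) {
            bdrv_drain(bs);
        }
    }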
Paolo Bonzini Feb. 24, 2012, 12:54 p.m. UTC | #10
On 02/24/2012 09:49 AM, Stefan Hajnoczi wrote:
>> > +void bdrv_drain_request(BlockDriverState *throttled_bs)
>> >  {
>> >      BlockDriverState *bs;
>> >  
>> > +    QTAILQ_FOREACH(bs, &bdrv_states, list) {
>> > +        if (throttled_bs && throttled_bs != bs) {
>> > +            continue;
>> > +        }
>> > +        qemu_co_queue_restart_all(&bs->throttled_reqs);
>> > +    }
>> > +
>> >      qemu_aio_flush();
> Since I/O throttling is still enabled, the restarted requests could
> enqueue again if they exceed the limit.  We could still hit the assert.

Yes, and qemu_aio_flush() rightly doesn't know that there are pending
requests, because these are "not there" until the timer fires.

Perhaps the bug is simply that the assert is bogus.  If there are
throttled requests, we need to invoke qemu_aio_flush() again.  The
problem with this is that timers don't run inside qemu_aio_wait() so we
have to restart them all and we're effectively busy waiting.  Not a huge
problem since qemu_aio_flush() is rare, but not too nice either.

Paolo
Zhiyong Wu Feb. 24, 2012, 1:07 p.m. UTC | #11
On Fri, Feb 24, 2012 at 7:18 PM, Stefan Hajnoczi
<stefanha@linux.vnet.ibm.com> wrote:
> On Fri, Feb 24, 2012 at 05:25:08PM +0800, Zhi Yong Wu wrote:
>> On Fri, Feb 24, 2012 at 4:49 PM, Stefan Hajnoczi <stefanha@gmail.com> wrote:
>> > On Mon, Feb 20, 2012 at 12:50:30PM +0800, zwu.kernel@gmail.com wrote:
>> >> From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
>> >>
>> >> [...]
>> >> +void bdrv_drain_request(BlockDriverState *throttled_bs)
>> >>  {
>> >>      BlockDriverState *bs;
>> >>
>> >> +    QTAILQ_FOREACH(bs, &bdrv_states, list) {
>> >> +        if (throttled_bs && throttled_bs != bs) {
>> >> +            continue;
>> >> +        }
>> >> +        qemu_co_queue_restart_all(&bs->throttled_reqs);
>> >> +    }
>> >> +
>> >>      qemu_aio_flush();
>> >
>> > Since I/O throttling is still enabled, the restarted requests could
>> > enqueue again if they exceed the limit.  We could still hit the assert.
>> >
>> > If the semantics of bdrv_drain_request() are that no requests are
>> > pending when it returns then we need a loop here.
>> >
>> > BTW bdrv_drain() would be a shorter name for this function.
>> I have some concerns about this function's semantics. Is it meant to
>> drain the requests of one single disk, or of all disks of a guest?
>> Which is more suitable?
>
> Both are useful:
>
> /**
>  * Complete all pending requests for a block device
>  */
> void bdrv_drain(BlockDriverState *bs);
>
> /**
>  * Complete pending requests for all block devices
>  */
> void bdrv_drain_all(void);
Great, thanks.
>
> Stefan
>

Patch

diff --git a/block.c b/block.c
index ae297bb..f78df78 100644
--- a/block.c
+++ b/block.c
@@ -853,25 +853,40 @@  void bdrv_close_all(void)
     }
 }
 
-/*
- * Wait for pending requests to complete across all BlockDriverStates
- *
- * This function does not flush data to disk, use bdrv_flush_all() for that
- * after calling this function.
- */
-void bdrv_drain_all(void)
+void bdrv_drain_request(BlockDriverState *throttled_bs)
 {
     BlockDriverState *bs;
 
+    QTAILQ_FOREACH(bs, &bdrv_states, list) {
+        if (throttled_bs && throttled_bs != bs) {
+            continue;
+        }
+        qemu_co_queue_restart_all(&bs->throttled_reqs);
+    }
+
     qemu_aio_flush();
 
     /* If requests are still pending there is a bug somewhere */
     QTAILQ_FOREACH(bs, &bdrv_states, list) {
         assert(QLIST_EMPTY(&bs->tracked_requests));
+        if (throttled_bs && throttled_bs != bs) {
+            continue;
+        }
         assert(qemu_co_queue_empty(&bs->throttled_reqs));
     }
 }
 
+/*
+ * Wait for pending requests to complete across all BlockDriverStates
+ *
+ * This function does not flush data to disk, use bdrv_flush_all() for that
+ * after calling this function.
+ */
+void bdrv_drain_all(void)
+{
+    bdrv_drain_request(NULL);
+}
+
 /* make a BlockDriverState anonymous by removing from bdrv_state list.
    Also, NULL terminate the device_name to prevent double remove */
 void bdrv_make_anon(BlockDriverState *bs)
diff --git a/block_int.h b/block_int.h
index 7946cf6..1311288 100644
--- a/block_int.h
+++ b/block_int.h
@@ -323,6 +323,7 @@  void qemu_aio_release(void *p);
 
 void bdrv_set_io_limits(BlockDriverState *bs,
                         BlockIOLimit *io_limits);
+void bdrv_drain_request(BlockDriverState *bs);
 
 #ifdef _WIN32
 int is_windows_drive(const char *filename);