Patchwork [v2,1/2] block: add the support to drain throttled requests

Submitter Zhiyong Wu
Date March 12, 2012, 6:29 a.m.
Message ID <1331533744-15802-1-git-send-email-zwu.kernel@gmail.com>
Permalink /patch/146040/
State New

Comments

Zhiyong Wu - March 12, 2012, 6:29 a.m.
From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
---
 block.c     |   21 +++++++++++++++++++++
 block_int.h |    1 +
 2 files changed, 22 insertions(+), 0 deletions(-)
Paolo Bonzini - March 12, 2012, 7:27 a.m.
On 12/03/2012 07:29, zwu.kernel@gmail.com wrote:
> From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
> 
> Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
> ---
>  block.c     |   21 +++++++++++++++++++++
>  block_int.h |    1 +
>  2 files changed, 22 insertions(+), 0 deletions(-)
> 
> diff --git a/block.c b/block.c
> index 52ffe14..0825168 100644
> --- a/block.c
> +++ b/block.c
> @@ -853,6 +853,21 @@ void bdrv_close_all(void)
>      }
>  }
>  
> +/**
> + * Complete all pending requests for a block device
> + */
> +void bdrv_drain(BlockDriverState *bs)
> +{
> +    do {
> +        qemu_co_queue_restart_all(&bs->throttled_reqs);
> +    } while (!qemu_co_queue_empty(&bs->throttled_reqs));
> +
> +    qemu_aio_flush();

This doesn't work; qemu_aio_flush can start new I/O.

Paolo
Zhiyong Wu - March 12, 2012, 8:42 a.m.
On Mon, Mar 12, 2012 at 3:27 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> On 12/03/2012 07:29, zwu.kernel@gmail.com wrote:
>> From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
>>
>> Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
>> ---
>>  block.c     |   21 +++++++++++++++++++++
>>  block_int.h |    1 +
>>  2 files changed, 22 insertions(+), 0 deletions(-)
>>
>> diff --git a/block.c b/block.c
>> index 52ffe14..0825168 100644
>> --- a/block.c
>> +++ b/block.c
>> @@ -853,6 +853,21 @@ void bdrv_close_all(void)
>>      }
>>  }
>>
>> +/**
>> + * Complete all pending requests for a block device
>> + */
>> +void bdrv_drain(BlockDriverState *bs)
>> +{
>> +    do {
>> +        qemu_co_queue_restart_all(&bs->throttled_reqs);
>> +    } while (!qemu_co_queue_empty(&bs->throttled_reqs));
>> +
>> +    qemu_aio_flush();
>
> This doesn't work, qemu_aio_flush can start new I/O.
Do you mean that it will start the next I/O via the current request's cb?
If not, where will it start new I/O?
>
> Paolo
>
>
Paolo Bonzini - March 12, 2012, 8:48 a.m.
On 12/03/2012 09:42, Zhi Yong Wu wrote:
>> >
>> > This doesn't work, qemu_aio_flush can start new I/O.
> Do you mean that it will start next I/O via the current request's cb?

Yes.

Paolo

Patch

diff --git a/block.c b/block.c
index 52ffe14..0825168 100644
--- a/block.c
+++ b/block.c
@@ -853,6 +853,21 @@  void bdrv_close_all(void)
     }
 }
 
+/**
+ * Complete all pending requests for a block device
+ */
+void bdrv_drain(BlockDriverState *bs)
+{
+    do {
+        qemu_co_queue_restart_all(&bs->throttled_reqs);
+    } while (!qemu_co_queue_empty(&bs->throttled_reqs));
+
+    qemu_aio_flush();
+
+    assert(QLIST_EMPTY(&bs->tracked_requests));
+    assert(qemu_co_queue_empty(&bs->throttled_reqs));
+}
+
 /*
  * Wait for pending requests to complete across all BlockDriverStates
  *
@@ -863,6 +878,12 @@  void bdrv_drain_all(void)
 {
     BlockDriverState *bs;
 
+    QTAILQ_FOREACH(bs, &bdrv_states, list) {
+        do {
+            qemu_co_queue_restart_all(&bs->throttled_reqs);
+        } while (!qemu_co_queue_empty(&bs->throttled_reqs));
+    }
+
     qemu_aio_flush();
 
     /* If requests are still pending there is a bug somewhere */
diff --git a/block_int.h b/block_int.h
index b460c36..c624399 100644
--- a/block_int.h
+++ b/block_int.h
@@ -318,6 +318,7 @@  void qemu_aio_release(void *p);
 
 void bdrv_set_io_limits(BlockDriverState *bs,
                         BlockIOLimit *io_limits);
+void bdrv_drain(BlockDriverState *bs);
 
 #ifdef _WIN32
 int is_windows_drive(const char *filename);