Patchwork block: fix block I/O throttling with IDE

Submitter Zhiyong Wu
Date Feb. 19, 2012, 3:16 p.m.
Message ID <1329664586-13923-1-git-send-email-zwu.kernel@gmail.com>
Permalink /patch/142060/
State New
Headers show

Comments

Zhiyong Wu - Feb. 19, 2012, 3:16 p.m.
From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>

The patch is based on the latest QEMU upstream. If you backport the patchset to QEMU 1.0, please note the difference.

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
---
 block.c |    6 ++++++
 1 files changed, 6 insertions(+), 0 deletions(-)
Andreas Färber - Feb. 19, 2012, 3:40 p.m.
Am 19.02.2012 16:16, schrieb zwu.kernel@gmail.com:
> From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
> 
> The patch is based on the latest QEMU upstream. If you backport the patchset to QEMU 1.0, please note the difference.

"Fix" is never a good patch description. ;) In place of the above
sentence, which does not tell us what the actual difference is, please
describe in which way it was broken before and what the patch changes.

And if this is to be fixed on stable-1.0, please cc qemu-stable.

Andreas

> 
> Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
> ---
>  block.c |    6 ++++++
>  1 files changed, 6 insertions(+), 0 deletions(-)
> 
> diff --git a/block.c b/block.c
> index ae297bb..07cd143 100644
> --- a/block.c
> +++ b/block.c
> @@ -863,6 +863,12 @@ void bdrv_drain_all(void)
>  {
>      BlockDriverState *bs;
>  
> +    QTAILQ_FOREACH(bs, &bdrv_states, list) {
> +        if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
> +            qemu_co_queue_restart_all(&bs->throttled_reqs);
> +        }
> +    }
> +
>      qemu_aio_flush();
>  
>      /* If requests are still pending there is a bug somewhere */
Chris Webb - Feb. 19, 2012, 9:18 p.m.
zwu.kernel@gmail.com writes:

> The patch is based on the latest QEMU upstream. If you backport the
> patchset to QEMU 1.0, please note the difference.

I would indeed quite like to backport this to qemu 1.0! Am I right in
thinking the sanest way to do this is to apply 922453bca6a9 to bring all the
relevant qemu_aio_flush() calls through the same place before I apply your
patch?

Best wishes,

Chris.
Zhiyong Wu - Feb. 20, 2012, 4:27 a.m.
On Sun, Feb 19, 2012 at 11:40 PM, Andreas Färber <afaerber@suse.de> wrote:
> Am 19.02.2012 16:16, schrieb zwu.kernel@gmail.com:
>> From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
>>
>> The patch is based on the latest QEMU upstream. If you backport the patchset to QEMU 1.0, please note the difference.
>
> "Fix" is never a good patch description. ;) In place of the above
> sentence, which does not tell us what the actual difference is, please
> describe in which way it was broken before and what the patch changes.
Nice, thanks.
>
> And if this is to be fixed on stable-1.0, please cc qemu-stable.
This is not for stable-1.0.
>
> Andreas
>
>>
>> Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
>> ---
>>  block.c |    6 ++++++
>>  1 files changed, 6 insertions(+), 0 deletions(-)
>>
>> diff --git a/block.c b/block.c
>> index ae297bb..07cd143 100644
>> --- a/block.c
>> +++ b/block.c
>> @@ -863,6 +863,12 @@ void bdrv_drain_all(void)
>>  {
>>      BlockDriverState *bs;
>>
>> +    QTAILQ_FOREACH(bs, &bdrv_states, list) {
>> +        if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
>> +            qemu_co_queue_restart_all(&bs->throttled_reqs);
>> +        }
>> +    }
>> +
>>      qemu_aio_flush();
>>
>>      /* If requests are still pending there is a bug somewhere */
>
> --
> SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
> GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg
Zhiyong Wu - Feb. 20, 2012, 4:53 a.m.
On Mon, Feb 20, 2012 at 5:18 AM, Chris Webb <chris@arachsys.com> wrote:
> zwu.kernel@gmail.com writes:
>
>> The patch is based on the latest QEMU upstream. If you backport the
>> patchset to QEMU 1.0, please note the difference.
>
> I would indeed quite like to backport this to qemu 1.0! Am I right in
> thinking the sanest way to do this is to apply 922453bca6a9 to bring all the
> relevant qemu_aio_flush() calls through the same place before I apply your
> patch?
Sorry, I don't know how to help you with backporting it to qemu 1.0.
>
> Best wishes,
>
> Chris.
Chris Webb - Feb. 20, 2012, 10:02 a.m.
Zhi Yong Wu <zwu.kernel@gmail.com> writes:

> On Mon, Feb 20, 2012 at 5:18 AM, Chris Webb <chris@arachsys.com> wrote:
> > I would indeed quite like to backport this to qemu 1.0! Am I right in
> > thinking the sanest way to do this is to apply 922453bca6a9 to bring all the
> > relevant qemu_aio_flush() calls through the same place before I apply your
> > patch?
> Sorry, I don't know how to help you with backporting it to qemu 1.0.

For the list archives: the patches backport fine to qemu 1.0
on top of cherry-picked 922453bca6a9, dbffbdcfff69, e8ee5e4c476d.

Time for us to test them a bit...

Cheers,

Chris.
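For anyone replaying Chris's backport on a 1.0 tree, the sequence he describes amounts to the commands below. The commit hashes come from the thread; the branch name and the mbox file name are illustrative, not part of the original discussion:

```shell
# Start from a qemu 1.0 checkout (branch name is illustrative).
git checkout -b throttle-backport v1.0

# First route the relevant qemu_aio_flush() calls through one place,
# then pick the supporting commits named in the thread.
git cherry-pick 922453bca6a9 dbffbdcfff69 e8ee5e4c476d

# Finally apply this patch, saved from the list as an mbox
# (file name is illustrative).
git am throttling-fix.mbox
```

Cherry-picking the prerequisites first is what makes the hunk in block.c apply cleanly, since it targets bdrv_drain_all() rather than the scattered qemu_aio_flush() call sites that existed before 922453bca6a9.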

Patch

diff --git a/block.c b/block.c
index ae297bb..07cd143 100644
--- a/block.c
+++ b/block.c
@@ -863,6 +863,12 @@ void bdrv_drain_all(void)
 {
     BlockDriverState *bs;
 
+    QTAILQ_FOREACH(bs, &bdrv_states, list) {
+        if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
+            qemu_co_queue_restart_all(&bs->throttled_reqs);
+        }
+    }
+
     qemu_aio_flush();
 
     /* If requests are still pending there is a bug somewhere */