
block: fix bdrv_exceed_iops_limits wait computation

Message ID 20130320132924.GB14441@stefanha-thinkpad.muc.redhat.com
State New

Commit Message

Stefan Hajnoczi March 20, 2013, 1:29 p.m. UTC
On Wed, Mar 20, 2013 at 10:12:14AM +0100, Benoît Canet wrote:
> This patch fixes an I/O throttling behavior triggered by limiting at 150 iops
> and running a load of 50 threads doing random pwrites on a raw virtio device.
> 
> After a few moments the iops count starts to oscillate steadily between 0 and
> a value higher than the limit.
> 
> As the load keeps running, the period and the amplitude of the oscillation
> increase until I/O bursts reaching the physical storage's max iops count
> occur.
> 
> These bursts are followed by guest I/O starvation.
> 
> As the period of this oscillation cycle keeps increasing, the cause must be a
> computation error that slowly increases the wait time.
> 
> This patch makes the wait time a bit smaller, and tests confirm that it
> solves the oscillating behavior.
> 
> Signed-off-by: Benoit Canet <benoit@irqsave.net>
> ---
>  block.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/block.c b/block.c
> index 0a062c9..455d8b0 100644
> --- a/block.c
> +++ b/block.c
> @@ -3739,7 +3739,7 @@ static bool bdrv_exceed_iops_limits(BlockDriverState *bs, bool is_write,
>      }
>  
>      /* Calc approx time to dispatch */
> -    wait_time = (ios_base + 1) / iops_limit;
> +    wait_time = ios_base / iops_limit;
>      if (wait_time > elapsed_time) {
>          wait_time = wait_time - elapsed_time;
>      } else {
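
To put a number on the one-character change: the old formula charges one extra
op-interval, 1/iops_limit seconds, on every recomputation; at the 150 iops
limit from the test case that is about 6.7 ms per computation.  A minimal
standalone sketch of the difference (not QEMU code; the ios_base value is made
up):

#include <stdio.h>

int main(void)
{
    double iops_limit = 150.0;  /* limit from the test case */
    double ios_base = 200.0;    /* hypothetical ops completed this slice */

    /* Old formula: approximate dispatch time of the (ios_base + 1)-th op. */
    double old_wait = (ios_base + 1) / iops_limit;
    /* Patched formula: drops the extra op-interval. */
    double new_wait = ios_base / iops_limit;

    printf("old: %.4f s  new: %.4f s  delta: %.1f ms\n",
           old_wait, new_wait, (old_wait - new_wait) * 1000.0);
    /* Prints: old: 1.3400 s  new: 1.3333 s  delta: 6.7 ms */
    return 0;
}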

I tried reproducing this without your test case:

dd if=/dev/vdb of=/dev/null bs=4096 iflag=direct

I've pasted printf output below, which reveals that the wait time increases
monotonically!  In other words, dd is slowed down more and more as it runs:

bs 0x7f56fe187a30 co 0x7f56fe211cb0 throttled for 1426 ms
bs 0x7f56fe187a30 co 0x7f56fe211cb0 woke up from throttled_reqs after sleeping
bs 0x7f56fe187a30 co 0x7f56fe211cb0 throttled for 1431 ms
bs 0x7f56fe187a30 co 0x7f56fe211cb0 woke up from throttled_reqs after sleeping
bs 0x7f56fe187a30 co 0x7f56fe211cb0 throttled for 1437 ms
bs 0x7f56fe187a30 co 0x7f56fe211cb0 woke up from throttled_reqs after sleeping
...
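
Note that the step between successive waits (1426 -> 1431 -> 1437 ms) is a few
milliseconds per iteration, on the order of one op-interval (1/iops_limit),
which would fit the extra op charged by the old formula on each recomputation.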

Killing dd and starting it again resets the accumulated delay (probably because
we end the slice and state is cleared).

This suggests workloads that are constantly at the I/O limit will experience
creeping delay or the oscillations you found.

After applying your patch I observed the opposite behavior: wait time decreases
until it resets itself.  Perhaps we're waiting less and less until we just
finish the slice and all values reset:

bs 0x7f2cd2c52a30 co 0x7f2cd2ce3910 throttled for 496 ms
bs 0x7f2cd2c52a30 co 0x7f2cd2ce3910 woke up from throttled_reqs after sleeping
bs 0x7f2cd2c52a30 co 0x7f2cd2ce3910 throttled for 489 ms
bs 0x7f2cd2c52a30 co 0x7f2cd2ce3910 woke up from throttled_reqs after sleeping
bs 0x7f2cd2c52a30 co 0x7f2cd2ce3910 throttled for 484 ms
bs 0x7f2cd2c52a30 co 0x7f2cd2ce3910 woke up from throttled_reqs after sleeping
bs 0x7f2cd2c52a30 co 0x7f2cd2ce3910 throttled for 480 ms
bs 0x7f2cd2c52a30 co 0x7f2cd2ce3910 woke up from throttled_reqs after sleeping
bs 0x7f2cd2c52a30 co 0x7f2cd2ce3910 throttled for 474 ms
...
bs 0x7f2cd2c52a30 co 0x7f2cd2ce3910 throttled for 300 ms
bs 0x7f2cd2c52a30 co 0x7f2cd2ce3910 woke up from throttled_reqs after sleeping
bs 0x7f2cd2c52a30 co 0x7f2cd2ce3910 throttled for 299 ms
bs 0x7f2cd2c52a30 co 0x7f2cd2ce3910 woke up from throttled_reqs after sleeping
bs 0x7f2cd2c52a30 co 0x7f2cd2ce3910 throttled for 298 ms
bs 0x7f2cd2c52a30 co 0x7f2cd2ce3910 woke up from throttled_reqs after sleeping
bs 0x7f2cd2c52a30 co 0x7f2cd2ce3910 throttled for 494 ms
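
One possible reading is that the error flips sign: dropping the +1 shaves
roughly one op-interval off each recomputed wait, so the waits shrink by a few
milliseconds per iteration instead of growing, and the accumulated drift
clears when the slice ends (hence the jump from 298 ms back up to 494 ms
above).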

I'm not confident that I understand the effects of your patch.  Do you have an
explanation for these results?

More digging will probably be necessary to solve the underlying problem here.

Patch

diff --git a/block.c b/block.c
index 0a062c9..7a8c9e6 100644
--- a/block.c
+++ b/block.c
@@ -175,7 +175,9 @@ static void bdrv_io_limits_intercept(BlockDriverState *bs,
     int64_t wait_time = -1;
 
     if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
+        fprintf(stderr, "bs %p co %p waiting for throttled_reqs\n", bs, qemu_coroutine_self());
         qemu_co_queue_wait(&bs->throttled_reqs);
+        fprintf(stderr, "bs %p co %p woke up from throttled_reqs\n", bs, qemu_coroutine_self());
     }
 
     /* In fact, we hope to keep each request's timing, in FIFO mode. The next
@@ -188,7 +190,9 @@ static void bdrv_io_limits_intercept(BlockDriverState *bs,
     while (bdrv_exceed_io_limits(bs, nb_sectors, is_write, &wait_time)) {
         qemu_mod_timer(bs->block_timer,
                        wait_time + qemu_get_clock_ns(vm_clock));
+        fprintf(stderr, "bs %p co %p throttled for %"PRId64" ms\n", bs, qemu_coroutine_self(), wait_time
         qemu_co_queue_wait_insert_head(&bs->throttled_reqs);
+        fprintf(stderr, "bs %p co %p woke up from throttled_reqs after sleeping\n", bs, qemu_coroutine_s
     }
 
     qemu_co_queue_next(&bs->throttled_reqs);
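
(Note: wait_time here is the offset handed to qemu_mod_timer against
qemu_get_clock_ns(vm_clock), so it is in nanoseconds; the throttle printf
divides by 1000000 to report milliseconds.)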