
iotests: Tweak 030 in order to trigger a race condition with parallel jobs

Message ID 20171207170834.15843-1-berto@igalia.com
State New

Commit Message

Alberto Garcia Dec. 7, 2017, 5:08 p.m. UTC
This patch tweaks TestParallelOps in iotest 030 so it allocates data
in smaller regions (256KB/512KB instead of 512KB/1MB) and the
block-stream job in test_stream_commit() only needs to copy data that
is at the very end of the image.

This way when the block-stream job is waken up it will finish right
away without any chance of being stopped by block_job_sleep_ns(). This
triggers the bug that was fixed by 3d5d319e1221082974711af1d09d82f07
and is therefore a more useful test case for parallel block jobs.

After this patch the aforementioned bug can also be reproduced with the
test_stream_parallel() test case.

Signed-off-by: Alberto Garcia <berto@igalia.com>
---
 tests/qemu-iotests/030 | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

Comments

Eric Blake Dec. 7, 2017, 7:16 p.m. UTC | #1
On 12/07/2017 11:08 AM, Alberto Garcia wrote:
> This patch tweaks TestParallelOps in iotest 030 so it allocates data
> in smaller regions (256KB/512KB instead of 512KB/1MB) and the
> block-stream job in test_stream_commit() only needs to copy data that
> is at the very end of the image.
> 
> This way when the block-stream job is waken up it will finish right

s/waken up/awakened/

> away without any chance of being stopped by block_job_sleep_ns(). This
> triggers the bug that was fixed by 3d5d319e1221082974711af1d09d82f07
> and is therefore a more useful test case for parallel block jobs.
> 
> After this patch the aforementioned bug can also be reproduced with the
> test_stream_parallel() test case.
> 
> Signed-off-by: Alberto Garcia <berto@igalia.com>
> ---
>  tests/qemu-iotests/030 | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
> 

>          # Put data into the images we are copying data from
>          for i in range(self.num_imgs / 2):
> -            img_index = i * 2 + 1
> -            # Alternate between 512k and 1M.
> +            img_index = self.num_imgs - i * 2 - 2
> +            # Alternate between 256KB and 512KB.
>              # This way jobs will not finish in the same order they were created
> -            num_kb = 512 + 512 * (i % 2)
> +            num_kb = 256 + 256 * (i % 2)
>              qemu_io('-f', iotests.imgfmt,
> -                    '-c', 'write -P %d %d %d' % (i, i*1024*1024, num_kb * 1024),
> +                    '-c', 'write -P 0xFF %dk %dk' % (i * 512, num_kb),

I guess changing from a variable to a fixed 0xff pattern doesn't make a
difference?
Alberto Garcia Dec. 7, 2017, 7:34 p.m. UTC | #2
On Thu 07 Dec 2017 08:16:41 PM CET, Eric Blake wrote:
>>              qemu_io('-f', iotests.imgfmt,
>> -                    '-c', 'write -P %d %d %d' % (i, i*1024*1024, num_kb * 1024),
>> +                    '-c', 'write -P 0xFF %dk %dk' % (i * 512, num_kb),
>
> I guess changing from a variable to a fixed 0xff pattern doesn't make
> a difference?

I noticed that with the previous code we would write zeroes to the first
image (i == 0), and with that I can't reproduce the bug. I assume that
block-stream doesn't copy the data in that case. Changing it to anything
!= 0 solves the problem.

And answering your question: it doesn't really matter if we write the
same value in all places, since we only check the output of
'qemu-io -c map'. Plus the areas don't even overlap.
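
For illustration, that check boils down to comparing 'qemu-io -c map'
output between two images; here is a minimal sketch in the style of the
assertions already present in 030 (the helper and its name are made up,
not part of the test):

    import iotests
    from iotests import qemu_io

    def maps_match(img_a, img_b):
        # Compare only the allocation maps reported by 'qemu-io -c map';
        # the byte pattern written into the images never enters this
        # check, which is why any non-zero pattern works equally well.
        return (qemu_io('-f', iotests.imgfmt, '-c', 'map', img_a) ==
                qemu_io('-f', iotests.imgfmt, '-c', 'map', img_b))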

Berto
John Snow Dec. 8, 2017, 7:13 p.m. UTC | #3
On 12/07/2017 02:34 PM, Alberto Garcia wrote:
> On Thu 07 Dec 2017 08:16:41 PM CET, Eric Blake wrote:
>>>              qemu_io('-f', iotests.imgfmt,
>>> -                    '-c', 'write -P %d %d %d' % (i, i*1024*1024, num_kb * 1024),
>>> +                    '-c', 'write -P 0xFF %dk %dk' % (i * 512, num_kb),
>>
>> I guess changing from a variable to a fixed 0xff pattern doesn't make
>> a difference?
> 
> I noticed that with the previous code we would write zeroes to the first
> image (i == 0), and with that I can't reproduce the bug. I assume that
> block-stream doesn't copy the data in that case. Changing it to anything
> != 0 solves the problem.
> 

I think I ran into a similar problem with an AHCI test once.

Reviewed-by: John Snow <jsnow@redhat.com>

> And answering your question: it doesn't really matter if we write the
> same value in all places, since we only check the output of
> 'qemu-io -c map'. Plus the areas don't even overlap.
> 
> Berto
>
Alberto Garcia Dec. 9, 2017, 11:08 a.m. UTC | #4
On Fri 08 Dec 2017 08:13:48 PM CET, John Snow <jsnow@redhat.com> wrote:

>>>>              qemu_io('-f', iotests.imgfmt,
>>>> -                    '-c', 'write -P %d %d %d' % (i, i*1024*1024, num_kb * 1024),
>>>> +                    '-c', 'write -P 0xFF %dk %dk' % (i * 512, num_kb),
>>>
>>> I guess changing from a variable to a fixed 0xff pattern doesn't
>>> make a difference?
>> 
>> I noticed that with the previous code we would write zeroes to the
>> first image (i == 0), and with that I can't reproduce the bug. I
>> assume that block-stream doesn't copy the data in that case. Changing
>> it to anything != 0 solves the problem.
>
> I think I ran into a similar problem with an AHCI test once.

It's this bit from bdrv_co_do_copy_on_readv() (which is the way
block-stream is implemented):

    if (drv->bdrv_co_pwrite_zeroes &&
        buffer_is_zero(bounce_buffer, pnum)) {
        /* FIXME: Should we (perhaps conditionally) be setting
         * BDRV_REQ_MAY_UNMAP, if it will allow for a sparser copy
         * that still correctly reads as zero? */
        ret = bdrv_co_do_pwrite_zeroes(bs, cluster_offset, pnum, 0);
    } else {
        /* This does not change the data on the disk, it is not
         * necessary to flush even in cache=writethrough mode.
         */
        ret = bdrv_driver_pwritev(bs, cluster_offset, pnum,
                                  &local_qiov, 0);
    }

Berto
Kevin Wolf Dec. 13, 2017, 11:31 a.m. UTC | #5
Am 07.12.2017 um 18:08 hat Alberto Garcia geschrieben:
> This patch tweaks TestParallelOps in iotest 030 so it allocates data
> in smaller regions (256KB/512KB instead of 512KB/1MB) and the
> block-stream job in test_stream_commit() only needs to copy data that
> is at the very end of the image.
> 
> This way when the block-stream job is waken up it will finish right
> away without any chance of being stopped by block_job_sleep_ns(). This
> triggers the bug that was fixed by 3d5d319e1221082974711af1d09d82f07
> and is therefore a more useful test case for parallel block jobs.

But if all jobs complete right away, doesn't this kill the parallelism
that the test case intended to test?

Maybe we need both cases?

Kevin
Alberto Garcia Dec. 13, 2017, 12:43 p.m. UTC | #6
On Wed 13 Dec 2017 12:31:20 PM CET, Kevin Wolf wrote:
> Am 07.12.2017 um 18:08 hat Alberto Garcia geschrieben:
>> This patch tweaks TestParallelOps in iotest 030 so it allocates data
>> in smaller regions (256KB/512KB instead of 512KB/1MB) and the
>> block-stream job in test_stream_commit() only needs to copy data that
>> is at the very end of the image.
>> 
>> This way when the block-stream job is waken up it will finish right
>> away without any chance of being stopped by block_job_sleep_ns(). This
>> triggers the bug that was fixed by 3d5d319e1221082974711af1d09d82f07
>> and is therefore a more useful test case for parallel block jobs.
>
> But if all jobs complete right away, doesn't this kill the parallelism
> that the test case intended to test?

Good question. There are two cases where parallel block jobs run:

- test_stream_parallel(): in this case we run four block-stream jobs, so
  although one finishes early the others will still run the stream_run()
  loop in parallel. So no need to worry here.

- test_stream_commit(): here there are only two jobs. block-stream starts
  first, then block-commit starts and calls bdrv_reopen(), and it's
  during that reopen that the first job finishes. So while the main
  purpose of this test case was to ensure that op blockers were working
  fine and that you can launch both jobs at the same time, it's true
  that with this patch you won't have both main iterations (stream_run()
  and mirror_run()) running at the same time.

> Maybe we need both cases?

I guess it's a good idea; it should be as simple as creating a subclass
of TestParallelOps with different sizes, and all test cases will be
inherited from the parent.
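
A minimal sketch of that idea (the subclass name is made up, and it
assumes the region sizes are first factored out into class attributes,
which the current code doesn't do, so the actual follow-up patch may
look different):

    # Re-run every inherited test_*() method with the original, larger
    # regions, so that one class keeps the fast-finishing jobs and the
    # other keeps stream_run() and mirror_run() overlapping.
    class TestParallelOpsBigRegions(TestParallelOps):
        region_kb = 512                                    # base region size
        image_len = TestParallelOps.num_ops * 1024 * 1024  # 1 MB per operation

unittest would then discover and run all the inherited test cases a
second time with the larger sizes.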

I'll send a new patch.

Berto

Patch

diff --git a/tests/qemu-iotests/030 b/tests/qemu-iotests/030
index 457984b8e9..230eb761dd 100755
--- a/tests/qemu-iotests/030
+++ b/tests/qemu-iotests/030
@@ -156,7 +156,7 @@  class TestSingleDrive(iotests.QMPTestCase):
 class TestParallelOps(iotests.QMPTestCase):
     num_ops = 4 # Number of parallel block-stream operations
     num_imgs = num_ops * 2 + 1
-    image_len = num_ops * 1024 * 1024
+    image_len = num_ops * 512 * 1024
     imgs = []
 
     def setUp(self):
@@ -177,12 +177,12 @@  class TestParallelOps(iotests.QMPTestCase):
 
         # Put data into the images we are copying data from
         for i in range(self.num_imgs / 2):
-            img_index = i * 2 + 1
-            # Alternate between 512k and 1M.
+            img_index = self.num_imgs - i * 2 - 2
+            # Alternate between 256KB and 512KB.
             # This way jobs will not finish in the same order they were created
-            num_kb = 512 + 512 * (i % 2)
+            num_kb = 256 + 256 * (i % 2)
             qemu_io('-f', iotests.imgfmt,
-                    '-c', 'write -P %d %d %d' % (i, i*1024*1024, num_kb * 1024),
+                    '-c', 'write -P 0xFF %dk %dk' % (i * 512, num_kb),
                     self.imgs[img_index])
 
         # Attach the drive to the VM
@@ -323,7 +323,7 @@  class TestParallelOps(iotests.QMPTestCase):
         self.assert_no_active_block_jobs()
 
         # Stream from node0 into node2
-        result = self.vm.qmp('block-stream', device='node2', job_id='node2')
+        result = self.vm.qmp('block-stream', device='node2', base_node='node0', job_id='node2')
         self.assert_qmp(result, 'return', {})
 
         # Commit from the active layer into node3