
[v6,3/3] block/stream: introduce a bottom node

Message ID 20190506153429.52737-4-vsementsov@virtuozzo.com
State New
Series block/stream: get rid of the base

Commit Message

Vladimir Sementsov-Ogievskiy May 6, 2019, 3:34 p.m. UTC
From: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>

The bottom node is the intermediate block device that has the base as its
backing image. It is used instead of the base node while a block stream
job is running, to avoid a dependency on the base, which may change due to
parallel jobs. Such a change can also be caused by a filter node that is
inserted between the base and the intermediate bottom node; this happens
when the base node is the top node of another commit or stream job.
After the introduction of the bottom node, its backing child (that is,
the base) is no longer frozen.

Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
---
 block/stream.c         | 49 +++++++++++++++++++++---------------------
 tests/qemu-iotests/245 |  4 ++--
 2 files changed, 27 insertions(+), 26 deletions(-)
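To make the topology concrete, here is a toy model in Python of the backing chain and of finding the bottom node. The `Node` class and `find_overlay()` helper are hypothetical stand-ins for QEMU's `BlockDriverState` and `bdrv_find_overlay()` (which the patch uses to compute the bottom node); this is a sketch of the idea, not the real block-layer API.

```python
# Toy backing chain: each node points to its backing image, like
# backing_bs() in the block layer. All names here are hypothetical.

class Node:
    def __init__(self, name, backing=None):
        self.name = name
        self.backing = backing  # analogue of backing_bs()

def find_overlay(top, base):
    """Return the node directly above base in top's backing chain,
    i.e. the 'bottom' node of a stream job over that chain."""
    it = top
    while it is not None:
        if it.backing is base:
            return it
        it = it.backing
    return None

# Chain: top -> mid -> bottom -> base
base = Node("base")
bottom = Node("bottom", base)
mid = Node("mid", bottom)
top = Node("top", mid)

assert find_overlay(top, base) is bottom

# If a parallel job later inserts a filter between bottom and base, the
# stream job's reference to bottom stays valid even though the node
# directly above base has changed:
filt = Node("filter", base)
bottom.backing = filt
assert find_overlay(top, base) is filt
```

This illustrates why the job stores the bottom node rather than the base: the bottom node belongs to the streamed region itself, while the node directly above base can change under the job's feet.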

Comments

Max Reitz May 28, 2019, 5:33 p.m. UTC | #1
On 06.05.19 17:34, Vladimir Sementsov-Ogievskiy wrote:
> [...]
> 
> diff --git a/block/stream.c b/block/stream.c
> index 65b13b27e0..fc97c89f81 100644
> --- a/block/stream.c
> +++ b/block/stream.c

[...]

> @@ -248,26 +250,25 @@ void stream_start(const char *job_id, BlockDriverState *bs,
>       * already have our own plans. Also don't allow resize as the image size is
>       * queried only at the job start and then cached. */
>      s = block_job_create(job_id, &stream_job_driver, NULL, bs,
> -                         BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
> -                         BLK_PERM_GRAPH_MOD,
> -                         BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
> -                         BLK_PERM_WRITE,
> +                         basic_flags | BLK_PERM_GRAPH_MOD,
> +                         basic_flags | BLK_PERM_WRITE,
>                           speed, creation_flags, NULL, NULL, errp);
>      if (!s) {
>          goto fail;
>      }
>  
> -    /* Block all intermediate nodes between bs and base, because they will
> -     * disappear from the chain after this operation. The streaming job reads
> -     * every block only once, assuming that it doesn't change, so block writes
> -     * and resizes. */
> -    for (iter = backing_bs(bs); iter && iter != base; iter = backing_bs(iter)) {
> -        block_job_add_bdrv(&s->common, "intermediate node", iter, 0,
> -                           BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED,
> -                           &error_abort);
> +    /*
> +     * Block all intermediate nodes between bs and bottom (inclusive), because
> +     * they will disappear from the chain after this operation. The streaming
> +     * job reads every block only once, assuming that it doesn't change, so
> +     * forbid writes and resizes.
> +     */
> +    for (iter = bs; iter != bottom; iter = backing_bs(iter)) {
> +        block_job_add_bdrv(&s->common, "intermediate node", backing_bs(iter),
> +                           0, basic_flags, &error_abort);

I don’t understand this change.  Isn’t it doing exactly the same as before?

(If so, I just find it harder to read because every iteration isn’t
about @iter but backing_bs(iter).)

The rest looks good to me.

Max

>      }
>  
> -    s->base = base;
> +    s->bottom = bottom;
>      s->backing_file_str = g_strdup(backing_file_str);
>      s->bs_read_only = bs_read_only;
>      s->chain_frozen = true;
Vladimir Sementsov-Ogievskiy May 29, 2019, 7:34 a.m. UTC | #2
28.05.2019 20:33, Max Reitz wrote:
> On 06.05.19 17:34, Vladimir Sementsov-Ogievskiy wrote:
>> [...]
> 
> [...]
> 
>> @@ -248,26 +250,25 @@ void stream_start(const char *job_id, BlockDriverState *bs,
>>        * already have our own plans. Also don't allow resize as the image size is
>>        * queried only at the job start and then cached. */
>>       s = block_job_create(job_id, &stream_job_driver, NULL, bs,
>> -                         BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
>> -                         BLK_PERM_GRAPH_MOD,
>> -                         BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
>> -                         BLK_PERM_WRITE,
>> +                         basic_flags | BLK_PERM_GRAPH_MOD,
>> +                         basic_flags | BLK_PERM_WRITE,
>>                            speed, creation_flags, NULL, NULL, errp);
>>       if (!s) {
>>           goto fail;
>>       }
>>   
>> -    /* Block all intermediate nodes between bs and base, because they will
>> -     * disappear from the chain after this operation. The streaming job reads
>> -     * every block only once, assuming that it doesn't change, so block writes
>> -     * and resizes. */
>> -    for (iter = backing_bs(bs); iter && iter != base; iter = backing_bs(iter)) {
>> -        block_job_add_bdrv(&s->common, "intermediate node", iter, 0,
>> -                           BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED,
>> -                           &error_abort);
>> +    /*
>> +     * Block all intermediate nodes between bs and bottom (inclusive), because
>> +     * they will disappear from the chain after this operation. The streaming
>> +     * job reads every block only once, assuming that it doesn't change, so
>> +     * forbid writes and resizes.
>> +     */
>> +    for (iter = bs; iter != bottom; iter = backing_bs(iter)) {
>> +        block_job_add_bdrv(&s->common, "intermediate node", backing_bs(iter),
>> +                           0, basic_flags, &error_abort);
> 
> I don’t understand this change.  Isn’t it doing exactly the same as before?
> 
> (If so, I just find it harder to read because every iteration isn’t
> about @iter but backing_bs(iter).)

Hm, it's the same, but without using base. We could keep the old loop if we
calculated base right before the loop (or at least not separated from it by
any yield point).
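The equivalence under discussion can be checked with a toy model (Python, with a hypothetical `Node`/`backing` standing in for `BlockDriverState`/`backing_bs()`): the old loop iterates from `backing_bs(bs)` down to base and acts on the iterator itself, while the new loop iterates from bs down to bottom and acts on `backing_bs(iter)`; both visit exactly the intermediate nodes from `backing_bs(bs)` through bottom inclusive.

```python
# Hypothetical toy chain: bs -> mid -> bottom -> base
class Node:
    def __init__(self, name, backing=None):
        self.name = name
        self.backing = backing

base = Node("base")
bottom = Node("bottom", base)
mid = Node("mid", bottom)
bs = Node("bs", mid)

def old_loop(bs, base):
    # for (iter = backing_bs(bs); iter && iter != base; iter = backing_bs(iter))
    acted, it = [], bs.backing
    while it is not None and it is not base:
        acted.append(it.name)
        it = it.backing
    return acted

def new_loop(bs, bottom):
    # for (iter = bs; iter != bottom; iter = backing_bs(iter))
    #     ... act on backing_bs(iter) ...
    acted, it = [], bs
    while it is not bottom:
        acted.append(it.backing.name)
        it = it.backing
    return acted

# Both formulations block the same intermediate nodes.
assert old_loop(bs, base) == new_loop(bs, bottom) == ["mid", "bottom"]
```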

> 
> The rest looks good to me.
> 
> Max
> 
>>       }
>>   
>> -    s->base = base;
>> +    s->bottom = bottom;
>>       s->backing_file_str = g_strdup(backing_file_str);
>>       s->bs_read_only = bs_read_only;
>>       s->chain_frozen = true;
>
Max Reitz May 29, 2019, 11:23 a.m. UTC | #3
On 29.05.19 09:34, Vladimir Sementsov-Ogievskiy wrote:
> 28.05.2019 20:33, Max Reitz wrote:
>> On 06.05.19 17:34, Vladimir Sementsov-Ogievskiy wrote:
>>> [...]
>>
>> [...]
>>
>>> @@ -248,26 +250,25 @@ void stream_start(const char *job_id, BlockDriverState *bs,
>>>        * already have our own plans. Also don't allow resize as the image size is
>>>        * queried only at the job start and then cached. */
>>>       s = block_job_create(job_id, &stream_job_driver, NULL, bs,
>>> -                         BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
>>> -                         BLK_PERM_GRAPH_MOD,
>>> -                         BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
>>> -                         BLK_PERM_WRITE,
>>> +                         basic_flags | BLK_PERM_GRAPH_MOD,
>>> +                         basic_flags | BLK_PERM_WRITE,
>>>                            speed, creation_flags, NULL, NULL, errp);
>>>       if (!s) {
>>>           goto fail;
>>>       }
>>>   
>>> -    /* Block all intermediate nodes between bs and base, because they will
>>> -     * disappear from the chain after this operation. The streaming job reads
>>> -     * every block only once, assuming that it doesn't change, so block writes
>>> -     * and resizes. */
>>> -    for (iter = backing_bs(bs); iter && iter != base; iter = backing_bs(iter)) {
>>> -        block_job_add_bdrv(&s->common, "intermediate node", iter, 0,
>>> -                           BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED,
>>> -                           &error_abort);
>>> +    /*
>>> +     * Block all intermediate nodes between bs and bottom (inclusive), because
>>> +     * they will disappear from the chain after this operation. The streaming
>>> +     * job reads every block only once, assuming that it doesn't change, so
>>> +     * forbid writes and resizes.
>>> +     */
>>> +    for (iter = bs; iter != bottom; iter = backing_bs(iter)) {
>>> +        block_job_add_bdrv(&s->common, "intermediate node", backing_bs(iter),
>>> +                           0, basic_flags, &error_abort);
>>
>> I don’t understand this change.  Isn’t it doing exactly the same as before?
>>
>> (If so, I just find it harder to read because every iteration isn’t
>> about @iter but backing_bs(iter).)
> 
> Hm, it's the same, but without using base. We could keep the old loop if we
> calculated base right before the loop (or at least not separated from it by
> any yield point).

But we are still in stream_start() here.  base cannot have changed yet,
can it?

(I don’t even think this is about yield points.  As long as
stream_start() doesn’t return, the QMP monitor won’t receive any new
commands.)

Max
Vladimir Sementsov-Ogievskiy May 29, 2019, 11:44 a.m. UTC | #4
29.05.2019 14:23, Max Reitz wrote:
> On 29.05.19 09:34, Vladimir Sementsov-Ogievskiy wrote:
>> 28.05.2019 20:33, Max Reitz wrote:
>>> On 06.05.19 17:34, Vladimir Sementsov-Ogievskiy wrote:
>>>> [...]
>>>
>>> [...]
>>>
>>>> @@ -248,26 +250,25 @@ void stream_start(const char *job_id, BlockDriverState *bs,
>>>>         * already have our own plans. Also don't allow resize as the image size is
>>>>         * queried only at the job start and then cached. */
>>>>        s = block_job_create(job_id, &stream_job_driver, NULL, bs,
>>>> -                         BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
>>>> -                         BLK_PERM_GRAPH_MOD,
>>>> -                         BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
>>>> -                         BLK_PERM_WRITE,
>>>> +                         basic_flags | BLK_PERM_GRAPH_MOD,
>>>> +                         basic_flags | BLK_PERM_WRITE,
>>>>                             speed, creation_flags, NULL, NULL, errp);
>>>>        if (!s) {
>>>>            goto fail;
>>>>        }
>>>>    
>>>> -    /* Block all intermediate nodes between bs and base, because they will
>>>> -     * disappear from the chain after this operation. The streaming job reads
>>>> -     * every block only once, assuming that it doesn't change, so block writes
>>>> -     * and resizes. */
>>>> -    for (iter = backing_bs(bs); iter && iter != base; iter = backing_bs(iter)) {
>>>> -        block_job_add_bdrv(&s->common, "intermediate node", iter, 0,
>>>> -                           BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED,
>>>> -                           &error_abort);
>>>> +    /*
>>>> +     * Block all intermediate nodes between bs and bottom (inclusive), because
>>>> +     * they will disappear from the chain after this operation. The streaming
>>>> +     * job reads every block only once, assuming that it doesn't change, so
>>>> +     * forbid writes and resizes.
>>>> +     */
>>>> +    for (iter = bs; iter != bottom; iter = backing_bs(iter)) {
>>>> +        block_job_add_bdrv(&s->common, "intermediate node", backing_bs(iter),
>>>> +                           0, basic_flags, &error_abort);
>>>
>>> I don’t understand this change.  Isn’t it doing exactly the same as before?
>>>
>>> (If so, I just find it harder to read because every iteration isn’t
>>> about @iter but backing_bs(iter).)
>>
>> Hm, it's the same, but without using base. We could keep the old loop if we
>> calculated base right before the loop (or at least not separated from it by
>> any yield point).
> 
> But we are still in stream_start() here.  base cannot have changed yet,
> can it?
> 
> (I don’t even think this is about yield points.  As long as
> stream_start() doesn’t return, the QMP monitor won’t receive any new
> commands.)
> 

But the block graph may be modified not only from QMP but from block jobs
too. If base is another filter, it may be dropped at any time.
Max Reitz May 29, 2019, 11:53 a.m. UTC | #5
On 29.05.19 13:44, Vladimir Sementsov-Ogievskiy wrote:
> 29.05.2019 14:23, Max Reitz wrote:
>> On 29.05.19 09:34, Vladimir Sementsov-Ogievskiy wrote:
>>> 28.05.2019 20:33, Max Reitz wrote:
>>>> On 06.05.19 17:34, Vladimir Sementsov-Ogievskiy wrote:
>>>>> [...]
>>>>
>>>> [...]
>>>>
>>>>> @@ -248,26 +250,25 @@ void stream_start(const char *job_id, BlockDriverState *bs,
>>>>>         * already have our own plans. Also don't allow resize as the image size is
>>>>>         * queried only at the job start and then cached. */
>>>>>        s = block_job_create(job_id, &stream_job_driver, NULL, bs,
>>>>> -                         BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
>>>>> -                         BLK_PERM_GRAPH_MOD,
>>>>> -                         BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
>>>>> -                         BLK_PERM_WRITE,
>>>>> +                         basic_flags | BLK_PERM_GRAPH_MOD,
>>>>> +                         basic_flags | BLK_PERM_WRITE,
>>>>>                             speed, creation_flags, NULL, NULL, errp);
>>>>>        if (!s) {
>>>>>            goto fail;
>>>>>        }
>>>>>    
>>>>> -    /* Block all intermediate nodes between bs and base, because they will
>>>>> -     * disappear from the chain after this operation. The streaming job reads
>>>>> -     * every block only once, assuming that it doesn't change, so block writes
>>>>> -     * and resizes. */
>>>>> -    for (iter = backing_bs(bs); iter && iter != base; iter = backing_bs(iter)) {
>>>>> -        block_job_add_bdrv(&s->common, "intermediate node", iter, 0,
>>>>> -                           BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED,
>>>>> -                           &error_abort);
>>>>> +    /*
>>>>> +     * Block all intermediate nodes between bs and bottom (inclusive), because
>>>>> +     * they will disappear from the chain after this operation. The streaming
>>>>> +     * job reads every block only once, assuming that it doesn't change, so
>>>>> +     * forbid writes and resizes.
>>>>> +     */
>>>>> +    for (iter = bs; iter != bottom; iter = backing_bs(iter)) {
>>>>> +        block_job_add_bdrv(&s->common, "intermediate node", backing_bs(iter),
>>>>> +                           0, basic_flags, &error_abort);
>>>>
>>>> I don’t understand this change.  Isn’t it doing exactly the same as before?
>>>>
>>>> (If so, I just find it harder to read because every iteration isn’t
>>>> about @iter but backing_bs(iter).)
>>>
>>> Hm, it's the same, but without using base. We could keep the old loop if we
>>> calculated base right before the loop (or at least not separated from it by
>>> any yield point).
>>
>> But we are still in stream_start() here.  base cannot have changed yet,
>> can it?
>>
>> (I don’t even think this is about yield points.  As long as
>> stream_start() doesn’t return, the QMP monitor won’t receive any new
>> commands.)
>>
> 
> But the block graph may be modified not only from QMP but from block jobs
> too. If base is another filter, it may be dropped at any time.

Ah, yes, that’s true.  And I suppose that can happen in
bdrv_reopen_set_read_only().  Hm.

OK, I still don’t like the loop as it is currently written.  Maybe I’d
like it better with s/iter/parent_bs/?

Well, or what you proposed would work, too, i.e. base = backing_bs(bottom)
just before the loop, with a comment that explains why we need that.
That’s probably better.

Max
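The variant Max suggests can be sketched with the same kind of toy model (hypothetical names, not the real block-layer API): derive base from bottom immediately before the loop, so the loop body stays in terms of the iterator itself and base cannot go stale, because no yield point separates its computation from its use.

```python
# Hypothetical toy chain: bs -> mid -> bottom -> base
class Node:
    def __init__(self, name, backing=None):
        self.name = name
        self.backing = backing

base = Node("base")
bottom = Node("bottom", base)
mid = Node("mid", bottom)
bs = Node("bs", mid)

def block_intermediates(bs, bottom):
    # base = backing_bs(bottom), computed right before the loop with no
    # yield point in between, so it cannot change under our feet here.
    local_base = bottom.backing
    acted, it = [], bs.backing
    while it is not local_base:
        acted.append(it.name)   # the loop acts on it directly, as before
        it = it.backing
    return acted

assert block_intermediates(bs, bottom) == ["mid", "bottom"]
```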

Patch

diff --git a/block/stream.c b/block/stream.c
index 65b13b27e0..fc97c89f81 100644
--- a/block/stream.c
+++ b/block/stream.c
@@ -31,7 +31,7 @@  enum {
 
 typedef struct StreamBlockJob {
     BlockJob common;
-    BlockDriverState *base;
+    BlockDriverState *bottom;
     BlockdevOnError on_error;
     char *backing_file_str;
     bool bs_read_only;
@@ -54,7 +54,7 @@  static void stream_abort(Job *job)
 
     if (s->chain_frozen) {
         BlockJob *bjob = &s->common;
-        bdrv_unfreeze_backing_chain(blk_bs(bjob->blk), s->base);
+        bdrv_unfreeze_backing_chain(blk_bs(bjob->blk), s->bottom);
     }
 }
 
@@ -63,11 +63,11 @@  static int stream_prepare(Job *job)
     StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
     BlockJob *bjob = &s->common;
     BlockDriverState *bs = blk_bs(bjob->blk);
-    BlockDriverState *base = s->base;
+    BlockDriverState *base = backing_bs(s->bottom);
     Error *local_err = NULL;
     int ret = 0;
 
-    bdrv_unfreeze_backing_chain(bs, base);
+    bdrv_unfreeze_backing_chain(bs, s->bottom);
     s->chain_frozen = false;
 
     if (bs->backing) {
@@ -110,7 +110,7 @@  static int coroutine_fn stream_run(Job *job, Error **errp)
     StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
     BlockBackend *blk = s->common.blk;
     BlockDriverState *bs = blk_bs(blk);
-    BlockDriverState *base = s->base;
+    bool enable_cor = !backing_bs(s->bottom);
     int64_t len;
     int64_t offset = 0;
     uint64_t delay_ns = 0;
@@ -119,7 +119,8 @@  static int coroutine_fn stream_run(Job *job, Error **errp)
     int64_t n = 0; /* bytes */
     void *buf;
 
-    if (!bs->backing) {
+    if (bs == s->bottom) {
+        /* Nothing to stream */
         return 0;
     }
 
@@ -136,7 +137,7 @@  static int coroutine_fn stream_run(Job *job, Error **errp)
      * backing chain since the copy-on-read operation does not take base into
      * account.
      */
-    if (!base) {
+    if (enable_cor) {
         bdrv_enable_copy_on_read(bs);
     }
 
@@ -159,9 +160,8 @@  static int coroutine_fn stream_run(Job *job, Error **errp)
         } else if (ret >= 0) {
             /* Copy if allocated in the intermediate images.  Limit to the
              * known-unallocated area [offset, offset+n*BDRV_SECTOR_SIZE).  */
-            ret = bdrv_is_allocated_above(backing_bs(bs), base, false,
+            ret = bdrv_is_allocated_above(backing_bs(bs), s->bottom, true,
                                           offset, n, &n);
-
             /* Finish early if end of backing file has been reached */
             if (ret == 0 && n == 0) {
                 n = len - offset;
@@ -198,7 +198,7 @@  static int coroutine_fn stream_run(Job *job, Error **errp)
         }
     }
 
-    if (!base) {
+    if (enable_cor) {
         bdrv_disable_copy_on_read(bs);
     }
 
@@ -230,8 +230,10 @@  void stream_start(const char *job_id, BlockDriverState *bs,
     StreamBlockJob *s;
     BlockDriverState *iter;
     bool bs_read_only;
+    int basic_flags = BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED;
+    BlockDriverState *bottom = bdrv_find_overlay(bs, base);
 
-    if (bdrv_freeze_backing_chain(bs, base, errp) < 0) {
+    if (bdrv_freeze_backing_chain(bs, bottom, errp) < 0) {
         return;
     }
 
@@ -248,26 +250,25 @@  void stream_start(const char *job_id, BlockDriverState *bs,
      * already have our own plans. Also don't allow resize as the image size is
      * queried only at the job start and then cached. */
     s = block_job_create(job_id, &stream_job_driver, NULL, bs,
-                         BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
-                         BLK_PERM_GRAPH_MOD,
-                         BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
-                         BLK_PERM_WRITE,
+                         basic_flags | BLK_PERM_GRAPH_MOD,
+                         basic_flags | BLK_PERM_WRITE,
                          speed, creation_flags, NULL, NULL, errp);
     if (!s) {
         goto fail;
     }
 
-    /* Block all intermediate nodes between bs and base, because they will
-     * disappear from the chain after this operation. The streaming job reads
-     * every block only once, assuming that it doesn't change, so block writes
-     * and resizes. */
-    for (iter = backing_bs(bs); iter && iter != base; iter = backing_bs(iter)) {
-        block_job_add_bdrv(&s->common, "intermediate node", iter, 0,
-                           BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED,
-                           &error_abort);
+    /*
+     * Block all intermediate nodes between bs and bottom (inclusive), because
+     * they will disappear from the chain after this operation. The streaming
+     * job reads every block only once, assuming that it doesn't change, so
+     * forbid writes and resizes.
+     */
+    for (iter = bs; iter != bottom; iter = backing_bs(iter)) {
+        block_job_add_bdrv(&s->common, "intermediate node", backing_bs(iter),
+                           0, basic_flags, &error_abort);
     }
 
-    s->base = base;
+    s->bottom = bottom;
     s->backing_file_str = g_strdup(backing_file_str);
     s->bs_read_only = bs_read_only;
     s->chain_frozen = true;
diff --git a/tests/qemu-iotests/245 b/tests/qemu-iotests/245
index a04c6235c1..006f9f72f0 100644
--- a/tests/qemu-iotests/245
+++ b/tests/qemu-iotests/245
@@ -865,9 +865,9 @@  class TestBlockdevReopen(iotests.QMPTestCase):
                              device = 'hd0', base_node = 'hd2', speed = 512 * 1024)
         self.assert_qmp(result, 'return', {})
 
-        # We can't remove hd2 while the stream job is ongoing
+        # We can remove hd2 while the stream job is ongoing
         opts['backing']['backing'] = None
-        self.reopen(opts, {}, "Cannot change 'backing' link from 'hd1' to 'hd2'")
+        self.reopen(opts, {})
 
         # We can't remove hd1 while the stream job is ongoing
         opts['backing'] = None