Patchwork [1/2] qcow2: Undo leaked allocations in co_writev

Submitter Max Reitz
Date Oct. 10, 2013, 8:52 a.m.
Message ID <1381395144-4449-2-git-send-email-mreitz@redhat.com>
Permalink /patch/282176/
State New

Comments

Max Reitz - Oct. 10, 2013, 8:52 a.m.
If the write request spans more than one L2 table,
qcow2_alloc_cluster_offset cannot handle the required allocations
atomically. This results in leaks if it allocated new clusters in any
but the last L2 table touched and an error occurs in qcow2_co_writev
before having established the L2 link. These non-atomic allocations
were, however, indeed successful and are therefore given to the caller
in the L2Meta list.

If an error occurs in qcow2_co_writev and the L2Meta list is unwound,
all its remaining entries are clusters whose L2 links were not yet
established. Thus, all allocations in that list should be undone.

Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 block/qcow2.c | 7 +++++++
 1 file changed, 7 insertions(+)
Kevin Wolf - Oct. 10, 2013, 12:26 p.m.
Am 10.10.2013 um 10:52 hat Max Reitz geschrieben:
> If the write request spans more than one L2 table,
> qcow2_alloc_cluster_offset cannot handle the required allocations
> atomically. This results in leaks if it allocated new clusters in any
> but the last L2 table touched and an error occurs in qcow2_co_writev
> before having established the L2 link. These non-atomic allocations
> were, however, indeed successful and are therefore given to the caller
> in the L2Meta list.
> 
> If an error occurs in qcow2_co_writev and the L2Meta list is unwound,
> all its remaining entries are clusters whose L2 links were not yet
> established. Thus, all allocations in that list should be undone.
> 
> Signed-off-by: Max Reitz <mreitz@redhat.com>
> ---
>  block/qcow2.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/block/qcow2.c b/block/qcow2.c
> index b2489fb..6bedd5d 100644
> --- a/block/qcow2.c
> +++ b/block/qcow2.c
> @@ -1017,6 +1017,13 @@ fail:
>      while (l2meta != NULL) {
>          QCowL2Meta *next;
>  
> +        /* Undo all leaked allocations */
> +        if (l2meta->nb_clusters != 0) {
> +            qcow2_free_clusters(bs, l2meta->alloc_offset,
> +                                l2meta->nb_clusters << s->cluster_bits,
> +                                QCOW2_DISCARD_ALWAYS);
> +        }
> +
>          if (l2meta->nb_clusters != 0) {
>              QLIST_REMOVE(l2meta, next_in_flight);
>          }

This feels a bit risky.

I think currently it does work, because qcow2_alloc_cluster_link_l2()
can only return an error when it didn't update the L2 entry in the cache
yet, but adding any error condition between that point and the L2Meta
unwinding would result in corruption. I'm unsure, but perhaps a cluster
leak is the lesser evil. Did you consider this? Do other people have an
opinion on it?

Also, shouldn't it be QCOW2_DISCARD_OTHER?

Kevin
Max Reitz - Oct. 10, 2013, 12:54 p.m.
On 2013-10-10 14:26, Kevin Wolf wrote:
> Am 10.10.2013 um 10:52 hat Max Reitz geschrieben:
>> If the write request spans more than one L2 table,
>> qcow2_alloc_cluster_offset cannot handle the required allocations
>> atomically. This results in leaks if it allocated new clusters in any
>> but the last L2 table touched and an error occurs in qcow2_co_writev
>> before having established the L2 link. These non-atomic allocations
>> were, however, indeed successful and are therefore given to the caller
>> in the L2Meta list.
>>
>> If an error occurs in qcow2_co_writev and the L2Meta list is unwound,
>> all its remaining entries are clusters whose L2 links were not yet
>> established. Thus, all allocations in that list should be undone.
>>
>> Signed-off-by: Max Reitz <mreitz@redhat.com>
>> ---
>>   block/qcow2.c | 7 +++++++
>>   1 file changed, 7 insertions(+)
>>
>> diff --git a/block/qcow2.c b/block/qcow2.c
>> index b2489fb..6bedd5d 100644
>> --- a/block/qcow2.c
>> +++ b/block/qcow2.c
>> @@ -1017,6 +1017,13 @@ fail:
>>       while (l2meta != NULL) {
>>           QCowL2Meta *next;
>>   
>> +        /* Undo all leaked allocations */
>> +        if (l2meta->nb_clusters != 0) {
>> +            qcow2_free_clusters(bs, l2meta->alloc_offset,
>> +                                l2meta->nb_clusters << s->cluster_bits,
>> +                                QCOW2_DISCARD_ALWAYS);
>> +        }
>> +
>>           if (l2meta->nb_clusters != 0) {
>>               QLIST_REMOVE(l2meta, next_in_flight);
>>           }
> This feels a bit risky.
>
> I think currently it does work, because qcow2_alloc_cluster_link_l2()
> can only return an error when it didn't update the L2 entry in the cache
> yet, but adding any error condition between that point and the L2Meta
> unwinding would result in corruption. I'm unsure, but perhaps a cluster
> leak is the lesser evil. Did you consider this? Do other people have an
> opinion on it?

What error conditions are there which can occur between 
qcow2_alloc_cluster_link_l2 and the L2Meta unwinding? If all 
qcow2_alloc_cluster_link_l2 calls are successful, the list is empty and 
the while loop either goes into another iteration or the function 
returns successfully (without any further need to unwind the list). If 
some call fails, all previous (successful) calls have already been 
removed from the list, therefore the unwinding only affects L2Meta 
requests with failed calls to qcow2_alloc_cluster_link_l2 (or ones where 
that function wasn't called at all).

If the "currently" implied that this will turn out bad if there is a new 
error condition between a successful call to qcow2_alloc_cluster_link_l2 
and the removal of the L2Meta request from the list: Yes, that's true, 
of course. However, as you've said, currently, there is no such 
condition; and I don't see why it should be introduced. The sole purpose 
of the list seems to be (to me) to execute qcow2_alloc_cluster_link_l2 
on each of its elements. Thus, as soon as qcow2_alloc_cluster_link_l2 
is successful, the corresponding request should be removed from the list.

So, in case you do agree that it currently works fine, I would not 
consider it risky; if this patch is applied and some time in the future 
anything introduces a "goto fail" between qcow2_alloc_cluster_link_l2 
and l2_meta = next, this patch would simply have to make sure that 
qcow2_free_clusters isn't called in this case. In the probably very 
unlikely case all my previous assumptions and conclusions were true, I'd 
just add a comment in the qcow2_alloc_cluster_link_l2 loop informing 
about this case (“If you add a goto fail here, make sure to pay 
attention” or something along these lines).

> Also, shouldn't it be QCOW2_DISCARD_OTHER?

I'm always unsure about the discard flags. ;-)

I try to follow the rule of “use the specific type (or ‘other’) for 
freeing ‘out of the blue’, but use ‘always’ if it's just a very recent 
allocation that is being undone again”. I'd gladly accept better 
recommendations. ;-)

Max
Kevin Wolf - Oct. 10, 2013, 1:57 p.m.
Am 10.10.2013 um 14:54 hat Max Reitz geschrieben:
> If the "currently" implied that this will turn out bad if there is a
> new error condition between a successful call to
> qcow2_alloc_cluster_link_l2 and the removal of the L2Meta request
> from the list: Yes, that's true, of course.

Yes, that's the scenario. It seems easy to miss the error handling path
when reviewing a change to this code.

> However, as you've said,
> currently, there is no such condition; and I don't see why it should
> be introduced. The sole purpose of the list seems to be (to me) to
> execute qcow2_alloc_cluster_link_l2 on each of its elements. Thus,
> as soon as qcow2_alloc_cluster_link_l2 is successful, the
> corresponding request should be removed from the list.

If anything isn't complex enough in qcow2, think about how things will
turn out with Delayed COW and chances are that it does become complex.

For example, you can then have other requests running in parallel which
use the newly allocated cluster. You may decrease the refcount only
after the last of them has completed. This is just the first case that
comes to mind, I'm sure there's more fun to be had.

> So, in case you do agree that it currently works fine, I would not
> consider it risky; if this patch is applied and some time in the
> future anything introduces a "goto fail" between
> qcow2_alloc_cluster_link_l2 and l2_meta = next, this patch would
> simply have to make sure that qcow2_free_clusters isn't called in
> this case. In the probably very unlikely case all my previous
> assumptions and conclusions were true, I'd just add a comment in the
> qcow2_alloc_cluster_link_l2 loop informing about this case (“If you
> add a goto fail here, make sure to pay attention” or something along
> these lines).

Adding a comment there sounds like a fair compromise.

> >Also, shouldn't it be QCOW2_DISCARD_OTHER?
> 
> I'm always unsure about the discard flags. ;-)
> 
> I try to follow the rule of “use the specific type (or ‘other’) for
> freeing ‘out of the blue’, but use ‘always’ if it's just a very
> recent allocation that is being undone again”. I'd gladly accept
> better recommendations. ;-)

To be honest, I'm not sure if there are any legitimate use cases for
'always'... Discard is a slow operation, so unless there's a specific
reason anyway, I'd default to 'other' (or a specific type, of course).

Kevin
Max Reitz - Oct. 10, 2013, 2:01 p.m.
On 2013-10-10 15:57, Kevin Wolf wrote:
> Am 10.10.2013 um 14:54 hat Max Reitz geschrieben:
>> If the "currently" implied that this will turn out bad if there is a
>> new error condition between a successful call to
>> qcow2_alloc_cluster_link_l2 and the removal of the L2Meta request
>> from the list: Yes, that's true, of course.
> Yes, that's the scenario. It seems easy to miss the error handling path
> when reviewing a change to this code.
>
>> However, as you've said,
>> currently, there is no such condition; and I don't see why it should
>> be introduced. The sole purpose of the list seems to be (to me) to
>> execute qcow2_alloc_cluster_link_l2 on each of its elements. Thus,
>> as soon as qcow2_alloc_cluster_link_l2 is successful, the
>> corresponding request should be removed from the list.
> If anything isn't complex enough in qcow2, think about how things will
> turn out with Delayed COW and chances are that it does become complex.
>
> For example, you can then have other requests running in parallel which
> use the newly allocated cluster. You may decrease the refcount only
> after the last of them has completed. This is just the first case that
> comes to mind, I'm sure there's more fun to be had.

Well, yes, I did think of something like this; but I considered that to 
be the problem of qcow2_alloc_cluster_link_l2: once that function has 
been called, we may not be able to remove the request from the in-flight 
list (which is global for the BDS), but we should still remove it from 
the local l2meta list.

>> So, in case you do agree that it currently works fine, I would not
>> consider it risky; if this patch is applied and some time in the
>> future anything introduces a "goto fail" between
>> qcow2_alloc_cluster_link_l2 and l2_meta = next, this patch would
>> simply have to make sure that qcow2_free_clusters isn't called in
>> this case. In the probably very unlikely case all my previous
>> assumptions and conclusions were true, I'd just add a comment in the
>> qcow2_alloc_cluster_link_l2 loop informing about this case (“If you
>> add a goto fail here, make sure to pay attention” or something along
>> these lines).
> Adding a comment there sounds like a fair compromise.

Okay, I'll add it.

>>> Also, shouldn't it be QCOW2_DISCARD_OTHER?
>> I'm always unsure about the discard flags. ;-)
>>
>> I try to follow the rule of “use the specific type (or ‘other’) for
>> freeing ‘out of the blue’, but use ‘always’ if it's just a very
>> recent allocation that is being undone again”. I'd gladly accept
>> better recommendations. ;-)
> To be honest, I'm not sure if there are any legitimate use cases for
> 'always'... Discard is a slow operation, so unless there's a specific
> reason anyway, I'd default to 'other' (or a specific type, of course).

Seems easy enough to remember, thanks. ;-)

Max
Stefan Hajnoczi - Oct. 11, 2013, 9:15 a.m.
On Thu, Oct 10, 2013 at 02:26:25PM +0200, Kevin Wolf wrote:
> Am 10.10.2013 um 10:52 hat Max Reitz geschrieben:
> > If the write request spans more than one L2 table,
> > qcow2_alloc_cluster_offset cannot handle the required allocations
> > atomically. This results in leaks if it allocated new clusters in any
> > but the last L2 table touched and an error occurs in qcow2_co_writev
> > before having established the L2 link. These non-atomic allocations
> > were, however, indeed successful and are therefore given to the caller
> > in the L2Meta list.
> > 
> > If an error occurs in qcow2_co_writev and the L2Meta list is unwound,
> > all its remaining entries are clusters whose L2 links were not yet
> > established. Thus, all allocations in that list should be undone.
> > 
> > Signed-off-by: Max Reitz <mreitz@redhat.com>
> > ---
> >  block/qcow2.c | 7 +++++++
> >  1 file changed, 7 insertions(+)
> > 
> > diff --git a/block/qcow2.c b/block/qcow2.c
> > index b2489fb..6bedd5d 100644
> > --- a/block/qcow2.c
> > +++ b/block/qcow2.c
> > @@ -1017,6 +1017,13 @@ fail:
> >      while (l2meta != NULL) {
> >          QCowL2Meta *next;
> >  
> > +        /* Undo all leaked allocations */
> > +        if (l2meta->nb_clusters != 0) {
> > +            qcow2_free_clusters(bs, l2meta->alloc_offset,
> > +                                l2meta->nb_clusters << s->cluster_bits,
> > +                                QCOW2_DISCARD_ALWAYS);
> > +        }
> > +
> >          if (l2meta->nb_clusters != 0) {
> >              QLIST_REMOVE(l2meta, next_in_flight);
> >          }
> 
> This feels a bit risky.
> 
> I think currently it does work, because qcow2_alloc_cluster_link_l2()
> can only return an error when it didn't update the L2 entry in the cache
> yet, but adding any error condition between that point and the L2Meta
> unwinding would result in corruption. I'm unsure, but perhaps a cluster
> leak is the lesser evil. Did you consider this? Do other people have an
> opinion on it?

I agree it's easy to make things worse by trying to free clusters in an
error path.  I find these types of changes a lot of effort (to write and
review properly) for relatively little gain.

Stefan

Patch

diff --git a/block/qcow2.c b/block/qcow2.c
index b2489fb..6bedd5d 100644
--- a/block/qcow2.c
+++ b/block/qcow2.c
@@ -1017,6 +1017,13 @@  fail:
     while (l2meta != NULL) {
         QCowL2Meta *next;
 
+        /* Undo all leaked allocations */
+        if (l2meta->nb_clusters != 0) {
+            qcow2_free_clusters(bs, l2meta->alloc_offset,
+                                l2meta->nb_clusters << s->cluster_bits,
+                                QCOW2_DISCARD_ALWAYS);
+        }
+
         if (l2meta->nb_clusters != 0) {
             QLIST_REMOVE(l2meta, next_in_flight);
         }