Patchwork [01/10] qcow2: Fix error handling in qcow2_grow_l1_table

Submitter Kevin Wolf
Date Jan. 18, 2010, 12:11 p.m.
Message ID <1263816696-24122-2-git-send-email-kwolf@redhat.com>
Permalink /patch/43063/
State New

Comments

Kevin Wolf - Jan. 18, 2010, 12:11 p.m.
Return the appropriate error value instead of always using EIO. Don't free the
L1 table on errors, we still need it.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 block/qcow2-cluster.c |    8 ++++----
 1 files changed, 4 insertions(+), 4 deletions(-)
Christoph Hellwig - Jan. 19, 2010, 10:58 a.m.
On Mon, Jan 18, 2010 at 01:11:27PM +0100, Kevin Wolf wrote:
> Return the appropriate error value instead of always using EIO. Don't free the
> L1 table on errors, we still need it.
> 
> Signed-off-by: Kevin Wolf <kwolf@redhat.com>

Looks correct, but shouldn't we free the clusters for the new l1
table if writing to it fails?  At least depending on whether it's EIO,
in which case we could assume the sectors on disk to be worn out.
Kevin Wolf - Jan. 19, 2010, 11:25 a.m.
On 19.01.2010 11:58, Christoph Hellwig wrote:
> On Mon, Jan 18, 2010 at 01:11:27PM +0100, Kevin Wolf wrote:
>> Return the appropriate error value instead of always using EIO. Don't free the
>> L1 table on errors, we still need it.
>>
>> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> 
> Looks correct, but shouldn't we free the clusters for the new l1
> table if writing to it fails?  At least depending on whether it's EIO,
> in which case we could assume the sectors on disk to be worn out.

Agreed, we should try to free them. At least in RHEV, the most likely
case for failure will be ENOSPC and we're going to succeed with the free
in that case.

This will become the first patch of the second series then. ;-)

Kevin
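The follow-up Kevin agrees to above (freeing the clusters allocated for
the new L1 table when the header write fails) could be sketched roughly
as below. This is only an illustration of the idea from the thread, not
the actual follow-up patch; the allocator and the write are stand-ins
for qcow2_alloc_clusters()/qcow2_free_clusters() and bdrv_pwrite():

```c
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical cluster tracker standing in for qcow2's allocator, so the
 * free-on-failure idea can be shown self-contained. */
static bool cluster_allocated;

static uint64_t alloc_clusters(void)
{
    cluster_allocated = true;
    return 0x10000; /* pretend offset of the new L1 table */
}

static void free_clusters(uint64_t offset)
{
    (void)offset;
    cluster_allocated = false;
}

/* Sketch of the follow-up discussed above: on a failed header write, free
 * the clusters that were allocated for the new L1 table. With ENOSPC the
 * free is expected to succeed; with EIO it is best-effort. */
static int grow_l1_sketch(int write_errno)
{
    uint64_t new_offset = alloc_clusters();
    int ret = write_errno ? -write_errno : 0; /* simulated bdrv_pwrite */
    if (ret < 0) {
        free_clusters(new_offset); /* undo the allocation on failure */
        return ret;
    }
    return 0;
}
```

On success, the real code would instead free the *old* table's clusters
and keep the new ones, as the patch below shows.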

Patch

diff --git a/block/qcow2-cluster.c b/block/qcow2-cluster.c
index f88118c..adddf56 100644
--- a/block/qcow2-cluster.c
+++ b/block/qcow2-cluster.c
@@ -67,9 +67,10 @@  int qcow2_grow_l1_table(BlockDriverState *bs, int min_size)
     /* set new table */
     cpu_to_be32w((uint32_t*)data, new_l1_size);
     cpu_to_be64w((uint64_t*)(data + 4), new_l1_table_offset);
-    if (bdrv_pwrite(s->hd, offsetof(QCowHeader, l1_size), data,
-                sizeof(data)) != sizeof(data))
+    ret = bdrv_pwrite(s->hd, offsetof(QCowHeader, l1_size), data, sizeof(data));
+    if (ret != sizeof(data)) {
         goto fail;
+    }
     qemu_free(s->l1_table);
     qcow2_free_clusters(bs, s->l1_table_offset, s->l1_size * sizeof(uint64_t));
     s->l1_table_offset = new_l1_table_offset;
@@ -77,8 +78,7 @@  int qcow2_grow_l1_table(BlockDriverState *bs, int min_size)
     s->l1_size = new_l1_size;
     return 0;
  fail:
-    qemu_free(s->l1_table);
-    return -EIO;
+    return ret < 0 ? ret : -EIO;
 }
 
 void qcow2_l2_cache_reset(BlockDriverState *bs)
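The net effect of the patch can be sketched in isolation. The helper
below is a hypothetical stand-in for bdrv_pwrite() (which returns the
number of bytes written on success, or a negative errno on failure); the
caller propagates that errno instead of collapsing every failure into
-EIO, keeping -EIO only as the fallback for a short write:

```c
#include <errno.h>
#include <stddef.h>

/* Hypothetical stand-in for bdrv_pwrite(): returns the number of bytes
 * written on success, or a negative errno on failure. */
static int mock_pwrite(int fail_errno, size_t len)
{
    if (fail_errno) {
        return -fail_errno;
    }
    return (int)len;
}

/* Sketch of the corrected error path: propagate the negative errno from
 * the write instead of always returning -EIO. A short write (non-negative
 * but incomplete return) still maps to -EIO as a fallback. */
static int update_header(int fail_errno)
{
    size_t len = 12; /* size of the header fields being rewritten */
    int ret = mock_pwrite(fail_errno, len);
    if (ret != (int)len) {
        return ret < 0 ? ret : -EIO;
    }
    return 0;
}
```

With this pattern a caller can distinguish, say, -ENOSPC from a genuine
I/O error, which is exactly what the free-on-failure follow-up discussed
in the thread relies on.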