
[RFC,1/2] close all the block drivers before the qemu process exits

Message ID 1273661213-19611-2-git-send-email-morita.kazutaka@lab.ntt.co.jp
State: New

Commit Message

MORITA Kazutaka May 12, 2010, 10:46 a.m. UTC
This patch calls the close handler of the block driver before the qemu
process exits.

This is necessary because the sheepdog block driver releases the lock
of VM images in the close handler.

Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
---
 block.c   |   11 +++++++++++
 block.h   |    1 +
 monitor.c |    1 +
 vl.c      |    1 +
 4 files changed, 14 insertions(+), 0 deletions(-)

Comments

Christoph Hellwig May 12, 2010, 2:01 p.m. UTC | #1
On Wed, May 12, 2010 at 07:46:52PM +0900, MORITA Kazutaka wrote:
> This patch calls the close handler of the block driver before the qemu
> process exits.
> 
> This is necessary because the sheepdog block driver releases the lock
> of VM images in the close handler.
> 
> Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>

Looks good in principle, except that bdrv_first is gone and has been
replaced with a real list in the meantime, so this won't even apply.
Avi Kivity May 12, 2010, 2:28 p.m. UTC | #2
On 05/12/2010 01:46 PM, MORITA Kazutaka wrote:
> This patch calls the close handler of the block driver before the qemu
> process exits.
>
> This is necessary because the sheepdog block driver releases the lock
> of VM images in the close handler.
>
>    

How do you handle abnormal termination?
MORITA Kazutaka May 12, 2010, 7:50 p.m. UTC | #3
On 2010/05/12 23:01, Christoph Hellwig wrote:
> On Wed, May 12, 2010 at 07:46:52PM +0900, MORITA Kazutaka wrote:
>> This patch calls the close handler of the block driver before the qemu
>> process exits.
>>
>> This is necessary because the sheepdog block driver releases the lock
>> of VM images in the close handler.
>>
>> Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
> 
> Looks good in principle, except that bdrv_first is gone and has been
> replaced with a real list in the meantime, so this won't even apply.
> 

Thank you for your comment.
I'll rebase it and resend an updated version in the next few days.
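
The rebased version can simply walk the new list; presumably
something like this (a sketch, assuming the list head in block.c is
named bdrv_states):

    void bdrv_close_all(void)
    {
        BlockDriverState *bs;

        QTAILQ_FOREACH(bs, &bdrv_states, list) {
            /* bdrv_close() checks bs->drv itself, so no per-driver
               test is needed here */
            bdrv_close(bs);
        }
    }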

Thanks,

Kazutaka
MORITA Kazutaka May 12, 2010, 8:16 p.m. UTC | #4
On 2010/05/12 23:28, Avi Kivity wrote:
> On 05/12/2010 01:46 PM, MORITA Kazutaka wrote:
>> This patch calls the close handler of the block driver before the qemu
>> process exits.
>>
>> This is necessary because the sheepdog block driver releases the lock
>> of VM images in the close handler.
>>
>>    
> 
> How do you handle abnormal termination?
> 

In that case, we unfortunately need to release the lock manually.
The Sheepdog admin tool has a command to do that.

Thanks,

Kazutaka
MORITA Kazutaka May 13, 2010, 2:34 a.m. UTC | #5
At Thu, 13 May 2010 05:16:35 +0900,
MORITA Kazutaka wrote:
> 
> On 2010/05/12 23:28, Avi Kivity wrote:
> > On 05/12/2010 01:46 PM, MORITA Kazutaka wrote:
> >> This patch calls the close handler of the block driver before the qemu
> >> process exits.
> >>
> >> This is necessary because the sheepdog block driver releases the lock
> >> of VM images in the close handler.
> >>
> >>    
> > 
> > How do you handle abnormal termination?
> > 
> 
> In that case, we unfortunately need to release the lock manually.
> The Sheepdog admin tool has a command to do that.
> 

More precisely, if qemu goes down together with its host machine, we
detect the qemu failure and release the lock.  This is because
Sheepdog currently assumes that all the qemu processes are in the
sheepdog cluster and remembers where they are running.  When a
machine fails, sheepdog releases the locks of the VMs on that machine
(we use corosync to check which machines are alive).
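
Roughly, the idea is to hook the lock release into the CPG
configuration-change callback (a sketch, not the actual sheepdog
code; release_vm_locks_on_node() is a hypothetical stand-in for our
lock-release path):

    #include <stdint.h>
    #include <stdio.h>
    #include <corosync/cpg.h>

    /* Hypothetical stand-in for the real lock-release path: free
       every image lock whose holder ran on the failed node. */
    static void release_vm_locks_on_node(uint32_t nodeid)
    {
        printf("releasing locks held on node %u\n", nodeid);
    }

    static void confchg_cb(cpg_handle_t handle,
                           const struct cpg_name *group_name,
                           const struct cpg_address *member_list,
                           size_t member_list_entries,
                           const struct cpg_address *left_list,
                           size_t left_list_entries,
                           const struct cpg_address *joined_list,
                           size_t joined_list_entries)
    {
        size_t i;

        /* every node in left_list took its qemu processes down with
           it, so the locks they held can be reclaimed */
        for (i = 0; i < left_list_entries; i++) {
            release_vm_locks_on_node(left_list[i].nodeid);
        }
    }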

If the qemu process exits abnormally while its host machine stays
alive, the sheepdog daemon on that host needs to detect the qemu
failure.  However, this feature is not implemented yet.  We are
thinking of watching the socket connection between qemu and the
sheepdog daemon to detect the failure.  For now, we have to release
the lock manually with the admin tool in this case.
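
Once we implement it, the detection itself should be simple; roughly
like the following (a sketch with hypothetical helper names; the
daemon treats EOF or hang-up on the per-VM connection as a qemu
failure):

    #include <poll.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical stand-in: look up the image lock owned by this
       connection and release it in the cluster. */
    static void release_vm_lock(int conn_fd)
    {
        printf("qemu on fd %d is gone, releasing its lock\n", conn_fd);
    }

    static void watch_qemu_connection(int conn_fd)
    {
        struct pollfd pfd = { .fd = conn_fd, .events = POLLIN };
        char buf[4096];

        for (;;) {
            if (poll(&pfd, 1, -1) < 0) {
                continue;                  /* interrupted; retry */
            }
            if (pfd.revents & (POLLHUP | POLLERR)) {
                release_vm_lock(conn_fd);  /* qemu died */
                return;
            }
            if (read(conn_fd, buf, sizeof(buf)) == 0) {
                release_vm_lock(conn_fd);  /* EOF: qemu exited */
                return;
            }
            /* otherwise: normal protocol traffic, handled elsewhere */
        }
    }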

Thanks,

Kazutaka

Patch

diff --git a/block.c b/block.c
index 7326bfe..a606820 100644
--- a/block.c
+++ b/block.c
@@ -526,6 +526,17 @@  void bdrv_close(BlockDriverState *bs)
     }
 }
 
+void bdrv_close_all(void)
+{
+    BlockDriverState *bs, *n;
+
+    for (bs = bdrv_first, n = bs ? bs->next : NULL; bs; bs = n, n = bs ? bs->next : NULL) {
+        if (bs->drv && bs->drv->bdrv_close) {
+            bs->drv->bdrv_close(bs);
+        }
+    }
+}
+
 void bdrv_delete(BlockDriverState *bs)
 {
     BlockDriverState **pbs;
diff --git a/block.h b/block.h
index fa51ddf..1a1293a 100644
--- a/block.h
+++ b/block.h
@@ -123,6 +123,7 @@  BlockDriverAIOCB *bdrv_aio_ioctl(BlockDriverState *bs,
 /* Ensure contents are flushed to disk.  */
 void bdrv_flush(BlockDriverState *bs);
 void bdrv_flush_all(void);
+void bdrv_close_all(void);
 
 int bdrv_is_allocated(BlockDriverState *bs, int64_t sector_num, int nb_sectors,
 	int *pnum);
diff --git a/monitor.c b/monitor.c
index 17e59f5..44bfe83 100644
--- a/monitor.c
+++ b/monitor.c
@@ -845,6 +845,7 @@  static void do_info_cpu_stats(Monitor *mon)
  */
 static void do_quit(Monitor *mon, const QDict *qdict, QObject **ret_data)
 {
+    bdrv_close_all();
     exit(0);
 }
 
diff --git a/vl.c b/vl.c
index 77677e8..65160ed 100644
--- a/vl.c
+++ b/vl.c
@@ -4205,6 +4205,7 @@  static void main_loop(void)
             vm_stop(r);
         }
     }
+    bdrv_close_all();
     pause_all_vcpus();
 }