From patchwork Tue Oct 31 05:19:08 2017
From: linzhecheng
To: "qemu-devel@nongnu.org"
Date: Tue, 31 Oct 2017 05:19:08 +0000
Subject: [Qemu-devel] [Bug] virtio-blk: qemu will crash if hotplug virtio-blk device failed

I found that hotplugging a virtio-blk device can lead to a QEMU crash.

Reproduction steps:

1. Run a VM named vm001.

2. Create a device XML file, blk-scsi.xml, that contains a wrong configuration for a virtio-blk device.
3. Run the command:

   virsh attach-device vm001 blk-scsi.xml

   Libvirt returns an error:

   error: Failed to attach device from blk-scsi.xml
   error: internal error: unable to execute QEMU command 'device_add': Please set scsi=off for virtio-blk devices in order to use virtio 1.0

   This means the virtio-blk hotplug failed.

4. Suspending or shutting down the VM then leads to a QEMU crash.

Backtrace from gdb:

(gdb) bt
#0 object_get_class (obj=obj@entry=0x0) at qom/object.c:750
#1 0x00007f9a72582e01 in virtio_vmstate_change (opaque=0x7f9a73d10960, running=0, state=) at /mnt/sdb/lzc/code/open/qemu/hw/virtio/virtio.c:2203
#2 0x00007f9a7261ef52 in vm_state_notify (running=running@entry=0, state=state@entry=RUN_STATE_PAUSED) at vl.c:1685
#3 0x00007f9a7252603a in do_vm_stop (state=RUN_STATE_PAUSED) at /mnt/sdb/lzc/code/open/qemu/cpus.c:941
#4 vm_stop (state=state@entry=RUN_STATE_PAUSED) at /mnt/sdb/lzc/code/open/qemu/cpus.c:1807
#5 0x00007f9a7262eb1b in qmp_stop (errp=errp@entry=0x7ffe63e25590) at qmp.c:102
#6 0x00007f9a7262c70a in qmp_marshal_stop (args=, ret=, errp=0x7ffe63e255d8) at qmp-marshal.c:5854
#7 0x00007f9a72897e79 in do_qmp_dispatch (errp=0x7ffe63e255d0, request=0x7f9a76510120, cmds=0x7f9a72ee7980 ) at qapi/qmp-dispatch.c:104
#8 qmp_dispatch (cmds=0x7f9a72ee7980 , request=request@entry=0x7f9a76510120) at qapi/qmp-dispatch.c:131
#9 0x00007f9a725288d5 in handle_qmp_command (parser=, tokens=) at /mnt/sdb/lzc/code/open/qemu/monitor.c:3852
#10 0x00007f9a7289d514 in json_message_process_token (lexer=0x7f9a73ce4498, input=0x7f9a73cc6880, type=JSON_RCURLY, x=36, y=17) at qobject/json-streamer.c:105
#11 0x00007f9a728bb69b in json_lexer_feed_char (lexer=lexer@entry=0x7f9a73ce4498, ch=125 '}', flush=flush@entry=false) at qobject/json-lexer.c:323
#12 0x00007f9a728bb75e in json_lexer_feed (lexer=0x7f9a73ce4498, buffer=, size=) at qobject/json-lexer.c:373
#13 0x00007f9a7289d5d9 in json_message_parser_feed (parser=, buffer=, size=) at qobject/json-streamer.c:124
#14 0x00007f9a7252722e in monitor_qmp_read (opaque=, buf=, size=) at /mnt/sdb/lzc/code/open/qemu/monitor.c:3894
#15 0x00007f9a7284ee1b in tcp_chr_read (chan=, cond=, opaque=) at chardev/char-socket.c:441
#16 0x00007f9a6e03e99a in g_main_context_dispatch () from /usr/lib64/libglib-2.0.so.0
#17 0x00007f9a728a342c in glib_pollfds_poll () at util/main-loop.c:214
#18 os_host_main_loop_wait (timeout=) at util/main-loop.c:261
#19 main_loop_wait (nonblocking=nonblocking@entry=0) at util/main-loop.c:515
#20 0x00007f9a724e7547 in main_loop () at vl.c:1999
#21 main (argc=, argv=, envp=) at vl.c:4877

The problem happens in virtio_vmstate_change(), which is called from vm_state_notify():

static void virtio_vmstate_change(void *opaque, int running, RunState state)
{
    VirtIODevice *vdev = opaque;
    BusState *qbus = qdev_get_parent_bus(DEVICE(vdev));
    VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
    bool backend_run = running && (vdev->status & VIRTIO_CONFIG_S_DRIVER_OK);
    vdev->vm_running = running;

    if (backend_run) {
        virtio_set_status(vdev, vdev->status);
    }

    if (k->vmstate_change) {
        k->vmstate_change(qbus->parent, backend_run);
    }

    if (!backend_run) {
        virtio_set_status(vdev, vdev->status);
    }
}

The vdev's parent_bus is NULL, so qdev_get_parent_bus(DEVICE(vdev)) returns NULL and VIRTIO_BUS_GET_CLASS(qbus) crashes in object_get_class() (frame #0 above). virtio_vmstate_change is added to the vm_change_state_head list in virtio_blk_device_realize() (via virtio_init()), but after the virtio-blk hotplug fails, it is never removed from vm_change_state_head.
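For context, the registration and its removal are meant to be paired: virtio_init() adds the handler and stores the entry in vdev->vmstate, and virtio_cleanup(), reached from the device class's unrealize callback, deletes it. The snippet below is a simplified paraphrase (not a verbatim quote) of the relevant lines in hw/virtio/virtio.c; everything except the handler registration/removal is elided:

/* Simplified paraphrase of hw/virtio/virtio.c -- only the vm-state
 * handler registration and removal are shown. */
void virtio_init(VirtIODevice *vdev, const char *name,
                 uint16_t device_id, size_t config_size)
{
    /* ... virtqueue and config space setup elided ... */
    vdev->vmstate = qemu_add_vm_change_state_handler(virtio_vmstate_change,
                                                     vdev);
}

void virtio_cleanup(VirtIODevice *vdev)
{
    qemu_del_vm_change_state_handler(vdev->vmstate);
}

When device_add fails in virtio_bus_device_plugged() (as with the scsi=off error above), vdc->realize() has already run virtio_init(), but nothing walks the cleanup path, so the handler stays registered and is later invoked for a device that was never attached to a bus.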
I applied a patch as follows:

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 5884ce3..ea532dc 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -2491,6 +2491,7 @@ static void virtio_device_realize(DeviceState *dev, Error **errp)
     virtio_bus_device_plugged(vdev, &err);
     if (err != NULL) {
         error_propagate(errp, err);
+        vdc->unrealize(dev, NULL);
         return;
     }
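The reason this helps is that for virtio-blk the class unrealize callback eventually reaches virtio_cleanup(), which unregisters the stale vm-state handler, undoing what vdc->realize() set up. The sketch below is paraphrased and heavily abbreviated (not a verbatim quote of hw/block/virtio-blk.c), just to show the chain the extra vdc->unrealize() call relies on:

/* Paraphrased, abbreviated sketch -- not a verbatim quote of QEMU code.
 * For virtio-blk, vdc->unrealize is virtio_blk_device_unrealize(); it ends
 * in virtio_cleanup(), which removes the handler added by virtio_init(). */
static void virtio_blk_device_unrealize(DeviceState *dev, Error **errp)
{
    VirtIODevice *vdev = VIRTIO_DEVICE(dev);

    /* ... dataplane and per-device teardown elided ... */
    virtio_cleanup(vdev); /* qemu_del_vm_change_state_handler(vdev->vmstate) */
}

With this change, after the failed device_add the handler is no longer on vm_change_state_head, so a later stop or shutdown no longer dereferences a NULL bus.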