From patchwork Wed Jun 10 09:34:08 2015
Date: Wed, 10 Jun 2015 17:34:08 +0800
From: Fam Zheng
To: Christian Borntraeger
Cc: Kevin Wolf, Paolo Bonzini, qemu-devel, Stefan Hajnoczi
Subject: Re: [Qemu-devel] "iothread: release iothread around aio_poll"
 causes random hangs at startup
Message-ID: <20150610093408.GC11648@ad.nay.redhat.com>
In-Reply-To: <557800E0.5020202@de.ibm.com>
References: <556DBF87.2020908@de.ibm.com>
 <20150609022832.GA12817@cpc-pc.redhat.com>
 <5576AB52.8090708@de.ibm.com>
 <20150610021224.GE10873@ad.nay.redhat.com>
 <557800E0.5020202@de.ibm.com>

On Wed, 06/10 11:18, Christian Borntraeger wrote:
> On 10.06.2015 at 04:12, Fam Zheng wrote:
> > On Tue, 06/09 11:01, Christian Borntraeger wrote:
> >> On 09.06.2015 at 04:28, Fam Zheng wrote:
> >>> On Tue, 06/02 16:36, Christian Borntraeger wrote:
> >>>> Paolo,
> >>>>
> >>>> I bisected
> >>>>
> >>>>   commit a0710f7995f914e3044e5899bd8ff6c43c62f916
> >>>>   Author:     Paolo Bonzini
> >>>>   AuthorDate: Fri Feb 20 17:26:52 2015 +0100
> >>>>   Commit:     Kevin Wolf
> >>>>   CommitDate: Tue Apr 28 15:36:08 2015 +0200
> >>>>
> >>>>       iothread: release iothread around aio_poll
> >>>>
> >>>> as the cause of a problem with hanging guests.
> >>>>
> >>>> Having many guests, each with a kernel/ramdisk (via -kernel) and
> >>>> several null block devices, results in hangs. All of the hanging
> >>>> guests are in the partition detection code, waiting for an I/O to
> >>>> return, so it happens very early, maybe even on the first I/O.
> >>>>
> >>>> Reverting that commit "fixes" the hangs.
> >>>> Any ideas?
> >>>
> >>> Christian, I can't reproduce this on my x86 box with virtio-blk-pci.
> >>> Do you have a reproducer for x86? Or could you collect backtraces for
> >>> all the threads in QEMU when it hangs?
> >>>
> >>> My long shot is that the main loop is blocked at
> >>> aio_context_acquire(ctx), while the iothread of that ctx is blocked
> >>> at aio_poll(ctx, blocking).
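
To spell out the shape of the deadlock I have in mind (this is only a
sketch of the suspected interleaving, not code from the tree; ctx is
the IOThread's AioContext):

    /* iothread: enters a blocking poll while still holding the ctx */
    aio_context_acquire(ctx);
    aio_poll(ctx, true);         /* sleeps until an event fires ... */
    aio_context_release(ctx);

    /* main loop, concurrently */
    aio_context_acquire(ctx);    /* ... so this never returns, and the
                                  * guest's request never completes */

If no event ever fires inside aio_poll(), neither thread can make
progress and the guest I/O hangs, which would match what you see.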
> >>
> >> Here is a backtrace on s390. I need 2 or more disks (one is not
> >> enough).
> >
> > It shows that the iothreads and the main loop are all waiting for
> > events, and the vcpu threads are running guest code.
> >
> > It could be the requests being leaked. Do you see this problem with a
> > regular file based image, or with the null-co driver? Maybe we're
> > missing something about the AioContext in block/null.c.
>
> It seems to run fine with normal file based images. As soon as I have
> two or more null-aio devices, it hangs pretty soon when doing a reboot
> loop.
>

Ahh! If it's a reboot loop, the device reset path may be where things
get fishy. I suspect the completion BH used by null-aio may be getting
messed up; that's why I wonder whether null-co:// would work for you.
Could you test that?

Also, could you try the patch below with null-aio://, too? (It reorders
virtio_blk_reset() so that the drain happens before the dataplane is
stopped, not after.)

Thanks,

Fam

diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index cd539aa..c87b444 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -652,15 +652,11 @@ static void virtio_blk_reset(VirtIODevice *vdev)
 {
     VirtIOBlock *s = VIRTIO_BLK(vdev);
 
-    if (s->dataplane) {
-        virtio_blk_data_plane_stop(s->dataplane);
-    }
-
-    /*
-     * This should cancel pending requests, but can't do nicely until there
-     * are per-device request lists.
-     */
     blk_drain_all();
+    if (s->dataplane) {
+        virtio_blk_data_plane_stop(s->dataplane);
+    }
+
     blk_set_enable_write_cache(s->blk, s->original_wce);
 }
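
P.S. To be concrete about null-co:// vs. null-aio://: I mean the
filename passed to -drive, along these lines (an illustrative command
line only; the ids and the virtio-blk-ccw device are examples, adjust
to your actual setup):

    # current reproducer: null backend whose completions go through a BH
    -object iothread,id=iot0 \
    -drive if=none,id=d0,file=null-aio://,format=raw \
    -device virtio-blk-ccw,drive=d0,iothread=iot0

    # same setup, but with the coroutine variant of the null driver
    -drive if=none,id=d0,file=null-co://,format=raw

If null-co:// survives the reboot loop while null-aio:// hangs, that
would point at the BH handling in block/null.c.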