From patchwork Mon Oct 15 09:13:53 2012
X-Patchwork-Submitter: Kevin Wolf
X-Patchwork-Id: 191497
Message-ID: <507BD3D1.3010804@redhat.com>
Date: Mon, 15 Oct 2012 11:13:53 +0200
From: Kevin Wolf
To: Andreas Färber
Cc: qemu-devel@nongnu.org, qemu-stable@nongnu.org, Michael Tokarev, Zhouyi Zhou, Bruce Rogers
Subject: Re: [Qemu-devel] fixing qemu-0.1X endless loop in qcow2_alloc_cluster_offset
In-Reply-To: <50783CCA.902@suse.de>
References: <20120513160331.5b774c93.yizhouzhou@ict.ac.cn> <4FB0F89F.6080306@redhat.com> <4FD74513.2000500@suse.de> <4FD747BF.3020809@redhat.com> <50783CCA.902@suse.de>

On 12.10.2012 17:52, Andreas Färber wrote:
> On 12.06.2012 15:44, Kevin Wolf wrote:
>> On 12.06.2012 15:33, Andreas Färber wrote:
>>> On 14.05.2012 14:20, Kevin Wolf wrote:
>>>> On 13.05.2012 10:03, Zhouyi Zhou wrote:
>>>>> hi all,
>>>>>
>>>>> Sometimes qemu/kvm-0.1x will hang in an endless loop in
>>>>> qcow2_alloc_cluster_offset. After some investigation, I found that
>>>>> in the function posix_aio_process_queue(void *opaque):
>>>>>
>>>>> 440     ret = qemu_paio_error(acb);
>>>>> 441     if (ret == ECANCELED) {
>>>>> 442         /* remove the request */
>>>>> 443         *pacb = acb->next;
>>>>> 444         qemu_aio_release(acb);
>>>>> 445         result = 1;
>>>>> 446     } else if (ret != EINPROGRESS) {
>>>>>
>>>>> In line 444 acb is released, but acb->common.opaque is not. It is
>>>>> later released by the guest OS via ide_dma_cancel, which in turn
>>>>> calls qcow_aio_cancel, which does not check whether its argument
>>>>> is on the in-flight list or not.
>>>>> The fix is as follows: (debian 6's qemu-kvm-0.12.5)
>>>>> #######################################
>>>>> --- block/qcow2.h~	2010-07-27 08:43:53.000000000 +0800
>>>>> +++ block/qcow2.h	2012-05-13 15:51:39.000000000 +0800
>>>>> @@ -143,6 +143,7 @@
>>>>>      QLIST_HEAD(QCowAioDependencies, QCowAIOCB) dependent_requests;
>>>>>
>>>>>      QLIST_ENTRY(QCowL2Meta) next_in_flight;
>>>>> +    int inflight;
>>>>>  } QCowL2Meta;
>>>>> --- block/qcow2.c~	2012-05-13 15:57:09.000000000 +0800
>>>>> +++ block/qcow2.c	2012-05-13 15:57:24.000000000 +0800
>>>>> @@ -349,6 +349,10 @@
>>>>>      QCowAIOCB *acb = (QCowAIOCB *)blockacb;
>>>>>      if (acb->hd_aiocb)
>>>>>          bdrv_aio_cancel(acb->hd_aiocb);
>>>>> +    if (acb->l2meta.inflight) {
>>>>> +        QLIST_REMOVE(&acb->l2meta, next_in_flight);
>>>>> +        acb->l2meta.inflight = 0;
>>>>> +    }
>>>>>      qemu_aio_release(acb);
>>>>>  }
>>>>>
>>>>> @@ -506,6 +510,7 @@
>>>>>      acb->n = 0;
>>>>>      acb->cluster_offset = 0;
>>>>>      acb->l2meta.nb_clusters = 0;
>>>>> +    acb->l2meta.inflight = 0;
>>>>>      QLIST_INIT(&acb->l2meta.dependent_requests);
>>>>>      return acb;
>>>>>  }
>>>>>
>>>>> @@ -534,6 +539,7 @@
>>>>>      /* Take the request off the list of running requests */
>>>>>      if (m->nb_clusters != 0) {
>>>>>          QLIST_REMOVE(m, next_in_flight);
>>>>> +        m->inflight = 0;
>>>>>      }
>>>>>
>>>>>      /*
>>>>> @@ -632,6 +638,7 @@
>>>>>  fail:
>>>>>      if (acb->l2meta.nb_clusters != 0) {
>>>>>          QLIST_REMOVE(&acb->l2meta, next_in_flight);
>>>>> +        acb->l2meta.inflight = 0;
>>>>>      }
>>>>>  done:
>>>>>      if (acb->qiov->niov > 1)
>>>>> --- block/qcow2-cluster.c~	2010-07-27 08:43:53.000000000 +0800
>>>>> +++ block/qcow2-cluster.c	2012-05-13 15:53:53.000000000 +0800
>>>>> @@ -827,6 +827,7 @@
>>>>>      m->offset = offset;
>>>>>      m->n_start = n_start;
>>>>>      m->nb_clusters = nb_clusters;
>>>>> +    m->inflight = 1;
>>>>>
>>>>>  out:
>>>>>      m->nb_available = MIN(nb_clusters << (s->cluster_bits - 9), n_end);
>>>>>
>>>>> Thanks for investigation
>>>>> Zhouyi
>>>>
>>>> The patch looks reasonable to me.
>>>> Note however that while it fixes the hang, it still causes cluster
>>>> leaks. I'm not sure if someone is interested in picking these up
>>>> for old stable releases. Andreas, I think you were going to take
>>>> 0.15? The first version that doesn't have the problem is 1.0.
>>>
>>> Kevin, the policy as I understood it is to cherry-pick patches from
>>> qemu.git into qemu-stable-x.y.git. So I don't think me applying this
>>> patch to stable-0.15 would be right. I don't spot a particular qcow2
>>> fix among our 0.15 backports that I have now pushed. Do you have a
>>> pointer to which one(s) would fix this issue so that I can recheck?
>>
>> It's "fixed" as a side effect of the block layer conversion to
>> coroutines. Not exactly the kind of patches you'd want to cherry-pick
>> for stable-0.15.
>>
>> The better fix for 0.15 could be to backport the new behaviour of
>> coroutine based requests with bdrv_aio_cancel:
>>
>> static void bdrv_aio_co_cancel_em(BlockDriverAIOCB *blockacb)
>> {
>>     qemu_aio_flush();
>> }
>>
>> Using that as the implementation for qcow2_aio_cancel should be safe
>> and fix this problem.
>
> Kevin, I have stable-0.15 in a state where I'm about to tag 0.15.2
> now. The original patch does not have a Signed-off-by nor your
> Acked-by, so I can't apply it as-is. stable-0.15 does not have
> coroutines, so I don't understand what exactly you're suggesting as
> the alternative here: backport the whole coroutine feature including
> the coroutine function above? Or just call qemu_aio_flush() in place
> of what? This is the old qcow2_aio_cancel():

No, that was qcow2_aio_flush. ;-)

What I'm suggesting (not even compile tested!)
is:

Signed-off-by: Kevin Wolf

diff --git a/block/qcow2.c b/block/qcow2.c
index 48e1b95..d665675 100644
--- a/block/qcow2.c
+++ b/block/qcow2.c
@@ -388,10 +388,7 @@ typedef struct QCowAIOCB {

 static void qcow2_aio_cancel(BlockDriverAIOCB *blockacb)
 {
-    QCowAIOCB *acb = container_of(blockacb, QCowAIOCB, common);
-    if (acb->hd_aiocb)
-        bdrv_aio_cancel(acb->hd_aiocb);
-    qemu_aio_release(acb);
+    qemu_aio_flush();
 }

 static AIOPool qcow2_aio_pool = {