From patchwork Mon Jun 24 08:57:35 2013
X-Patchwork-Submitter: Stefan Bader
X-Patchwork-Id: 253742
From: Stefan Bader
To: kernel-team@lists.ubuntu.com
Subject: [PATCH] xen: Avoid deadlock in xenUnifiedDomainGetXMLDesc
Date: Mon, 24 Jun 2013 10:57:35 +0200
Message-Id: <1372064255-2421-1-git-send-email-stefan.bader@canonical.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <51C40E72.7090208@canonical.com>
References: <51C40E72.7090208@canonical.com>
List-Id: Kernel team discussions

So this is one way I was able to get this issue resolved. It might not be
ideal, as there may be a slight chance of inconsistencies, but at least it
does not lock up anymore.

-Stefan

---
From f0c28a9f7af84a01bb2c3d71067adefb92e919d9 Mon Sep 17 00:00:00 2001
From: Stefan Bader
Date: Mon, 24 Jun 2013 09:30:20 +0200
Subject: [PATCH] xen: Avoid deadlock in xenUnifiedDomainGetXMLDesc

Commit 95e18efd added the use of virDomainDefPtr to several Xen driver
methods. That structure is obtained by calling xenGetDomainDefForDom(),
which acquires the mutex on the priv pointer. Unfortunately, (at least)
xenUnifiedDomainGetXMLDesc already takes that mutex and now locks up
while calling into xenDomainUsedCpus():

xenDomainUsedCpus
  ...
  nb_vcpu = xenUnifiedDomainGetMaxVcpus(dom);
    return xenUnifiedDomainGetVcpusFlags(...)
      ...
      if (!(def = xenGetDomainDefForDom(dom)))
        return xenGetDomainDefForUUID(dom->conn, dom->uuid);
          ...
          ret = xenHypervisorLookupDomainByUUID(conn, uuid);
            ...
            xenUnifiedLock(priv);
            name = xenStoreDomainGetName(conn, id);
            xenUnifiedUnlock(priv);
  ...
  if ((ncpus = xenUnifiedDomainGetVcpus(dom, cpuinfo, nb_vcpu, ...
    ...
    if (!(def = xenGetDomainDefForDom(dom)))
      [again like above]

The deadlock is observable when connecting to a Xen host using the old
xm stack and trying to obtain domain XML data from any running guest
(like "dumpxml 0").

This is a minimal fix that tries to avoid the deadlock. I am not
completely sure it does not leave a small window for getting
inconsistent data.
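For illustration only, here is a minimal standalone sketch of the pattern
the trace above runs into (priv_lock, lookup_by_uuid and get_xml_desc are
made-up names, not libvirt functions): a thread that already holds a
non-recursive mutex tries to take it again further down its own call
stack and blocks forever.

#include <pthread.h>
#include <stdio.h>

/* Hypothetical stand-in for the per-connection priv mutex. A default
 * pthread mutex is non-recursive, so re-locking it from the same
 * thread does not return. */
static pthread_mutex_t priv_lock = PTHREAD_MUTEX_INITIALIZER;

/* Deep helper, compare xenHypervisorLookupDomainByUUID() calling
 * xenUnifiedLock(priv): it takes the same lock again. */
static void lookup_by_uuid(void)
{
    pthread_mutex_lock(&priv_lock);   /* blocks: caller already holds it */
    /* ... the xenStoreDomainGetName() equivalent would run here ... */
    pthread_mutex_unlock(&priv_lock);
}

/* Top-level entry, compare xenUnifiedDomainGetXMLDesc(): it holds the
 * lock while calling down into the helper chain. */
static void get_xml_desc(void)
{
    pthread_mutex_lock(&priv_lock);
    lookup_by_uuid();                 /* re-enters priv_lock -> deadlock */
    pthread_mutex_unlock(&priv_lock);
}

int main(void)
{
    get_xml_desc();                   /* never returns */
    printf("not reached\n");
    return 0;
}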
Signed-off-by: Stefan Bader
---
 src/xen/xen_driver.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/src/xen/xen_driver.c b/src/xen/xen_driver.c
index 3efc27a..64cc305 100644
--- a/src/xen/xen_driver.c
+++ b/src/xen/xen_driver.c
@@ -189,6 +189,7 @@ xenDomainUsedCpus(virDomainPtr dom)
     unsigned char *cpumap = NULL;
     size_t cpumaplen;
     int nb = 0;
+    int nbNodeCpus;
     int n, m;
     virVcpuInfoPtr cpuinfo = NULL;
     virNodeInfo nodeinfo;
@@ -198,8 +199,11 @@ xenDomainUsedCpus(virDomainPtr dom)
         return NULL;
 
     priv = dom->conn->privateData;
+    xenUnifiedLock(priv);
+    nbNodeCpus = priv->nbNodeCpus;
+    xenUnifiedUnlock(priv);
 
-    if (priv->nbNodeCpus <= 0)
+    if (nbNodeCpus <= 0)
         return NULL;
     nb_vcpu = xenUnifiedDomainGetMaxVcpus(dom);
     if (nb_vcpu <= 0)
@@ -207,7 +211,7 @@ xenDomainUsedCpus(virDomainPtr dom)
     if (xenUnifiedNodeGetInfo(dom->conn, &nodeinfo) < 0)
         return NULL;
 
-    if (!(cpulist = virBitmapNew(priv->nbNodeCpus))) {
+    if (!(cpulist = virBitmapNew(nbNodeCpus))) {
         virReportOOMError();
         goto done;
     }
@@ -225,7 +229,7 @@ xenDomainUsedCpus(virDomainPtr dom)
     if ((ncpus = xenUnifiedDomainGetVcpus(dom, cpuinfo, nb_vcpu,
                                           cpumap, cpumaplen)) >= 0) {
         for (n = 0; n < ncpus; n++) {
-            for (m = 0; m < priv->nbNodeCpus; m++) {
+            for (m = 0; m < nbNodeCpus; m++) {
                 bool used;
                 ignore_value(virBitmapGetBit(cpulist, m, &used));
                 if ((!used) &&
@@ -233,7 +237,7 @@ xenDomainUsedCpus(virDomainPtr dom)
                     ignore_value(virBitmapSetBit(cpulist, m));
                     nb++;
                     /* if all CPU are used just return NULL */
-                    if (nb == priv->nbNodeCpus)
+                    if (nb == nbNodeCpus)
                         goto done;
 
             }
@@ -1403,9 +1407,7 @@ xenUnifiedDomainGetXMLDesc(virDomainPtr dom, unsigned int flags)
         def = xenXMDomainGetXMLDesc(dom->conn, minidef);
     } else {
         char *cpus;
-        xenUnifiedLock(priv);
         cpus = xenDomainUsedCpus(dom);
-        xenUnifiedUnlock(priv);
         def = xenDaemonDomainGetXMLDesc(dom->conn, minidef, cpus);
         VIR_FREE(cpus);
     }
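The hunks in xenDomainUsedCpus() follow the usual "copy under the lock,
then release" pattern. A rough standalone sketch of that pattern (struct
priv_data, count_used_cpus and the field names are invented for
illustration, not libvirt APIs):

#include <pthread.h>

/* Hypothetical shared state standing in for xenUnifiedPrivatePtr. */
struct priv_data {
    pthread_mutex_t lock;    /* compare xenUnifiedLock()/xenUnifiedUnlock() */
    int nb_node_cpus;        /* compare priv->nbNodeCpus */
};

/* Copy the shared field into a local while the lock is held, drop the
 * lock, then work on the snapshot. The value may go stale afterwards
 * (the "small window" mentioned above), but no lock is held across the
 * long call chain, so helpers may take the lock again without
 * deadlocking. */
static int count_used_cpus(struct priv_data *priv)
{
    int nb_node_cpus;

    pthread_mutex_lock(&priv->lock);
    nb_node_cpus = priv->nb_node_cpus;    /* snapshot under the lock */
    pthread_mutex_unlock(&priv->lock);

    if (nb_node_cpus <= 0)
        return -1;

    /* ... call helpers that may themselves lock priv, as the chain
     *     under xenDomainUsedCpus() does ... */
    return nb_node_cpus;
}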