From patchwork Thu May 12 18:42:15 2016
X-Patchwork-Submitter: Maxim Nestratov
X-Patchwork-Id: 622045
From: Maxim Nestratov <mnestratov@virtuozzo.com>
To: qemu-devel@nongnu.org
Cc: den@openvz.org, Maxim Nestratov <mnestratov@virtuozzo.com>
Date: Thu, 12 May 2016 21:42:15 +0300
Message-Id: <1463078535-35462-1-git-send-email-mnestratov@virtuozzo.com>
Subject: [Qemu-devel] [PATCH] migration: fix ram decompression race deadlock

The way decompress_data_with_multi_threads communicates with
do_data_decompress is incorrect. Imagine the following scenario:
do_data_decompress has just finished a decompression, released
param->mutex, and been preempted. In parallel,
decompress_data_with_multi_threads calls start_decompression and then
loops forever waiting for the decompression to complete, which will
never happen, because decomp_param[idx].start is true and
do_data_decompress will never enter the body of its while loop again.
The patch fixes this problem by restructuring the while loop so that it
only waits for the condition; all other actions are moved out of it.
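
To make the lost-request window concrete, this is the shape of the
pre-patch loop (paraphrased from the '-' lines of the diff below). If
param->start is already true when the worker evaluates the loop
condition, the body is skipped entirely: the page is never decompressed
and param->start is never cleared, so the feeding side spins forever.

    qemu_mutex_lock(&param->mutex);
    while (!param->start && !quit_decomp_thread) {
        qemu_cond_wait(&param->cond, &param->mutex);
        /* uncompress() and "param->start = false" live here, i.e. only
         * on the path that has gone through qemu_cond_wait() */
    }
    /* if start was already true: body skipped, request silently lost */
    qemu_mutex_unlock(&param->mutex);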
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
---
 migration/ram.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 3f05738..579bfc0 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2193,18 +2193,18 @@ static void *do_data_decompress(void *opaque)
         qemu_mutex_lock(&param->mutex);
         while (!param->start && !quit_decomp_thread) {
             qemu_cond_wait(&param->cond, &param->mutex);
-            pagesize = TARGET_PAGE_SIZE;
-            if (!quit_decomp_thread) {
-                /* uncompress() will return failed in some case, especially
-                 * when the page is dirted when doing the compression, it's
-                 * not a problem because the dirty page will be retransferred
-                 * and uncompress() won't break the data in other pages.
-                 */
-                uncompress((Bytef *)param->des, &pagesize,
-                           (const Bytef *)param->compbuf, param->len);
-            }
-            param->start = false;
         }
+        pagesize = TARGET_PAGE_SIZE;
+        if (!quit_decomp_thread) {
+            /* uncompress() will return failed in some case, especially
+             * when the page is dirted when doing the compression, it's
+             * not a problem because the dirty page will be retransferred
+             * and uncompress() won't break the data in other pages.
+             */
+            uncompress((Bytef *)param->des, &pagesize,
+                       (const Bytef *)param->compbuf, param->len);
+        }
+        param->start = false;
         qemu_mutex_unlock(&param->mutex);
     }
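
For reference, the pattern the patch restores is the textbook
condition-variable handshake: the wait loop contains nothing but the
wait, and the work plus the completion-flag update follow it, so a
request posted while the worker is not yet (or no longer) waiting is
still picked up. Below is a minimal self-contained pthreads sketch of
that handshake; the names (worker, start, quit, feeder loop) are
illustrative only, not QEMU code.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
    static bool start; /* stand-in for decomp_param[idx].start */
    static bool quit;  /* stand-in for quit_decomp_thread */

    static void *worker(void *opaque)
    {
        (void)opaque;
        for (;;) {
            pthread_mutex_lock(&mutex);
            /* The wait loop only waits: if 'start' was set before we
             * got here, we skip the wait and still do the work below. */
            while (!start && !quit) {
                pthread_cond_wait(&cond, &mutex);
            }
            if (quit) {
                pthread_mutex_unlock(&mutex);
                return NULL;
            }
            printf("decompress one page\n"); /* stand-in for uncompress() */
            start = false; /* completion becomes visible to the feeder */
            pthread_mutex_unlock(&mutex);
        }
    }

    int main(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);

        for (int i = 0; i < 3; i++) {
            /* Post a request, as start_decompression() does. */
            pthread_mutex_lock(&mutex);
            start = true;
            pthread_cond_signal(&cond);
            pthread_mutex_unlock(&mutex);

            /* Busy-wait for completion, mirroring the polling done by
             * decompress_data_with_multi_threads; the mutex is taken
             * around each check to keep the sketch race-free. */
            for (bool busy = true; busy; ) {
                pthread_mutex_lock(&mutex);
                busy = start;
                pthread_mutex_unlock(&mutex);
            }
        }

        pthread_mutex_lock(&mutex);
        quit = true;
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&mutex);
        pthread_join(tid, NULL);
        return 0;
    }

With the pre-patch placement, the work and the "start = false" handoff
sat inside the wait loop, so the fall-through path (condition already
true at the check) did neither, which is exactly the hang described
above.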