From patchwork Wed May 21 02:42:13 2014
X-Patchwork-Submitter: Fam Zheng
X-Patchwork-Id: 350915
From: Fam Zheng
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini, uobergfe@redhat.com, Stefan Hajnoczi
Date: Wed, 21 May 2014 10:42:13 +0800
Message-Id: <1400640133-19836-1-git-send-email-famz@redhat.com>
Subject: [Qemu-devel] [PATCH v3] aio: Fix use-after-free in cancellation path

The current flow of canceling a thread from the THREAD_ACTIVE state is:

1) The caller wants to cancel a request, so it calls thread_pool_cancel.

2) thread_pool_cancel waits on the condition variable elem->check_cancel.

3) The worker thread changes state to THREAD_DONE once the task is done,
notifies elem->check_cancel to allow thread_pool_cancel to continue
execution, and signals the notifier (pool->notifier) to allow the
callback function to be called later. But because of the global mutex,
the notifier won't get processed until steps 4) and 5) are done.

4) thread_pool_cancel continues, leaving the notifier signaled, and just
returns to the caller.

5) The caller thinks the request has been canceled successfully, so it
releases any related data, such as freeing elem->common.opaque.

6) In the next main loop iteration, the notifier handler,
event_notifier_ready, is called. It finds the canceled thread in
THREAD_DONE state, so it calls elem->common.cb with a (likely) dangling
opaque pointer. This is a use-after-free.
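To make the ordering concrete, here is a minimal, self-contained C
sketch of steps 1)-6). It is not QEMU code: all names (struct elem,
worker_finish, buggy_cancel, notifier_ready, my_cb) are hypothetical
stand-ins, and the worker thread plus the global mutex are collapsed
into straight-line calls so that only the problematic ordering remains.

/* Hypothetical stand-ins; not QEMU code. Only the ordering of the
 * six steps above is modeled. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

enum state { THREAD_ACTIVE, THREAD_DONE };

struct elem {
    enum state state;
    void (*cb)(void *opaque);   /* completion callback */
    void *opaque;               /* caller-owned request data */
};

static struct elem *signaled;   /* stands in for pool->notifier */

/* Step 3: worker finishes, marks DONE, leaves the notifier signaled. */
static void worker_finish(struct elem *e)
{
    e->state = THREAD_DONE;
    signaled = e;
}

/* Steps 1-4: cancel waits for the worker (collapsed into a direct
 * call here) and returns WITHOUT processing the pending notification. */
static void buggy_cancel(struct elem *e)
{
    worker_finish(e);           /* stands in for waiting on check_cancel */
    /* bug: returns here; the notification is still pending */
}

/* Step 6: the next main-loop iteration runs the notifier handler,
 * which invokes the callback -- opaque may already have been freed. */
static void notifier_ready(void)
{
    if (signaled && signaled->state == THREAD_DONE) {
        signaled->cb(signaled->opaque);     /* use-after-free here */
        signaled = NULL;
    }
}

static void my_cb(void *opaque)
{
    printf("callback sees: %s\n", (char *)opaque);
}

int main(void)
{
    struct elem e = { THREAD_ACTIVE, my_cb, strdup("request data") };

    buggy_cancel(&e);   /* steps 1-4: cancel appears to succeed */
    free(e.opaque);     /* step 5: caller releases its data */
    notifier_ready();   /* step 6: callback reads freed memory */
    return 0;
}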
Fix it by calling event_notifier_ready before leaving
thread_pool_cancel.

Test case update: this change lets cancellation complete earlier than
test-thread-pool.c expects, so update the code to handle that case: if
a request has already completed, done_cb has set its .aiocb to NULL, so
skip calling bdrv_aio_cancel on it.

Reported-by: Ulrich Obergfell
Suggested-by: Paolo Bonzini
Signed-off-by: Fam Zheng
---
v3: Call event_notifier_ready after unlock.
---
 tests/test-thread-pool.c | 2 +-
 thread-pool.c            | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/tests/test-thread-pool.c b/tests/test-thread-pool.c
index c1f8e13..aa156bc 100644
--- a/tests/test-thread-pool.c
+++ b/tests/test-thread-pool.c
@@ -180,7 +180,7 @@ static void test_cancel(void)
 
     /* Canceling the others will be a blocking operation. */
     for (i = 0; i < 100; i++) {
-        if (data[i].n != 3) {
+        if (data[i].aiocb && data[i].n != 3) {
             bdrv_aio_cancel(data[i].aiocb);
         }
     }
diff --git a/thread-pool.c b/thread-pool.c
index fbdd3ff..dfb699d 100644
--- a/thread-pool.c
+++ b/thread-pool.c
@@ -224,6 +224,7 @@ static void thread_pool_cancel(BlockDriverAIOCB *acb)
         pool->pending_cancellations--;
     }
     qemu_mutex_unlock(&pool->lock);
+    event_notifier_ready(&pool->notifier);
 }
 
 static const AIOCBInfo thread_pool_aiocb_info = {
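Continuing the toy model from earlier, the fix corresponds to draining
the pending notification before cancel returns, which is what the
event_notifier_ready(&pool->notifier) call added after
qemu_mutex_unlock() does in the real code. fixed_cancel is again a
hypothetical name, reusing worker_finish() and notifier_ready() from
the sketch above.

/* Fix applied to the sketch: drain the notification before returning,
 * so the callback runs while opaque is still valid. */
static void fixed_cancel(struct elem *e)
{
    worker_finish(e);   /* wait for the worker, as before */
    notifier_ready();   /* mirrors event_notifier_ready(&pool->notifier)
                         * added after qemu_mutex_unlock() in the patch */
}

/* The caller's ordering from main() is now safe:
 *
 *     fixed_cancel(&e);   // callback fires in here, opaque still valid
 *     free(e.opaque);     // no later access to the freed pointer
 */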