From patchwork Wed Nov 8 19:20:43 2017
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 835966
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Date: Wed, 8 Nov 2017 19:20:43 +0000
Message-Id: <20171108192043.10682-3-stefanha@redhat.com>
In-Reply-To: <20171108192043.10682-1-stefanha@redhat.com>
References: <20171108192043.10682-1-stefanha@redhat.com>
Subject: [Qemu-devel] [PULL for-2.11-rc1 v2 2/2] util/async: use atomic_mb_set in qemu_bh_cancel
Cc: Peter Maydell, Stefan Hajnoczi, Sergio Lopez

From: Sergio Lopez

Commit b7a745d added a qemu_bh_cancel call to the completion function
as an optimization to prevent it from unnecessarily rescheduling itself.

This completion function is scheduled from worker_thread, after setting
the state of a ThreadPoolElement to THREAD_DONE.

This was considered to be safe, as the completion function restarts the
loop just after the call to qemu_bh_cancel.
But, as this loop lacks a HW memory barrier, the read of req->state may
actually happen _before_ the call, seeing it still as THREAD_QUEUED, and
ending the completion function without having processed a pending TPE
linked at pool->head:

         worker thread             |    I/O thread
------------------------------------------------------------------------
                                   | speculatively read req->state
req->state = THREAD_DONE;          |
qemu_bh_schedule(p->completion_bh) |
  bh->scheduled = 1;               |
                                   | qemu_bh_cancel(p->completion_bh)
                                   |   bh->scheduled = 0;
                                   | if (req->state == THREAD_DONE)
                                   |   // sees THREAD_QUEUED

The source of the misunderstanding was that qemu_bh_cancel is now being
used by the _consumer_ rather than the producer, and therefore now needs
to have acquire semantics just like e.g. aio_bh_poll.

In some situations, if there are no other independent requests in the
same aio context that could eventually trigger the scheduling of the
completion function, the omitted TPE and all operations pending on it
will get stuck forever.

[Added Sergio's updated wording about the HW memory barrier.
--Stefan]

Signed-off-by: Sergio Lopez
Message-id: 20171108063447.2842-1-slp@redhat.com
Signed-off-by: Stefan Hajnoczi
---
 util/async.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/util/async.c b/util/async.c
index 355af73ee7..0e1bd8780a 100644
--- a/util/async.c
+++ b/util/async.c
@@ -174,7 +174,7 @@ void qemu_bh_schedule(QEMUBH *bh)
  */
 void qemu_bh_cancel(QEMUBH *bh)
 {
-    bh->scheduled = 0;
+    atomic_mb_set(&bh->scheduled, 0);
 }
 
 /* This func is async.The bottom half will do the delete action at the finial
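
[Note: the following is a minimal, self-contained sketch of the ordering
requirement fixed above, using C11 atomics. It is not QEMU code: the names
worker_finish and completion_fn are illustrative, and modelling
atomic_mb_set() as a seq_cst atomic_exchange() is an assumption about its
implementation; its documented contract is a store that also acts as a
full memory barrier.]

#include <stdatomic.h>
#include <stdbool.h>

enum req_state { THREAD_QUEUED, THREAD_DONE };

struct bh  { atomic_int scheduled; };   /* stands in for QEMUBH             */
struct req { atomic_int state; };       /* stands in for ThreadPoolElement  */

/* Worker thread side: publish the result, then schedule the completion BH
 * (roughly the worker_thread + qemu_bh_schedule sequence in the diagram). */
static void worker_finish(struct req *req, struct bh *completion_bh)
{
    atomic_store(&req->state, THREAD_DONE);
    atomic_store(&completion_bh->scheduled, 1);
}

/* I/O thread side: completion function re-checking the request state after
 * cancelling its own bottom half. */
static bool completion_fn(struct req *req, struct bh *completion_bh)
{
    /* Pre-fix behaviour: a plain store, e.g.
     *     atomic_store_explicit(&completion_bh->scheduled, 0,
     *                           memory_order_relaxed);
     * lets the CPU satisfy the later load of req->state before this store,
     * so a request the worker already marked THREAD_DONE can still be seen
     * as THREAD_QUEUED and never be completed. */

    /* Post-fix behaviour: clearing ->scheduled is a full-barrier operation
     * (atomic_mb_set), modelled here as a seq_cst exchange, so the load
     * below cannot be reordered before it. */
    atomic_exchange(&completion_bh->scheduled, 0);

    return atomic_load(&req->state) == THREAD_DONE;
}

With the full-barrier clear in place, the interleaving shown in the table
above can no longer leave a THREAD_DONE element unprocessed.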