get:
Show a patch.

patch:
Partially update a patch.

put:
Update a patch.
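The three methods above map onto plain HTTP requests against this endpoint. As a sketch using only the Python standard library (the requests are built but not sent; the `Token TOKEN` header and the `"state": "accepted"` payload are placeholders, not values from this page):

```python
import json
from urllib.request import Request

BASE = "http://patchwork.ozlabs.org/api"

# GET: show a patch. Passing this Request to urllib.request.urlopen()
# would return the JSON body shown below.
show = Request(f"{BASE}/patches/2218180/")
print(show.get_method(), show.full_url)

# PATCH: partially update a patch (here: only the "state" field).
# Updates require a maintainer API token; "TOKEN" is a placeholder.
update = Request(
    f"{BASE}/patches/2218180/",
    data=json.dumps({"state": "accepted"}).encode(),
    headers={"Authorization": "Token TOKEN",
             "Content-Type": "application/json"},
    method="PATCH",
)
print(update.get_method())
```

PUT works the same way but replaces all writable fields, so PATCH is usually the safer choice for scripted state changes.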

GET /api/patches/2218180/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 2218180,
    "url": "http://patchwork.ozlabs.org/api/patches/2218180/?format=api",
    "web_url": "http://patchwork.ozlabs.org/project/qemu-devel/patch/20260331150352.256332-5-kwolf@redhat.com/",
    "project": {
        "id": 14,
        "url": "http://patchwork.ozlabs.org/api/projects/14/?format=api",
        "name": "QEMU Development",
        "link_name": "qemu-devel",
        "list_id": "qemu-devel.nongnu.org",
        "list_email": "qemu-devel@nongnu.org",
        "web_url": "",
        "scm_url": "",
        "webscm_url": "",
        "list_archive_url": "",
        "list_archive_url_format": "",
        "commit_url_format": ""
    },
    "msgid": "<20260331150352.256332-5-kwolf@redhat.com>",
    "list_archive_url": null,
    "date": "2026-03-31T15:03:50",
    "name": "[PULL,4/6] monitor: Fix deadlock in monitor_cleanup",
    "commit_ref": null,
    "pull_url": null,
    "state": "new",
    "archived": false,
    "hash": "2aee69fffb4d78a91d968f5e043693107650495c",
    "submitter": {
        "id": 2714,
        "url": "http://patchwork.ozlabs.org/api/people/2714/?format=api",
        "name": "Kevin Wolf",
        "email": "kwolf@redhat.com"
    },
    "delegate": null,
    "mbox": "http://patchwork.ozlabs.org/project/qemu-devel/patch/20260331150352.256332-5-kwolf@redhat.com/mbox/",
    "series": [
        {
            "id": 498214,
            "url": "http://patchwork.ozlabs.org/api/series/498214/?format=api",
            "web_url": "http://patchwork.ozlabs.org/project/qemu-devel/list/?series=498214",
            "date": "2026-03-31T15:03:47",
            "name": "[PULL,1/6] ide: Fix potential assertion failure on VM stop for PIO read error",
            "version": 1,
            "mbox": "http://patchwork.ozlabs.org/series/498214/mbox/"
        }
    ],
    "comments": "http://patchwork.ozlabs.org/api/patches/2218180/comments/",
    "check": "pending",
    "checks": "http://patchwork.ozlabs.org/api/patches/2218180/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org>",
        "X-Original-To": "incoming@patchwork.ozlabs.org",
        "Delivered-To": "patchwork-incoming@legolas.ozlabs.org",
        "Authentication-Results": [
            "legolas.ozlabs.org;\n\tdkim=pass (1024-bit key;\n unprotected) header.d=redhat.com header.i=@redhat.com header.a=rsa-sha256\n header.s=mimecast20190719 header.b=J5PcoOpf;\n\tdkim-atps=neutral",
            "legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=nongnu.org\n (client-ip=209.51.188.17; helo=lists.gnu.org;\n envelope-from=qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org;\n receiver=patchwork.ozlabs.org)"
        ],
        "Received": [
            "from lists.gnu.org (lists.gnu.org [209.51.188.17])\n\t(using TLSv1.2 with cipher ECDHE-ECDSA-AES256-GCM-SHA384 (256/256 bits))\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4flWdl23NGz20wH\n\tfor <incoming@patchwork.ozlabs.org>; Wed, 01 Apr 2026 02:05:47 +1100 (AEDT)",
            "from localhost ([::1] helo=lists1p.gnu.org)\n\tby lists.gnu.org with esmtp (Exim 4.90_1)\n\t(envelope-from <qemu-devel-bounces@nongnu.org>)\n\tid 1w7adZ-0003JT-UR; Tue, 31 Mar 2026 11:04:13 -0400",
            "from eggs.gnu.org ([2001:470:142:3::10])\n by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256)\n (Exim 4.90_1) (envelope-from <kwolf@redhat.com>) id 1w7adY-0003Gy-1B\n for qemu-devel@nongnu.org; Tue, 31 Mar 2026 11:04:12 -0400",
            "from us-smtp-delivery-124.mimecast.com ([170.10.133.124])\n by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256)\n (Exim 4.90_1) (envelope-from <kwolf@redhat.com>) id 1w7adW-0002ZF-Im\n for qemu-devel@nongnu.org; Tue, 31 Mar 2026 11:04:11 -0400",
            "from mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com\n (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by\n relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3,\n cipher=TLS_AES_256_GCM_SHA384) id us-mta-34-GN0xrhozMYCzet6TcjxOEw-1; Tue,\n 31 Mar 2026 11:04:06 -0400",
            "from mx-prod-int-05.mail-002.prod.us-west-2.aws.redhat.com\n (mx-prod-int-05.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.17])\n (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest\n SHA256)\n (No client certificate requested)\n by mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS\n id 42032195609E; Tue, 31 Mar 2026 15:04:05 +0000 (UTC)",
            "from merkur.redhat.com (unknown [10.44.50.38])\n by mx-prod-int-05.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP\n id DACDE1954102; Tue, 31 Mar 2026 15:04:03 +0000 (UTC)"
        ],
        "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;\n s=mimecast20190719; t=1774969450;\n h=from:from:reply-to:subject:subject:date:date:message-id:message-id:\n to:to:cc:cc:mime-version:mime-version:\n content-transfer-encoding:content-transfer-encoding:\n in-reply-to:in-reply-to:references:references;\n bh=4/EQKTbtcH/MX2YkSmC6DrV9C1Uks/s3kAn+UlSgZnk=;\n b=J5PcoOpfEqzTpVBBcpCgGNPy+EBNdBXxldSYQNks2VuZsDjE0r2IkpWQjpgqBKtJ8mv5pk\n kHWb66ybGHTaFm/6bWhpiccfMSfAAJKifNbXhB6WErecAocbU1TlxWUhR/I55wu2Tz5O5n\n 0zAPjqFTau/jp3Q+kvHYv0m748VpA5s=",
        "X-MC-Unique": "GN0xrhozMYCzet6TcjxOEw-1",
        "X-Mimecast-MFC-AGG-ID": "GN0xrhozMYCzet6TcjxOEw_1774969445",
        "From": "Kevin Wolf <kwolf@redhat.com>",
        "To": "qemu-block@nongnu.org",
        "Cc": "kwolf@redhat.com,\n\tqemu-devel@nongnu.org,\n\tpeter.maydell@linaro.org",
        "Subject": "[PULL 4/6] monitor: Fix deadlock in monitor_cleanup",
        "Date": "Tue, 31 Mar 2026 17:03:50 +0200",
        "Message-ID": "<20260331150352.256332-5-kwolf@redhat.com>",
        "In-Reply-To": "<20260331150352.256332-1-kwolf@redhat.com>",
        "References": "<20260331150352.256332-1-kwolf@redhat.com>",
        "MIME-Version": "1.0",
        "Content-Transfer-Encoding": "8bit",
        "X-Scanned-By": "MIMEDefang 3.0 on 10.30.177.17",
        "Received-SPF": "pass client-ip=170.10.133.124; envelope-from=kwolf@redhat.com;\n helo=us-smtp-delivery-124.mimecast.com",
        "X-Spam_score_int": "27",
        "X-Spam_score": "2.7",
        "X-Spam_bar": "++",
        "X-Spam_report": "(2.7 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.54,\n DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1,\n RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H2=-0.01, RCVD_IN_SBL_CSS=3.335,\n RCVD_IN_VALIDITY_CERTIFIED_BLOCKED=1, RCVD_IN_VALIDITY_RPBL_BLOCKED=1,\n SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=no autolearn_force=no",
        "X-Spam_action": "no action",
        "X-BeenThere": "qemu-devel@nongnu.org",
        "X-Mailman-Version": "2.1.29",
        "Precedence": "list",
        "List-Id": "qemu development <qemu-devel.nongnu.org>",
        "List-Unsubscribe": "<https://lists.nongnu.org/mailman/options/qemu-devel>,\n <mailto:qemu-devel-request@nongnu.org?subject=unsubscribe>",
        "List-Archive": "<https://lists.nongnu.org/archive/html/qemu-devel>",
        "List-Post": "<mailto:qemu-devel@nongnu.org>",
        "List-Help": "<mailto:qemu-devel-request@nongnu.org?subject=help>",
        "List-Subscribe": "<https://lists.nongnu.org/mailman/listinfo/qemu-devel>,\n <mailto:qemu-devel-request@nongnu.org?subject=subscribe>",
        "Errors-To": "qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org",
        "Sender": "qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org"
    },
    "content": "From: hongmianquan <hongmianquan@bytedance.com>\n\nDuring qemu_cleanup, if a non-coroutine QMP command (e.g.,\nquery-commands) is concurrently received and processed by the\nmon_iothread, it can lead to a deadlock in monitor_cleanup.\n\nThe root cause is a race condition between the main thread's shutdown\nsequence and the coroutine's dispatching mechanism. When handling a\nnon-coroutine QMP command, qmp_dispatcher_co schedules the actual\ncommand execution as a bottom half in iohandler_ctx and then yields. At\nthis suspended point, qmp_dispatcher_co_busy remains true.\n\nSubsequently, the main thread in monitor_cleanup(), sets\nqmp_dispatcher_co_shutdown, and calls qmp_dispatcher_co_wake(). Since\nqmp_dispatcher_co_busy is already true, the aio_co_wake is skipped. The\nmain thread then enters the AIO_WAIT_WHILE_UNLOCKED loop, it executes\nthe scheduled BH (do_qmp_dispatch_bh) via aio_poll(iohandler_ctx,\nfalse), which attempts to wake up the coroutine, aio_co_wake schedules a\nnew wake-up BH in iohandler_ctx. 
The main thread then blocks\nindefinitely in aio_poll(qemu_aio_context, true), while the coroutine's\nwake-up BH is starved in iohandler_ctx, qmp_dispatcher_co never reaches\ntermination, resulting in a deadlock.\n\nThe execution sequence is illustrated below:\n\n IO Thread                 Main Thread (qemu_aio_context)        qmp_dispatcher_co (iohandler_ctx)\n    |                                 |                                        |\n    |-- query-commands                |                                        |\n    |-- qmp_dispatcher_co_wake()      |                                        |\n    |    (sets busy = true)           |                                        |\n    |                                 |   <-- Wakes up in iohandler_ctx -->    |\n    |                                 |                                        |-- qmp_dispatch()\n    |                                 |                                        |-- Schedules BH (do_qmp_dispatch_bh)\n    |                                 |                                        |-- qemu_coroutine_yield()\n    |                                 |                                            [State: Suspended, busy=true]\n    |   [ quit triggered ]            |\n    |                                 |-- monitor_cleanup()\n    |                                 |-- qmp_dispatcher_co_shutdown = true\n    |                                 |-- qmp_dispatcher_co_wake()\n    |                                 |    -> Checks busy flag. 
It's TRUE!\n    |                                 |    -> Skips aio_co_wake().\n    |                                 |\n    |                                 |-- AIO_WAIT_WHILE_UNLOCKED:\n    |                                 |   |-- aio_poll(iohandler_ctx, false)\n    |                                 |   |    -> Executes do_qmp_dispatch_bh\n    |                                 |   |    -> Schedules 'co_schedule_bh' in iohandler_ctx\n    |                                 |   |\n    |                                 |   |-- aio_poll(qemu_aio_context, true)\n    |                                 |   |    -> Blocks indefinitely! (Deadlock)\n    |                                 |\n    |                                 X (Main thread sleeping)                 X (Waiting for next iohandler_ctx poll)\n\nTo fix this, we add an explicit aio_wait_kick() in do_qmp_dispatch_bh()\nto break the main loop out of its blocking poll, allowing it to evaluate\nthe loop condition and poll iohandler_ctx.\n\nSuggested-by: Kevin Wolf <kwolf@redhat.com>\nSigned-off-by: hongmianquan <hongmianquan@bytedance.com>\nSigned-off-by: wubo.bob <wubo.bob@bytedance.com>\nMessage-ID: <20260327131024.51947-1-hongmianquan@bytedance.com>\nAcked-by: Markus Armbruster <armbru@redhat.com>\nReviewed-by: Kevin Wolf <kwolf@redhat.com>\nSigned-off-by: Kevin Wolf <kwolf@redhat.com>\n---\n qapi/qmp-dispatch.c | 10 ++++++++++\n 1 file changed, 10 insertions(+)",
    "diff": "diff --git a/qapi/qmp-dispatch.c b/qapi/qmp-dispatch.c\nindex 9bb1e6a9f4a..e3897d51977 100644\n--- a/qapi/qmp-dispatch.c\n+++ b/qapi/qmp-dispatch.c\n@@ -128,6 +128,16 @@ static void do_qmp_dispatch_bh(void *opaque)\n     data->cmd->fn(data->args, data->ret, data->errp);\n     monitor_set_cur(qemu_coroutine_self(), NULL);\n     aio_co_wake(data->co);\n+\n+    /*\n+     * If the QMP dispatcher coroutine is waiting to be scheduled\n+     * in iohandler_ctx, we must kick the main loop. This ensures\n+     * that AIO_WAIT_WHILE_UNLOCKED() in monitor_cleanup() doesn't\n+     * block indefinitely waiting for an event in qemu_aio_context,\n+     * but actually gets the chance to poll iohandler_ctx and resume\n+     * the coroutine.\n+     */\n+    aio_wait_kick();\n }\n \n /*\n",
    "prefixes": [
        "PULL",
        "4/6"
    ]
}
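The fix in the diff is a single kick. Its effect can be modeled outside QEMU with a plain condition variable: the waiter plays the role of AIO_WAIT_WHILE_UNLOCKED() in monitor_cleanup(), and the notify plays the role of aio_wait_kick() in do_qmp_dispatch_bh(). All names in this sketch are illustrative; none of it is QEMU code.

```python
import threading

# Illustrative model, not QEMU code: "done" stands for the dispatcher
# coroutine having terminated; cond.notify() stands for aio_wait_kick().
done = False
cond = threading.Condition()

def do_dispatch_bh():
    """Plays the role of do_qmp_dispatch_bh(): finish the command, then
    kick the waiter so it re-evaluates its condition instead of sleeping
    in its blocking poll forever."""
    global done
    with cond:
        done = True
        cond.notify()  # without this kick, the waiter below never wakes

threading.Thread(target=do_dispatch_bh).start()

# Plays the role of AIO_WAIT_WHILE_UNLOCKED() in monitor_cleanup():
# re-check the exit condition each time we are kicked.
with cond:
    while not done:
        cond.wait()

print("cleanup finished")
```

The analogy captures the essential contract: whoever makes the waited-on condition true must also wake the waiter, or the waiter's blocking wait never gets a chance to re-evaluate the condition.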