{"id":2224899,"url":"http://patchwork.ozlabs.org/api/1.1/patches/2224899/?format=json","web_url":"http://patchwork.ozlabs.org/project/qemu-devel/patch/20260419130139.15554-7-alexander@mihalicyn.com/","project":{"id":14,"url":"http://patchwork.ozlabs.org/api/1.1/projects/14/?format=json","name":"QEMU Development","link_name":"qemu-devel","list_id":"qemu-devel.nongnu.org","list_email":"qemu-devel@nongnu.org","web_url":"","scm_url":"","webscm_url":""},"msgid":"<20260419130139.15554-7-alexander@mihalicyn.com>","date":"2026-04-19T13:01:37","name":"[v6,6/8] hw/nvme: add basic live migration support","commit_ref":null,"pull_url":null,"state":"new","archived":false,"hash":"16a556b6d4c551a3dd168aa760f1f922f8b5c279","submitter":{"id":81630,"url":"http://patchwork.ozlabs.org/api/1.1/people/81630/?format=json","name":"Alexander Mikhalitsyn","email":"alexander@mihalicyn.com"},"delegate":null,"mbox":"http://patchwork.ozlabs.org/project/qemu-devel/patch/20260419130139.15554-7-alexander@mihalicyn.com/mbox/","series":[{"id":500500,"url":"http://patchwork.ozlabs.org/api/1.1/series/500500/?format=json","web_url":"http://patchwork.ozlabs.org/project/qemu-devel/list/?series=500500","date":"2026-04-19T13:01:32","name":"hw/nvme: add basic live migration support","version":6,"mbox":"http://patchwork.ozlabs.org/series/500500/mbox/"}],"comments":"http://patchwork.ozlabs.org/api/patches/2224899/comments/","check":"pending","checks":"http://patchwork.ozlabs.org/api/patches/2224899/checks/","tags":{},"headers":{"Return-Path":"<qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org>","X-Original-To":"incoming@patchwork.ozlabs.org","Delivered-To":"patchwork-incoming@legolas.ozlabs.org","Authentication-Results":["legolas.ozlabs.org;\n\tdkim=pass (1024-bit key;\n secure) header.d=mihalicyn.com header.i=@mihalicyn.com header.a=rsa-sha256\n header.s=mihalicyn header.b=DdiLzh1x;\n\tdkim-atps=neutral","legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=nongnu.org\n 
(client-ip=209.51.188.17; helo=lists1p.gnu.org;\n envelope-from=qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org;\n receiver=patchwork.ozlabs.org)"],"Received":["from lists1p.gnu.org (lists1p.gnu.org [209.51.188.17])\n\t(using TLSv1.2 with cipher ECDHE-ECDSA-AES256-GCM-SHA384 (256/256 bits))\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4fz82M1bqtz1yHk\n\tfor <incoming@patchwork.ozlabs.org>; Sun, 19 Apr 2026 23:03:55 +1000 (AEST)","from localhost ([::1] helo=lists1p.gnu.org)\n\tby lists1p.gnu.org with esmtp (Exim 4.90_1)\n\t(envelope-from <qemu-devel-bounces@nongnu.org>)\n\tid 1wERnE-00014v-P3; Sun, 19 Apr 2026 09:02:32 -0400","from eggs.gnu.org ([2001:470:142:3::10])\n by lists1p.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256)\n (Exim 4.90_1) (envelope-from <alexander@mihalicyn.com>)\n id 1wERmm-0000uJ-Gd\n for qemu-devel@nongnu.org; Sun, 19 Apr 2026 09:02:05 -0400","from mail-wr1-x435.google.com ([2a00:1450:4864:20::435])\n by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)\n (Exim 4.90_1) (envelope-from <alexander@mihalicyn.com>)\n id 1wERme-0008K1-N7\n for qemu-devel@nongnu.org; Sun, 19 Apr 2026 09:02:04 -0400","by mail-wr1-x435.google.com with SMTP id\n ffacd0b85a97d-43d7605ec91so1874561f8f.3\n for <qemu-devel@nongnu.org>; Sun, 19 Apr 2026 06:01:50 -0700 (PDT)","from alex-laptop.lan\n (p200300cf57228c00995e4e0d3496e07b.dip0.t-ipconnect.de.\n [2003:cf:5722:8c00:995e:4e0d:3496:e07b])\n by smtp.gmail.com with ESMTPSA id\n ffacd0b85a97d-43fe4cc2cacsm20734304f8f.13.2026.04.19.06.01.47\n (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);\n Sun, 19 Apr 2026 06:01:48 -0700 (PDT)"],"DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n d=mihalicyn.com; s=mihalicyn; t=1776603709; x=1777208509; darn=nongnu.org;\n h=content-transfer-encoding:mime-version:references:in-reply-to\n :message-id:date:subject:cc:to:from:from:to:cc:subject:date\n :message-id:reply-to;\n 
bh=rk0IgYjAfH9Pdowv6rsdSE3Aypa1t/enhNxNQIimAQc=;\n b=DdiLzh1xiVSEVAsjuW6bKo7yW58hBzVBAyLCP6yJDcZ0mVv6pdyOlw+v6b+sdiOEGo\n OlsgBDGTDRzKhh+xJO3enIbOWwh1oh1Z/mNQ8USdDMcKi86yXrBfVTh9r758CvafmP3m\n X6ePF+N8lphtzgAg65NA/RIHpB21SYZ1rPfio=","X-Google-DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n d=1e100.net; s=20251104; t=1776603709; x=1777208509;\n h=content-transfer-encoding:mime-version:references:in-reply-to\n :message-id:date:subject:cc:to:from:x-gm-gg:x-gm-message-state:from\n :to:cc:subject:date:message-id:reply-to;\n bh=rk0IgYjAfH9Pdowv6rsdSE3Aypa1t/enhNxNQIimAQc=;\n b=QitwEh6rPVZwrHoI4fzomLbkKZ5G+1YjWJ7TEC7lYuNN9EKIKsk7HsrmwcnDwaMwS+\n cIYuY+lXJEgqBoV8DaCW9VBF+WVshJyWTdATDMA2Q2fNjE7SfHq9CRlcpQlCFey0HBXd\n x667BpnByKVqrraEeMAdLv1S7R+EksAGx3OnwVQB/z8Q5xxOHHKRGFkc4iEF4lKXqjj0\n v/BgtNTTjOJEa6BsELYHjcnRcKmsdJdWsJ1sZSUUNqIsm7K9WlVpgMErYvbz6ceciWsy\n DPiJjPSTRsjhXr9wjhzPlyvQ8jIq/Ne5EriDvylmRz2EebvZr2R2bkt05BrYZRVkFPlU\n sadQ==","X-Gm-Message-State":"AOJu0YyXW/hNWAD6+jo8nQZEVuui/F889M1hae+Oitn8MyAPT4MkfJH0\n XRJlCktyojFmJCE8x7oNE0TMyDD7efdYrWRb2dR+4/ylu8JiHAJpVwdDGh/PYWREtqGHScyB3id\n lCMxQvBE=","X-Gm-Gg":"AeBDietlo01IGXki8amoSgSal6fdbZBBjJdwIKcjuPIV32ddmIH3HMUG7IK2qVk6NpN\n dQGK4S0WGJVvLuw3xm6kXJOykcyMm6BL3F+o+4bIJK7YzxcAWO+0ydacaK9irLG6N7/maE8z4IY\n 1OCwofHKLMs/4G2+6UPbokuxNopCb0kmnvGP00ouIlDTyUTZTvvkKHpueWVqkFM8kdrSXbb/nyj\n i7A2lxa+z9XmdlrHOX7CHjSelzvsaev6WUPQk4be+LrO6eUDdBU+fAFD7Kgx5S3XUbT6FqJGP+y\n cEMbCFO5VP2Plm72zh98NX2QEkm9pDSHwyf619zHNY0JJgdDuM9KFp66SAN1N2mLr8v3rm5cazG\n MyhyxtI4t6iBrDWwu9bymip2bXwoasYEu1c7rSbVo7k0oSGLICU818Kc0JxuxjD3nstkyuEatgM\n LrWvvbFoTSJbw2t7GFC+FsRs5sJa/ktTAs2UVHSpUgDTrEQwjs60nts7ZQmQRXHjVDv3dE7MUkf\n zt9tPc866TnkvK5g/9LLMV3/Upgj8RefA==","X-Received":"by 2002:a05:6000:230b:b0:43d:773d:7908 with SMTP id\n ffacd0b85a97d-43fe3e13d14mr15424776f8f.32.1776603708893;\n Sun, 19 Apr 2026 06:01:48 -0700 (PDT)","From":"Alexander Mikhalitsyn <alexander@mihalicyn.com>","To":"qemu-devel@nongnu.org","Cc":"Alexander Mikhalitsyn 
<alexander@mihalicyn.com>,\n Kevin Wolf <kwolf@redhat.com>, qemu-block@nongnu.org,\n Fam Zheng <fam@euphon.net>,\n =?utf-8?q?St=C3=A9phane_Graber?= <stgraber@stgraber.org>, =?utf-8?q?Philipp?=\n\t=?utf-8?q?e_Mathieu-Daud=C3=A9?= <philmd@linaro.org>,\n Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,\n Laurent Vivier <lvivier@redhat.com>, Jesper Devantier <foss@defmacro.it>,\n Klaus Jensen <its@irrelevant.dk>, Fabiano Rosas <farosas@suse.de>,\n Zhao Liu <zhao1.liu@intel.com>, Keith Busch <kbusch@kernel.org>,\n Peter Xu <peterx@redhat.com>, Hanna Reitz <hreitz@redhat.com>,\n Alexander Mikhalitsyn <aleksandr.mikhalitsyn@futurfusion.io>","Subject":"[PATCH v6 6/8] hw/nvme: add basic live migration support","Date":"Sun, 19 Apr 2026 15:01:37 +0200","Message-ID":"<20260419130139.15554-7-alexander@mihalicyn.com>","X-Mailer":"git-send-email 2.47.3","In-Reply-To":"<20260419130139.15554-1-alexander@mihalicyn.com>","References":"<20260419130139.15554-1-alexander@mihalicyn.com>","MIME-Version":"1.0","Content-Transfer-Encoding":"8bit","Received-SPF":"pass client-ip=2a00:1450:4864:20::435;\n envelope-from=alexander@mihalicyn.com; helo=mail-wr1-x435.google.com","X-Spam_score_int":"-20","X-Spam_score":"-2.1","X-Spam_bar":"--","X-Spam_report":"(-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1,\n DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1,\n RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001,\n SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no","X-Spam_action":"no action","X-BeenThere":"qemu-devel@nongnu.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"qemu development <qemu-devel.nongnu.org>","List-Unsubscribe":"<https://lists.nongnu.org/mailman/options/qemu-devel>,\n 
<mailto:qemu-devel-request@nongnu.org?subject=unsubscribe>","List-Archive":"<https://lists.nongnu.org/archive/html/qemu-devel>","List-Post":"<mailto:qemu-devel@nongnu.org>","List-Help":"<mailto:qemu-devel-request@nongnu.org?subject=help>","List-Subscribe":"<https://lists.nongnu.org/mailman/listinfo/qemu-devel>,\n <mailto:qemu-devel-request@nongnu.org?subject=subscribe>","Errors-To":"qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org","Sender":"qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org"},"content":"From: Alexander Mikhalitsyn <aleksandr.mikhalitsyn@futurfusion.io>\n\nAdd basic live migration support to the NVMe controller.\n\nIt has some limitations:\n- only one NVMe namespace is supported\n- SMART counters are not preserved\n- CMB is not supported\n- PMR is not supported\n- SPDM is not supported\n- SR-IOV is not supported\n\nSigned-off-by: Alexander Mikhalitsyn <aleksandr.mikhalitsyn@futurfusion.io>\nv2:\n- AERs are now fully supported\nv6:\n- handle full CQ case\n---\n hw/nvme/ctrl.c       | 722 ++++++++++++++++++++++++++++++++++++++++++-\n hw/nvme/ns.c         | 160 ++++++++++\n hw/nvme/nvme.h       |   9 +\n hw/nvme/trace-events |  10 +\n 4 files changed, 892 insertions(+), 9 deletions(-)","diff":"diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c\nindex 1ff91493b60..5157c7fd5a4 100644\n--- a/hw/nvme/ctrl.c\n+++ b/hw/nvme/ctrl.c\n@@ -208,6 +208,7 @@\n #include \"hw/pci/pcie_sriov.h\"\n #include \"system/spdm-socket.h\"\n #include \"migration/blocker.h\"\n+#include \"migration/qemu-file-types.h\"\n #include \"migration/vmstate.h\"\n \n #include \"nvme.h\"\n@@ -1518,6 +1519,18 @@ static void nvme_post_cqes(void *opaque)\n             break;\n         }\n \n+        /*\n+         * Here we take the following fields from the NvmeRequest structure\n+         * and write the CQE to guest RAM based on them:\n+         * - req->sq\n+         * - req->status\n+         * - req->cqe\n+         *\n+         * If you change this code and more fields from NvmeRequest are\n+         * used, please make sure that you 
have handled this in:\n+         * nvme_vmstate_request and nvme_ctrl_pre_save().\n+         */\n+\n         sq = req->sq;\n         req->cqe.status = cpu_to_le16((req->status << 1) | cq->phase);\n         req->cqe.sq_head = cpu_to_le16(sq->head);\n@@ -4903,6 +4916,25 @@ static void nvme_init_sq(NvmeSQueue *sq, NvmeCtrl *n, uint64_t dma_addr,\n     __nvme_init_sq(sq);\n }\n \n+static void nvme_restore_sq(NvmeSQueue *sq_from)\n+{\n+    NvmeCtrl *n = sq_from->ctrl;\n+    NvmeSQueue *sq = sq_from;\n+\n+    if (sq_from->sqid == 0) {\n+        sq = &n->admin_sq;\n+        sq->ctrl = n;\n+        sq->dma_addr = sq_from->dma_addr;\n+        sq->sqid = sq_from->sqid;\n+        sq->size = sq_from->size;\n+        sq->cqid = sq_from->cqid;\n+        sq->head = sq_from->head;\n+        sq->tail = sq_from->tail;\n+    }\n+\n+    __nvme_init_sq(sq);\n+}\n+\n static uint16_t nvme_create_sq(NvmeCtrl *n, NvmeRequest *req)\n {\n     NvmeSQueue *sq;\n@@ -5605,6 +5637,39 @@ static void nvme_init_cq(NvmeCQueue *cq, NvmeCtrl *n, uint64_t dma_addr,\n     __nvme_init_cq(cq);\n }\n \n+static void copy_cq_req_list(NvmeCQueue *cq_to, NvmeCQueue *cq_from)\n+{\n+    NvmeRequest *req, *next;\n+\n+    QTAILQ_FOREACH_SAFE(req, &cq_from->req_list, entry, next) {\n+        QTAILQ_REMOVE(&cq_from->req_list, req, entry);\n+        QTAILQ_INSERT_TAIL(&cq_to->req_list, req, entry);\n+    }\n+}\n+\n+static void nvme_restore_cq(NvmeCQueue *cq_from)\n+{\n+    NvmeCtrl *n = cq_from->ctrl;\n+    NvmeCQueue *cq = cq_from;\n+\n+    if (cq_from->cqid == 0) {\n+        cq = &n->admin_cq;\n+        cq->ctrl = n;\n+        cq->cqid = cq_from->cqid;\n+        cq->size = cq_from->size;\n+        cq->dma_addr = cq_from->dma_addr;\n+        cq->phase = cq_from->phase;\n+        cq->irq_enabled = cq_from->irq_enabled;\n+        cq->vector = cq_from->vector;\n+        cq->head = cq_from->head;\n+        cq->tail = cq_from->tail;\n+        QTAILQ_INIT(&cq->req_list);\n+        copy_cq_req_list(cq, cq_from);\n+    
}\n+\n+    __nvme_init_cq(cq);\n+}\n+\n static uint16_t nvme_create_cq(NvmeCtrl *n, NvmeRequest *req)\n {\n     NvmeCQueue *cq;\n@@ -7293,7 +7358,7 @@ static uint16_t nvme_dbbuf_config(NvmeCtrl *n, const NvmeRequest *req)\n     n->dbbuf_eis = eis_addr;\n     n->dbbuf_enabled = true;\n \n-    for (i = 0; i < n->params.max_ioqpairs + 1; i++) {\n+    for (i = 0; i < n->num_queues; i++) {\n         NvmeSQueue *sq = n->sq[i];\n         NvmeCQueue *cq = n->cq[i];\n \n@@ -7737,7 +7802,7 @@ static int nvme_atomic_write_check(NvmeCtrl *n, NvmeCmd *cmd,\n     /*\n      * Walk the queues to see if there are any atomic conflicts.\n      */\n-    for (i = 1; i < n->params.max_ioqpairs + 1; i++) {\n+    for (i = 1; i < n->num_queues; i++) {\n         NvmeSQueue *sq;\n         NvmeRequest *req;\n         NvmeRwCmd *req_rw;\n@@ -7807,6 +7872,12 @@ static void nvme_process_sq(void *opaque)\n     NvmeCmd cmd;\n     NvmeRequest *req;\n \n+    /*\n+     * We don't want to have a race with nvme_ctrl_pre_save().\n+     * What implicitly protects us from this is BQL.\n+     */\n+    assert(bql_locked());\n+\n     if (n->dbbuf_enabled) {\n         nvme_update_sq_tail(sq);\n     }\n@@ -7924,12 +7995,12 @@ static void nvme_ctrl_reset(NvmeCtrl *n, NvmeResetType rst)\n         nvme_ns_drain(ns);\n     }\n \n-    for (i = 0; i < n->params.max_ioqpairs + 1; i++) {\n+    for (i = 0; i < n->num_queues; i++) {\n         if (n->sq[i] != NULL) {\n             nvme_free_sq(n->sq[i], n);\n         }\n     }\n-    for (i = 0; i < n->params.max_ioqpairs + 1; i++) {\n+    for (i = 0; i < n->num_queues; i++) {\n         if (n->cq[i] != NULL) {\n             nvme_free_cq(n->cq[i], n);\n         }\n@@ -8599,6 +8670,8 @@ static bool nvme_check_params(NvmeCtrl *n, Error **errp)\n         params->max_ioqpairs = params->num_queues - 1;\n     }\n \n+    n->num_queues = params->max_ioqpairs + 1;\n+\n     if (n->namespace.blkconf.blk && n->subsys) {\n         error_setg(errp, \"subsystem support is unavailable 
with legacy \"\n                    \"namespace ('drive' property)\");\n@@ -8753,8 +8826,8 @@ static void nvme_init_state(NvmeCtrl *n)\n         n->conf_msix_qsize = n->params.msix_qsize;\n     }\n \n-    n->sq = g_new0(NvmeSQueue *, n->params.max_ioqpairs + 1);\n-    n->cq = g_new0(NvmeCQueue *, n->params.max_ioqpairs + 1);\n+    n->sq = g_new0(NvmeSQueue *, n->num_queues);\n+    n->cq = g_new0(NvmeCQueue *, n->num_queues);\n     n->temperature = NVME_TEMPERATURE;\n     n->features.temp_thresh_hi = NVME_TEMPERATURE_WARNING;\n     n->starttime_ms = qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL);\n@@ -8989,7 +9062,7 @@ static bool nvme_init_pci(NvmeCtrl *n, PCIDevice *pci_dev, Error **errp)\n     }\n \n     if (n->params.msix_exclusive_bar && !pci_is_vf(pci_dev)) {\n-        bar_size = nvme_mbar_size(n->params.max_ioqpairs + 1, 0, NULL, NULL);\n+        bar_size = nvme_mbar_size(n->num_queues, 0, NULL, NULL);\n         memory_region_init_io(&n->iomem, OBJECT(n), &nvme_mmio_ops, n, \"nvme\",\n                               bar_size);\n         pci_register_bar(pci_dev, 0, PCI_BASE_ADDRESS_SPACE_MEMORY |\n@@ -9001,7 +9074,7 @@ static bool nvme_init_pci(NvmeCtrl *n, PCIDevice *pci_dev, Error **errp)\n         /* add one to max_ioqpairs to account for the admin queue pair */\n         if (!pci_is_vf(pci_dev)) {\n             nr_vectors = n->params.msix_qsize;\n-            bar_size = nvme_mbar_size(n->params.max_ioqpairs + 1,\n+            bar_size = nvme_mbar_size(n->num_queues,\n                                       nr_vectors, &msix_table_offset,\n                                       &msix_pba_offset);\n         } else {\n@@ -9724,9 +9797,640 @@ static uint32_t nvme_pci_read_config(PCIDevice *dev, uint32_t address, int len)\n     return pci_default_read_config(dev, address, len);\n }\n \n+static const VMStateDescription nvme_vmstate_cqe = {\n+    .name = \"nvme-cqe\",\n+    .version_id = 1,\n+    .minimum_version_id = 1,\n+    .fields = (const VMStateField[]) {\n+        
VMSTATE_UINT32(result, NvmeCqe),\n+        VMSTATE_UINT32(dw1, NvmeCqe),\n+        VMSTATE_UINT16(sq_head, NvmeCqe),\n+        VMSTATE_UINT16(sq_id, NvmeCqe),\n+        VMSTATE_UINT16(cid, NvmeCqe),\n+        VMSTATE_UINT16(status, NvmeCqe),\n+        VMSTATE_END_OF_LIST()\n+    }\n+};\n+\n+static const VMStateDescription nvme_vmstate_cmd_dptr_sgl = {\n+    .name = \"nvme-request-cmd-dptr-sgl\",\n+    .version_id = 1,\n+    .minimum_version_id = 1,\n+    .fields = (const VMStateField[]) {\n+        VMSTATE_UINT64(addr, NvmeSglDescriptor),\n+        VMSTATE_UINT32(len, NvmeSglDescriptor),\n+        VMSTATE_UINT8_ARRAY(rsvd, NvmeSglDescriptor, 3),\n+        VMSTATE_UINT8(type, NvmeSglDescriptor),\n+        VMSTATE_END_OF_LIST()\n+    }\n+};\n+\n+static const VMStateDescription nvme_vmstate_cmd_dptr = {\n+    .name = \"nvme-request-cmd-dptr\",\n+    .version_id = 1,\n+    .minimum_version_id = 1,\n+    .fields = (const VMStateField[]) {\n+        VMSTATE_UINT64(prp1, NvmeCmdDptr),\n+        VMSTATE_UINT64(prp2, NvmeCmdDptr),\n+        VMSTATE_STRUCT(sgl, NvmeCmdDptr, 0, nvme_vmstate_cmd_dptr_sgl, NvmeSglDescriptor),\n+        VMSTATE_END_OF_LIST()\n+    }\n+};\n+\n+static const VMStateDescription nvme_vmstate_cmd = {\n+    .name = \"nvme-request-cmd\",\n+    .version_id = 1,\n+    .minimum_version_id = 1,\n+    .fields = (const VMStateField[]) {\n+        VMSTATE_UINT8(opcode, NvmeCmd),\n+        VMSTATE_UINT8(flags, NvmeCmd),\n+        VMSTATE_UINT16(cid, NvmeCmd),\n+        VMSTATE_UINT32(nsid, NvmeCmd),\n+        VMSTATE_UINT64(res1, NvmeCmd),\n+        VMSTATE_UINT64(mptr, NvmeCmd),\n+        VMSTATE_STRUCT(dptr, NvmeCmd, 0, nvme_vmstate_cmd_dptr, NvmeCmdDptr),\n+        VMSTATE_UINT32(cdw10, NvmeCmd),\n+        VMSTATE_UINT32(cdw11, NvmeCmd),\n+        VMSTATE_UINT32(cdw12, NvmeCmd),\n+        VMSTATE_UINT32(cdw13, NvmeCmd),\n+        VMSTATE_UINT32(cdw14, NvmeCmd),\n+        VMSTATE_UINT32(cdw15, NvmeCmd),\n+        VMSTATE_END_OF_LIST()\n+    }\n+};\n+\n+static 
bool nvme_req_pre_load(void *opaque, Error **errp)\n+{\n+    memset(opaque, 0x0, sizeof(NvmeRequest));\n+    return true;\n+}\n+\n+static const VMStateDescription nvme_vmstate_request = {\n+    .name = \"nvme-request\",\n+    .version_id = 1,\n+    .minimum_version_id = 1,\n+    .pre_load_errp = nvme_req_pre_load,\n+    .fields = (const VMStateField[]) {\n+        VMSTATE_UINT16(status, NvmeRequest),\n+        VMSTATE_STRUCT(cqe, NvmeRequest, 0, nvme_vmstate_cqe, NvmeCqe),\n+        VMSTATE_STRUCT(cmd, NvmeRequest, 0, nvme_vmstate_cmd, NvmeCmd),\n+        VMSTATE_END_OF_LIST()\n+    }\n+};\n+\n+static const VMStateDescription nvme_vmstate_bar = {\n+    .name = \"nvme-bar\",\n+    .minimum_version_id = 1,\n+    .version_id = 1,\n+    .fields = (const VMStateField[]) {\n+        VMSTATE_UINT64(cap, NvmeBar),\n+        VMSTATE_UINT32(vs, NvmeBar),\n+        VMSTATE_UINT32(intms, NvmeBar),\n+        VMSTATE_UINT32(intmc, NvmeBar),\n+        VMSTATE_UINT32(cc, NvmeBar),\n+        VMSTATE_UINT8_ARRAY(rsvd24, NvmeBar, 4),\n+        VMSTATE_UINT32(csts, NvmeBar),\n+        VMSTATE_UINT32(nssr, NvmeBar),\n+        VMSTATE_UINT32(aqa, NvmeBar),\n+        VMSTATE_UINT64(asq, NvmeBar),\n+        VMSTATE_UINT64(acq, NvmeBar),\n+        VMSTATE_UINT32(cmbloc, NvmeBar),\n+        VMSTATE_UINT32(cmbsz, NvmeBar),\n+        VMSTATE_UINT32(bpinfo, NvmeBar),\n+        VMSTATE_UINT32(bprsel, NvmeBar),\n+        VMSTATE_UINT64(bpmbl, NvmeBar),\n+        VMSTATE_UINT64(cmbmsc, NvmeBar),\n+        VMSTATE_UINT32(cmbsts, NvmeBar),\n+        VMSTATE_UINT8_ARRAY(rsvd92, NvmeBar, 3492),\n+        VMSTATE_UINT32(pmrcap, NvmeBar),\n+        VMSTATE_UINT32(pmrctl, NvmeBar),\n+        VMSTATE_UINT32(pmrsts, NvmeBar),\n+        VMSTATE_UINT32(pmrebs, NvmeBar),\n+        VMSTATE_UINT32(pmrswtp, NvmeBar),\n+        VMSTATE_UINT32(pmrmscl, NvmeBar),\n+        VMSTATE_UINT32(pmrmscu, NvmeBar),\n+        VMSTATE_UINT8_ARRAY(css, NvmeBar, 484),\n+        VMSTATE_END_OF_LIST()\n+    },\n+};\n+\n+static 
bool nvme_cqueue_pre_load(void *opaque, Error **errp)\n+{\n+    NvmeCQueue *cq = opaque;\n+\n+    QTAILQ_INIT(&cq->req_list);\n+    return true;\n+}\n+\n+static const VMStateDescription nvme_vmstate_cqueue = {\n+    .name = \"nvme-cq\",\n+    .version_id = 1,\n+    .minimum_version_id = 1,\n+    .pre_load_errp = nvme_cqueue_pre_load,\n+    .fields = (const VMStateField[]) {\n+        VMSTATE_UINT8(phase, NvmeCQueue),\n+        VMSTATE_UINT16(cqid, NvmeCQueue),\n+        VMSTATE_UINT16(irq_enabled, NvmeCQueue),\n+        VMSTATE_UINT32(head, NvmeCQueue),\n+        VMSTATE_UINT32(tail, NvmeCQueue),\n+        VMSTATE_UINT32(vector, NvmeCQueue),\n+        VMSTATE_UINT32(size, NvmeCQueue),\n+        VMSTATE_UINT64(dma_addr, NvmeCQueue),\n+\n+        VMSTATE_QTAILQ_V(req_list, NvmeCQueue, 1, nvme_vmstate_request,\n+                         NvmeRequest, entry),\n+\n+        /* db_addr, ei_addr, etc will be recalculated */\n+        VMSTATE_END_OF_LIST()\n+    }\n+};\n+\n+static const VMStateDescription nvme_vmstate_squeue = {\n+    .name = \"nvme-sq\",\n+    .version_id = 1,\n+    .minimum_version_id = 1,\n+    .fields = (const VMStateField[]) {\n+        VMSTATE_UINT16(sqid, NvmeSQueue),\n+        VMSTATE_UINT16(cqid, NvmeSQueue),\n+        VMSTATE_UINT32(head, NvmeSQueue),\n+        VMSTATE_UINT32(tail, NvmeSQueue),\n+        VMSTATE_UINT32(size, NvmeSQueue),\n+        VMSTATE_UINT64(dma_addr, NvmeSQueue),\n+        /* db_addr, ei_addr, etc will be recalculated */\n+        VMSTATE_END_OF_LIST()\n+    }\n+};\n+\n+static const VMStateDescription nvme_vmstate_async_event_result = {\n+    .name = \"nvme-async-event-result\",\n+    .version_id = 1,\n+    .minimum_version_id = 1,\n+    .fields = (const VMStateField[]) {\n+        VMSTATE_UINT8(event_type, NvmeAerResult),\n+        VMSTATE_UINT8(event_info, NvmeAerResult),\n+        VMSTATE_UINT8(log_page, NvmeAerResult),\n+        VMSTATE_UINT8(resv, NvmeAerResult),\n+        VMSTATE_END_OF_LIST()\n+    }\n+};\n+\n+static 
const VMStateDescription nvme_vmstate_async_event = {\n+    .name = \"nvme-async-event\",\n+    .version_id = 1,\n+    .minimum_version_id = 1,\n+    .fields = (const VMStateField[]) {\n+        VMSTATE_STRUCT(result, NvmeAsyncEvent, 0, nvme_vmstate_async_event_result, NvmeAerResult),\n+        VMSTATE_END_OF_LIST()\n+    }\n+};\n+\n+static const VMStateDescription nvme_vmstate_hbs = {\n+    .name = \"nvme-hbs\",\n+    .version_id = 1,\n+    .minimum_version_id = 1,\n+    .fields = (const VMStateField[]) {\n+        VMSTATE_UINT8(acre, NvmeHostBehaviorSupport),\n+        VMSTATE_UINT8(etdas, NvmeHostBehaviorSupport),\n+        VMSTATE_UINT8(lbafee, NvmeHostBehaviorSupport),\n+        VMSTATE_UINT8(rsvd3, NvmeHostBehaviorSupport),\n+        VMSTATE_UINT16(cdfe, NvmeHostBehaviorSupport),\n+        VMSTATE_UINT8_ARRAY(rsvd6, NvmeHostBehaviorSupport, 506),\n+        VMSTATE_END_OF_LIST()\n+    }\n+};\n+\n+const VMStateDescription nvme_vmstate_atomic = {\n+    .name = \"nvme-atomic\",\n+    .version_id = 1,\n+    .minimum_version_id = 1,\n+    .fields = (const VMStateField[]) {\n+        VMSTATE_UINT32(atomic_max_write_size, NvmeAtomic),\n+        VMSTATE_UINT64(atomic_boundary, NvmeAtomic),\n+        VMSTATE_UINT64(atomic_nabo, NvmeAtomic),\n+        VMSTATE_BOOL(atomic_writes, NvmeAtomic),\n+        VMSTATE_END_OF_LIST()\n+    }\n+};\n+\n+static bool pre_save_validate_aer_req(NvmeRequest *req, Error **errp)\n+{\n+    /*\n+     * Can't use assert() here, because we don't want\n+     * to just crash QEMU when the user requests a migration.\n+     */\n+    if (!(req->cmd.opcode == NVME_ADM_CMD_ASYNC_EV_REQ)) {\n+        error_setg(errp, \"req->cmd.opcode (%u) != NVME_ADM_CMD_ASYNC_EV_REQ\", req->cmd.opcode);\n+        return false;\n+    }\n+\n+    if (!(req->ns == NULL)) {\n+        error_setg(errp, \"req->ns != NULL\");\n+        return false;\n+    }\n+\n+    if (!(req->sq == &req->sq->ctrl->admin_sq)) {\n+        error_setg(errp, \"req->sq != 
&req->sq->ctrl->admin_sq\");\n+        return false;\n+    }\n+\n+    if (!(req->aiocb == NULL)) {\n+        error_setg(errp, \"req->aiocb != NULL\");\n+        return false;\n+    }\n+\n+    if (!(req->opaque == NULL)) {\n+        error_setg(errp, \"req->opaque != NULL\");\n+        return false;\n+    }\n+\n+    if (!(req->atomic_write == false)) {\n+        error_setg(errp, \"req->atomic_write != false\");\n+        return false;\n+    }\n+\n+    if (req->sg.flags & NVME_SG_ALLOC) {\n+        error_setg(errp, \"unexpected NVME_SG_ALLOC flag in req->sg.flags\");\n+        return false;\n+    }\n+\n+    return true;\n+}\n+\n+static bool pre_save_validate_cq_req(NvmeRequest *req, Error **errp)\n+{\n+    if (!(req->ns == NULL)) {\n+        error_setg(errp, \"req->ns != NULL\");\n+        return false;\n+    }\n+\n+    if (!(req->aiocb == NULL)) {\n+        error_setg(errp, \"req->aiocb != NULL\");\n+        return false;\n+    }\n+\n+    if (!(req->opaque == NULL)) {\n+        error_setg(errp, \"req->opaque != NULL\");\n+        return false;\n+    }\n+\n+    if (!(req->atomic_write == false)) {\n+        error_setg(errp, \"req->atomic_write != false\");\n+        return false;\n+    }\n+\n+    if (req->sg.flags & NVME_SG_ALLOC) {\n+        error_setg(errp, \"unexpected NVME_SG_ALLOC flag in req->sg.flags\");\n+        return false;\n+    }\n+\n+    return true;\n+}\n+\n+static bool nvme_ctrl_pre_save(void *opaque, Error **errp)\n+{\n+    NvmeCtrl *n = opaque;\n+    int i;\n+\n+    trace_pci_nvme_pre_save_enter(n);\n+\n+    /*\n+     * We don't want to have a race with nvme_process_sq().\n+     * What implicitly protects us from this is BQL.\n+     */\n+    assert(bql_locked());\n+\n+    /* cancel all SQ processing BHs */\n+    for (i = 0; i < n->num_queues; i++) {\n+        NvmeSQueue *sq = n->sq[i];\n+\n+        if (!sq)\n+            continue;\n+\n+        qemu_bh_cancel(sq->bh);\n+    }\n+\n+    /* drain all IO */\n+    for (i = 1; i <= NVME_MAX_NAMESPACES; i++) 
{\n+        NvmeNamespace *ns;\n+\n+        ns = nvme_ns(n, i);\n+        if (!ns) {\n+            continue;\n+        }\n+\n+        trace_pci_nvme_pre_save_ns_drain(n, i);\n+        nvme_ns_drain(ns);\n+    }\n+\n+    /*\n+     * Now, we should take care of AERs.\n+     *\n+     * 1. Save all queued events (n->aer_queue).\n+     *    This is done automatically, see nvme_vmstate VMStateDescription.\n+     *    Here we only need to print them for debugging purposes.\n+     * 2. Go over outstanding AER requests (n->aer_reqs) and check that they\n+     *    all have the expected opcode (NVME_ADM_CMD_ASYNC_EV_REQ) and other fields.\n+     *\n+     * We must be really careful here, because further QEMU NVMe changes\n+     * may break migration without us noticing it or, worse, introduce silent\n+     * data corruption during migration.\n+     */\n+    if (n->aer_queued) {\n+        NvmeAsyncEvent *event;\n+\n+        QTAILQ_FOREACH(event, &n->aer_queue, entry) {\n+            trace_pci_nvme_pre_save_aer(event->result.event_type, event->result.event_info,\n+                                        event->result.log_page);\n+        }\n+    }\n+\n+    for (i = 0; i < n->outstanding_aers; i++) {\n+        NvmeRequest *req = n->aer_reqs[i];\n+\n+        if (!pre_save_validate_aer_req(req, errp)) {\n+            return false;\n+        }\n+    }\n+\n+    /* make sure that all in-flight IO requests (except NVME_ADM_CMD_ASYNC_EV_REQ) are processed */\n+    for (i = 0; i < n->num_queues; i++) {\n+        NvmeRequest *req;\n+        NvmeSQueue *sq = n->sq[i];\n+\n+        if (!sq)\n+            continue;\n+\n+        trace_pci_nvme_pre_save_sq_out_req_check(n, i, sq->head, sq->tail, sq->size);\n+\n+        QTAILQ_FOREACH(req, &sq->out_req_list, entry) {\n+            assert(req->cmd.opcode == NVME_ADM_CMD_ASYNC_EV_REQ);\n+        }\n+    }\n+\n+    /* wait until all IO request completions are written to guest memory */\n+    for (i = 0; i < n->num_queues; i++) {\n+        
NvmeCQueue *cq = n->cq[i];\n+\n+        if (!cq)\n+            continue;\n+\n+        qemu_bh_cancel(cq->bh);\n+        /* this should empty cq->req_list unless CQ is full */\n+        nvme_post_cqes(cq);\n+\n+        trace_pci_nvme_pre_save_cq_req_check(n, i, cq->head, cq->tail, cq->size);\n+\n+        if (!QTAILQ_EMPTY(&cq->req_list)) {\n+            NvmeRequest *req;\n+\n+            assert(nvme_cq_full(cq));\n+\n+            QTAILQ_FOREACH(req, &cq->req_list, entry) {\n+                trace_pci_nvme_pre_save_cq_unposted_cqe(n, i, nvme_cid(req),\n+                                                        nvme_nsid(req->ns),\n+                                                        le32_to_cpu(req->cqe.result),\n+                                                        le32_to_cpu(req->cqe.dw1),\n+                                                        req->status, req->cmd.opcode);\n+                if (!pre_save_validate_cq_req(req, errp)) {\n+                    return false;\n+                }\n+            }\n+        }\n+    }\n+\n+    for (uint32_t nsid = 0; nsid <= NVME_MAX_NAMESPACES; nsid++) {\n+        NvmeNamespace *ns = n->namespaces[nsid];\n+\n+        if (!ns)\n+            continue;\n+\n+        if (ns != &n->namespace) {\n+            error_setg(errp, \"only one NVMe namespace is supported for migration\");\n+            return false;\n+        }\n+    }\n+\n+    return true;\n+}\n+\n+static bool nvme_ctrl_post_load(void *opaque, int version_id, Error **errp)\n+{\n+    NvmeCtrl *n = opaque;\n+    int i;\n+\n+    trace_pci_nvme_post_load_enter(n);\n+\n+    /* restore CQs first */\n+    for (i = 0; i < n->num_queues; i++) {\n+        NvmeCQueue *cq = n->cq[i];\n+\n+        if (!cq)\n+            continue;\n+\n+        cq->ctrl = n;\n+        nvme_restore_cq(cq);\n+        trace_pci_nvme_post_load_restore_cq(n, i, cq->head, cq->tail, cq->size);\n+\n+        if (i == 0) {\n+            /*\n+             * Admin CQ lives in n->admin_cq, we don't 
need\n+             * memory allocated for it in get_ptrs_array_entry() anymore.\n+             *\n+             * nvme_restore_cq() also takes care of:\n+             * n->cq[0] = &n->admin_cq;\n+             * so n->cq[0] remains valid.\n+             */\n+            g_free(cq);\n+        }\n+    }\n+\n+    for (i = 0; i < n->num_queues; i++) {\n+        NvmeSQueue *sq = n->sq[i];\n+\n+        if (!sq)\n+            continue;\n+\n+        sq->ctrl = n;\n+        nvme_restore_sq(sq);\n+        trace_pci_nvme_post_load_restore_sq(n, i, sq->head, sq->tail, sq->size);\n+\n+        if (i == 0) {\n+            /* same as for CQ */\n+            g_free(sq);\n+        }\n+    }\n+\n+    /* restore cq->req_list-s */\n+    for (i = 0; i < n->num_queues; i++) {\n+        NvmeRequest *req_from, *next;\n+        typeof_field(NvmeCQueue, req_list) req_list;\n+        NvmeCQueue *cq = n->cq[i];\n+\n+        if (!cq || QTAILQ_EMPTY(&cq->req_list))\n+            continue;\n+\n+        /*\n+         * We use nvme_vmstate_request VMStateDescription to save/restore\n+         * NvmeRequest structures, but the tricky thing here is that\n+         * memory for each cq->req_list item is allocated separately\n+         * during restore. That doesn't work for us. 
We need to take\n+         * an existing NvmeRequest structure from SQ's req_list pool\n+         * and fill it with data from the newly allocated one (req_from).\n+         * Then, we can safely release allocated memory for it.\n+         */\n+\n+        /* make a copy of cq->req_list (QTAILQ head) and clean cq->req_list */\n+        QTAILQ_INIT(&req_list);\n+        QTAILQ_FOREACH_SAFE(req_from, &cq->req_list, entry, next) {\n+            QTAILQ_REMOVE(&cq->req_list, req_from, entry);\n+            QTAILQ_INSERT_TAIL(&req_list, req_from, entry);\n+        }\n+        QTAILQ_INIT(&cq->req_list);\n+\n+        QTAILQ_FOREACH_SAFE(req_from, &req_list, entry, next) {\n+            uint16_t sqid = le16_to_cpu(req_from->cqe.sq_id);\n+            NvmeRequest *req;\n+            NvmeSQueue *sq;\n+\n+            assert(!nvme_check_sqid(n, sqid));\n+            sq = n->sq[sqid];\n+\n+            req = QTAILQ_FIRST(&sq->req_list);\n+            QTAILQ_REMOVE(&sq->req_list, req, entry);\n+            QTAILQ_INSERT_TAIL(&cq->req_list, req, entry);\n+            nvme_req_clear(req);\n+\n+            /* copy data from the source NvmeRequest */\n+            req->status = req_from->status;\n+            memcpy(&req->cqe, &req_from->cqe, sizeof(NvmeCqe));\n+            memcpy(&req->cmd, &req_from->cmd, sizeof(NvmeCmd));\n+\n+            QTAILQ_REMOVE(&req_list, req_from, entry);\n+            g_free(req_from);\n+        }\n+\n+        qemu_bh_schedule(cq->bh);\n+    }\n+\n+    if (n->aer_queued) {\n+        NvmeAsyncEvent *event;\n+\n+        QTAILQ_FOREACH(event, &n->aer_queue, entry) {\n+            trace_pci_nvme_post_load_aer(event->result.event_type, event->result.event_info,\n+                                         event->result.log_page);\n+        }\n+    }\n+\n+    for (i = 0; i < n->outstanding_aers; i++) {\n+        NvmeSQueue *sq = &n->admin_sq;\n+        NvmeRequest *req_from = n->aer_reqs[i];\n+        NvmeRequest *req;\n+\n+        /* Idea here is the same as for 
\"restore cq->req_list-s\" step */\n+\n+        /* take an NvmeRequest struct from SQ */\n+        req = QTAILQ_FIRST(&sq->req_list);\n+        QTAILQ_REMOVE(&sq->req_list, req, entry);\n+        QTAILQ_INSERT_TAIL(&sq->out_req_list, req, entry);\n+        nvme_req_clear(req);\n+\n+        /* copy data from the source NvmeRequest */\n+        req->status = req_from->status;\n+        memcpy(&req->cqe, &req_from->cqe, sizeof(NvmeCqe));\n+        memcpy(&req->cmd, &req_from->cmd, sizeof(NvmeCmd));\n+\n+        n->aer_reqs[i] = req;\n+        g_free(req_from);\n+    }\n+\n+    /*\n+     * We need to attach namespaces (currently, only one namespace is\n+     * supported for migration).\n+     * This logic comes from nvme_start_ctrl().\n+     */\n+    for (i = 1; i <= NVME_MAX_NAMESPACES; i++) {\n+        NvmeNamespace *ns = nvme_subsys_ns(n->subsys, i);\n+\n+        if (!ns || (!ns->params.shared && ns->ctrl != n)) {\n+            continue;\n+        }\n+\n+        if (nvme_csi_supported(n, ns->csi) && !ns->params.detached) {\n+            if (!ns->attached || ns->params.shared) {\n+                nvme_attach_ns(n, ns);\n+            }\n+        }\n+    }\n+\n+    /* schedule SQ processing */\n+    for (i = 0; i < n->num_queues; i++) {\n+        NvmeSQueue *sq = n->sq[i];\n+\n+        if (!sq)\n+            continue;\n+\n+        qemu_bh_schedule(sq->bh);\n+    }\n+\n+    return true;\n+}\n+\n static const VMStateDescription nvme_vmstate = {\n     .name = \"nvme\",\n-    .unmigratable = 1,\n+    .minimum_version_id = 1,\n+    .version_id = 1,\n+    .pre_save_errp = nvme_ctrl_pre_save,\n+    .post_load_errp = nvme_ctrl_post_load,\n+    .fields = (const VMStateField[]) {\n+        VMSTATE_PCI_DEVICE(parent_obj, NvmeCtrl),\n+        VMSTATE_MSIX(parent_obj, NvmeCtrl),\n+        VMSTATE_STRUCT(bar, NvmeCtrl, 0, nvme_vmstate_bar, NvmeBar),\n+\n+        VMSTATE_BOOL(qs_created, NvmeCtrl),\n+        VMSTATE_UINT32(page_size, NvmeCtrl),\n+        VMSTATE_UINT16(page_bits, 
NvmeCtrl),\n+        VMSTATE_UINT16(max_prp_ents, NvmeCtrl),\n+        VMSTATE_UINT32(max_q_ents, NvmeCtrl),\n+        VMSTATE_UINT8(outstanding_aers, NvmeCtrl),\n+        VMSTATE_UINT32(irq_status, NvmeCtrl),\n+        VMSTATE_INT32(cq_pending, NvmeCtrl),\n+\n+        VMSTATE_UINT64(host_timestamp, NvmeCtrl),\n+        VMSTATE_UINT64(timestamp_set_qemu_clock_ms, NvmeCtrl),\n+        VMSTATE_UINT64(starttime_ms, NvmeCtrl),\n+        VMSTATE_UINT16(temperature, NvmeCtrl),\n+        VMSTATE_UINT8(smart_critical_warning, NvmeCtrl),\n+\n+        VMSTATE_UINT32(conf_msix_qsize, NvmeCtrl),\n+        VMSTATE_UINT32(conf_ioqpairs, NvmeCtrl),\n+        VMSTATE_UINT64(dbbuf_dbs, NvmeCtrl),\n+        VMSTATE_UINT64(dbbuf_eis, NvmeCtrl),\n+        VMSTATE_BOOL(dbbuf_enabled, NvmeCtrl),\n+\n+        VMSTATE_UINT8(aer_mask, NvmeCtrl),\n+        VMSTATE_VARRAY_OF_POINTER_TO_STRUCT_UINT8_ALLOC(\n+            aer_reqs, NvmeCtrl, outstanding_aers, 0, nvme_vmstate_request, NvmeRequest),\n+        VMSTATE_QTAILQ_V(aer_queue, NvmeCtrl, 1, nvme_vmstate_async_event,\n+                         NvmeAsyncEvent, entry),\n+        VMSTATE_INT32(aer_queued, NvmeCtrl),\n+\n+        VMSTATE_STRUCT(namespace, NvmeCtrl, 0, nvme_vmstate_ns, NvmeNamespace),\n+\n+        VMSTATE_VARRAY_OF_POINTER_TO_STRUCT_UINT32_ALLOC(\n+            sq, NvmeCtrl, num_queues, 0, nvme_vmstate_squeue, NvmeSQueue),\n+        VMSTATE_VARRAY_OF_POINTER_TO_STRUCT_UINT32_ALLOC(\n+            cq, NvmeCtrl, num_queues, 0, nvme_vmstate_cqueue, NvmeCQueue),\n+\n+        VMSTATE_UINT16(features.temp_thresh_hi, NvmeCtrl),\n+        VMSTATE_UINT16(features.temp_thresh_low, NvmeCtrl),\n+        VMSTATE_UINT32(features.async_config, NvmeCtrl),\n+        VMSTATE_STRUCT(features.hbs, NvmeCtrl, 0, nvme_vmstate_hbs, NvmeHostBehaviorSupport),\n+\n+        VMSTATE_UINT32(dn, NvmeCtrl),\n+        VMSTATE_STRUCT(atomic, NvmeCtrl, 0, nvme_vmstate_atomic, NvmeAtomic),\n+\n+        VMSTATE_END_OF_LIST()\n+    },\n };\n \n static void 
nvme_class_init(ObjectClass *oc, const void *data)\ndiff --git a/hw/nvme/ns.c b/hw/nvme/ns.c\nindex 38f86a17268..dd374677078 100644\n--- a/hw/nvme/ns.c\n+++ b/hw/nvme/ns.c\n@@ -20,6 +20,7 @@\n #include \"qemu/bitops.h\"\n #include \"system/system.h\"\n #include \"system/block-backend.h\"\n+#include \"migration/vmstate.h\"\n \n #include \"nvme.h\"\n #include \"trace.h\"\n@@ -886,6 +887,164 @@ static void nvme_ns_realize(DeviceState *dev, Error **errp)\n     }\n }\n \n+static const VMStateDescription nvme_vmstate_lbaf = {\n+    .name = \"nvme_lbaf\",\n+    .version_id = 1,\n+    .minimum_version_id = 1,\n+    .fields = (const VMStateField[]) {\n+        VMSTATE_UINT16(ms, NvmeLBAF),\n+        VMSTATE_UINT8(ds, NvmeLBAF),\n+        VMSTATE_UINT8(rp, NvmeLBAF),\n+        VMSTATE_END_OF_LIST()\n+    }\n+};\n+\n+static const VMStateDescription nvme_vmstate_id_ns = {\n+    .name = \"nvme_id_ns\",\n+    .version_id = 1,\n+    .minimum_version_id = 1,\n+    .fields = (const VMStateField[]) {\n+        VMSTATE_UINT64(nsze, NvmeIdNs),\n+        VMSTATE_UINT64(ncap, NvmeIdNs),\n+        VMSTATE_UINT64(nuse, NvmeIdNs),\n+        VMSTATE_UINT8(nsfeat, NvmeIdNs),\n+        VMSTATE_UINT8(nlbaf, NvmeIdNs),\n+        VMSTATE_UINT8(flbas, NvmeIdNs),\n+        VMSTATE_UINT8(mc, NvmeIdNs),\n+        VMSTATE_UINT8(dpc, NvmeIdNs),\n+        VMSTATE_UINT8(dps, NvmeIdNs),\n+        VMSTATE_UINT8(nmic, NvmeIdNs),\n+        VMSTATE_UINT8(rescap, NvmeIdNs),\n+        VMSTATE_UINT8(fpi, NvmeIdNs),\n+        VMSTATE_UINT8(dlfeat, NvmeIdNs),\n+        VMSTATE_UINT16(nawun, NvmeIdNs),\n+        VMSTATE_UINT16(nawupf, NvmeIdNs),\n+        VMSTATE_UINT16(nacwu, NvmeIdNs),\n+        VMSTATE_UINT16(nabsn, NvmeIdNs),\n+        VMSTATE_UINT16(nabo, NvmeIdNs),\n+        VMSTATE_UINT16(nabspf, NvmeIdNs),\n+        VMSTATE_UINT16(noiob, NvmeIdNs),\n+        VMSTATE_UINT8_ARRAY(nvmcap, NvmeIdNs, 16),\n+        VMSTATE_UINT16(npwg, NvmeIdNs),\n+        VMSTATE_UINT16(npwa, NvmeIdNs),\n+        
VMSTATE_UINT16(npdg, NvmeIdNs),\n+        VMSTATE_UINT16(npda, NvmeIdNs),\n+        VMSTATE_UINT16(nows, NvmeIdNs),\n+        VMSTATE_UINT16(mssrl, NvmeIdNs),\n+        VMSTATE_UINT32(mcl, NvmeIdNs),\n+        VMSTATE_UINT8(msrc, NvmeIdNs),\n+        VMSTATE_UINT8_ARRAY(rsvd81, NvmeIdNs, 18),\n+        VMSTATE_UINT8(nsattr, NvmeIdNs),\n+        VMSTATE_UINT16(nvmsetid, NvmeIdNs),\n+        VMSTATE_UINT16(endgid, NvmeIdNs),\n+        VMSTATE_UINT8_ARRAY(nguid, NvmeIdNs, 16),\n+        VMSTATE_UINT64(eui64, NvmeIdNs),\n+        VMSTATE_STRUCT_ARRAY(lbaf, NvmeIdNs, NVME_MAX_NLBAF, 1,\n+                             nvme_vmstate_lbaf, NvmeLBAF),\n+        VMSTATE_UINT8_ARRAY(vs, NvmeIdNs, 3712),\n+\n+        VMSTATE_END_OF_LIST()\n+    }\n+};\n+\n+static const VMStateDescription nvme_vmstate_id_ns_nvm = {\n+    .name = \"nvme_id_ns_nvm\",\n+    .version_id = 1,\n+    .minimum_version_id = 1,\n+    .fields = (const VMStateField[]) {\n+        VMSTATE_UINT64(lbstm, NvmeIdNsNvm),\n+        VMSTATE_UINT8(pic, NvmeIdNsNvm),\n+        VMSTATE_UINT8_ARRAY(rsvd9, NvmeIdNsNvm, 3),\n+        VMSTATE_UINT32_ARRAY(elbaf, NvmeIdNsNvm, NVME_MAX_NLBAF),\n+        VMSTATE_UINT32(npdgl, NvmeIdNsNvm),\n+        VMSTATE_UINT32(nprg, NvmeIdNsNvm),\n+        VMSTATE_UINT32(npra, NvmeIdNsNvm),\n+        VMSTATE_UINT32(nors, NvmeIdNsNvm),\n+        VMSTATE_UINT32(npdal, NvmeIdNsNvm),\n+        VMSTATE_UINT8_ARRAY(rsvd288, NvmeIdNsNvm, 3808),\n+        VMSTATE_END_OF_LIST()\n+    }\n+};\n+\n+static const VMStateDescription nvme_vmstate_id_ns_ind = {\n+    .name = \"nvme_id_ns_ind\",\n+    .version_id = 1,\n+    .minimum_version_id = 1,\n+    .fields = (const VMStateField[]) {\n+        VMSTATE_UINT8(nsfeat, NvmeIdNsInd),\n+        VMSTATE_UINT8(nmic, NvmeIdNsInd),\n+        VMSTATE_UINT8(rescap, NvmeIdNsInd),\n+        VMSTATE_UINT8(fpi, NvmeIdNsInd),\n+        VMSTATE_UINT32(anagrpid, NvmeIdNsInd),\n+        VMSTATE_UINT8(nsattr, NvmeIdNsInd),\n+        VMSTATE_UINT8(rsvd9, NvmeIdNsInd),\n+   
     VMSTATE_UINT16(nvmsetid, NvmeIdNsInd),\n+        VMSTATE_UINT16(endgrpid, NvmeIdNsInd),\n+        VMSTATE_UINT8(nstat, NvmeIdNsInd),\n+        VMSTATE_UINT8_ARRAY(rsvd15, NvmeIdNsInd, 4081),\n+        VMSTATE_END_OF_LIST()\n+    }\n+};\n+\n+typedef struct TmpNvmeNamespace {\n+    NvmeNamespace *parent;\n+    bool enable_write_cache;\n+} TmpNvmeNamespace;\n+\n+static bool nvme_ns_tmp_pre_save(void *opaque, Error **errp)\n+{\n+    struct TmpNvmeNamespace *tns = opaque;\n+\n+    tns->enable_write_cache = blk_enable_write_cache(tns->parent->blkconf.blk);\n+\n+    return true;\n+}\n+\n+static bool nvme_ns_tmp_post_load(void *opaque, int version_id, Error **errp)\n+{\n+    struct TmpNvmeNamespace *tns = opaque;\n+\n+    blk_set_enable_write_cache(tns->parent->blkconf.blk, tns->enable_write_cache);\n+\n+    return true;\n+}\n+\n+static const VMStateDescription nvme_vmstate_ns_tmp = {\n+    .name = \"nvme_ns_tmp\",\n+    .pre_save_errp = nvme_ns_tmp_pre_save,\n+    .post_load_errp = nvme_ns_tmp_post_load,\n+    .fields = (const VMStateField[]) {\n+        VMSTATE_BOOL(enable_write_cache, TmpNvmeNamespace),\n+        VMSTATE_END_OF_LIST()\n+    }\n+};\n+\n+const VMStateDescription nvme_vmstate_ns = {\n+    .name = \"nvme_ns\",\n+    .version_id = 1,\n+    .minimum_version_id = 1,\n+    .fields = (const VMStateField[]) {\n+        VMSTATE_WITH_TMP(NvmeNamespace, TmpNvmeNamespace, nvme_vmstate_ns_tmp),\n+\n+        VMSTATE_STRUCT(id_ns, NvmeNamespace, 0, nvme_vmstate_id_ns, NvmeIdNs),\n+        VMSTATE_STRUCT(id_ns_nvm, NvmeNamespace, 0, nvme_vmstate_id_ns_nvm, NvmeIdNsNvm),\n+        VMSTATE_STRUCT(id_ns_ind, NvmeNamespace, 0, nvme_vmstate_id_ns_ind, NvmeIdNsInd),\n+        VMSTATE_STRUCT(lbaf, NvmeNamespace, 0, nvme_vmstate_lbaf, NvmeLBAF),\n+        VMSTATE_UINT32(nlbaf, NvmeNamespace),\n+        VMSTATE_UINT8(csi, NvmeNamespace),\n+        VMSTATE_UINT16(status, NvmeNamespace),\n+        VMSTATE_UINT8(pif, NvmeNamespace),\n+\n+        VMSTATE_UINT16(zns.zrwas, 
NvmeNamespace),\n+        VMSTATE_UINT16(zns.zrwafg, NvmeNamespace),\n+        VMSTATE_UINT32(zns.numzrwa, NvmeNamespace),\n+\n+        VMSTATE_UINT32(features.err_rec, NvmeNamespace),\n+        VMSTATE_STRUCT(atomic, NvmeNamespace, 0, nvme_vmstate_atomic, NvmeAtomic),\n+        VMSTATE_END_OF_LIST()\n+    }\n+};\n+\n static const Property nvme_ns_props[] = {\n     DEFINE_BLOCK_PROPERTIES(NvmeNamespace, blkconf),\n     DEFINE_PROP_BOOL(\"detached\", NvmeNamespace, params.detached, false),\n@@ -937,6 +1096,7 @@ static void nvme_ns_class_init(ObjectClass *oc, const void *data)\n     dc->bus_type = TYPE_NVME_BUS;\n     dc->realize = nvme_ns_realize;\n     dc->unrealize = nvme_ns_unrealize;\n+    dc->vmsd = &nvme_vmstate_ns;\n     device_class_set_props(dc, nvme_ns_props);\n     dc->desc = \"Virtual NVMe namespace\";\n }\ndiff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h\nindex 457b6637249..2e7597cded3 100644\n--- a/hw/nvme/nvme.h\n+++ b/hw/nvme/nvme.h\n@@ -444,6 +444,11 @@ typedef struct NvmeRequest {\n     NvmeSg                  sg;\n     bool                    atomic_write;\n     QTAILQ_ENTRY(NvmeRequest)entry;\n+    /*\n+     * If you add a new field here, please make sure to update\n+     * nvme_vmstate_request, pre_save_validate_aer_req() and\n+     * pre_save_validate_cq_req().\n+     */\n } NvmeRequest;\n \n typedef struct NvmeBounceContext {\n@@ -638,6 +643,7 @@ typedef struct NvmeCtrl {\n \n     NvmeNamespace   namespace;\n     NvmeNamespace   *namespaces[NVME_MAX_NAMESPACES + 1];\n+    uint32_t        num_queues;\n     NvmeSQueue      **sq;\n     NvmeCQueue      **cq;\n     NvmeSQueue      admin_sq;\n@@ -749,4 +755,7 @@ void nvme_atomic_configure_max_write_size(bool dn, uint16_t awun,\n void nvme_ns_atomic_configure_boundary(bool dn, uint16_t nabsn,\n                                        uint16_t nabspf, NvmeAtomic *atomic);\n \n+extern const VMStateDescription nvme_vmstate_atomic;\n+extern const VMStateDescription nvme_vmstate_ns;\n+\n #endif /* 
HW_NVME_NVME_H */\ndiff --git a/hw/nvme/trace-events b/hw/nvme/trace-events\nindex 6be0bfa1c1f..f97a6a11f36 100644\n--- a/hw/nvme/trace-events\n+++ b/hw/nvme/trace-events\n@@ -7,6 +7,16 @@ pci_nvme_dbbuf_config(uint64_t dbs_addr, uint64_t eis_addr) \"dbs_addr=0x%\"PRIx64\n pci_nvme_map_addr(uint64_t addr, uint64_t len) \"addr 0x%\"PRIx64\" len %\"PRIu64\"\"\n pci_nvme_map_addr_cmb(uint64_t addr, uint64_t len) \"addr 0x%\"PRIx64\" len %\"PRIu64\"\"\n pci_nvme_map_prp(uint64_t trans_len, uint32_t len, uint64_t prp1, uint64_t prp2, int num_prps) \"trans_len %\"PRIu64\" len %\"PRIu32\" prp1 0x%\"PRIx64\" prp2 0x%\"PRIx64\" num_prps %d\"\n+pci_nvme_pre_save_enter(void *n) \"n=%p\"\n+pci_nvme_pre_save_ns_drain(void *n, int i) \"n=%p i=%d\"\n+pci_nvme_pre_save_sq_out_req_check(void *n, int i, uint32_t head, uint32_t tail, uint32_t size) \"n=%p i=%d head=0x%\"PRIx32\" tail=0x%\"PRIx32\" size=0x%\"PRIx32\"\"\n+pci_nvme_pre_save_cq_req_check(void *n, int i, uint32_t head, uint32_t tail, uint32_t size) \"n=%p i=%d head=0x%\"PRIx32\" tail=0x%\"PRIx32\" size=0x%\"PRIx32\"\"\n+pci_nvme_pre_save_cq_unposted_cqe(void *n, int i, uint16_t cid, uint32_t nsid, uint32_t dw0, uint32_t dw1, uint16_t status, uint8_t opc) \"n=%p i=%d cid %\"PRIu16\" nsid %\"PRIu32\" dw0 0x%\"PRIx32\" dw1 0x%\"PRIx32\" status 0x%\"PRIx16\" opc 0x%\"PRIx8\"\"\n+pci_nvme_pre_save_aer(uint8_t typ, uint8_t info, uint8_t log_page) \"type 0x%\"PRIx8\" info 0x%\"PRIx8\" lid 0x%\"PRIx8\"\"\n+pci_nvme_post_load_enter(void *n) \"n=%p\"\n+pci_nvme_post_load_restore_cq(void *n, int i, uint32_t head, uint32_t tail, uint32_t size) \"n=%p i=%d head=0x%\"PRIx32\" tail=0x%\"PRIx32\" size=0x%\"PRIx32\"\"\n+pci_nvme_post_load_restore_sq(void *n, int i, uint32_t head, uint32_t tail, uint32_t size) \"n=%p i=%d head=0x%\"PRIx32\" tail=0x%\"PRIx32\" size=0x%\"PRIx32\"\"\n+pci_nvme_post_load_aer(uint8_t typ, uint8_t info, uint8_t log_page) \"type 0x%\"PRIx8\" info 0x%\"PRIx8\" lid 0x%\"PRIx8\"\"\n pci_nvme_map_sgl(uint8_t typ, 
uint64_t len) \"type 0x%\"PRIx8\" len %\"PRIu64\"\"\n pci_nvme_io_cmd(uint16_t cid, uint32_t nsid, uint16_t sqid, uint8_t opcode, const char *opname) \"cid %\"PRIu16\" nsid 0x%\"PRIx32\" sqid %\"PRIu16\" opc 0x%\"PRIx8\" opname '%s'\"\n pci_nvme_admin_cmd(uint16_t cid, uint16_t sqid, uint8_t opcode, const char *opname) \"cid %\"PRIu16\" sqid %\"PRIu16\" opc 0x%\"PRIx8\" opname '%s'\"\n","prefixes":["v6","6/8"]}