From patchwork Thu Mar 28 10:49:32 2019
X-Patchwork-Submitter: Peter Xu <peterx@redhat.com>
X-Patchwork-Id: 1068002
From: Peter Xu <peterx@redhat.com>
To: qemu-devel@nongnu.org
Date: Thu, 28 Mar 2019 18:49:32 +0800
Message-Id: <20190328104933.13465-2-peterx@redhat.com>
In-Reply-To: <20190328104933.13465-1-peterx@redhat.com>
References: <20190328104933.13465-1-peterx@redhat.com>
Subject: [Qemu-devel] [PATCH 1/2] intel_iommu: Fix root_scalable migration breakage
Cc: Yi Liu, Yi Sun, "Michael S. Tsirkin", Jason Wang,
 "Dr. David Alan Gilbert", peterx@redhat.com

When introducing the initial support for scalable mode, we added a new
field to the vmstate but migrated it unconditionally. That breaks
migration in both directions, forward and backward.

The usual way to handle a new vmstate field is something like
VMSTATE_UINT32_TEST() or a subsection. For VT-d we can do something
even simpler: all the registers are migrated already, so we can just
regenerate the root_scalable field from the register values in the
device's post_load hook.
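[Editor's illustration, not part of the patch: the subsection
alternative mentioned above would look roughly like the sketch below.
The names vtd_scalable_needed and vtd_vmstate_scalable are
hypothetical; the .needed/.subsections machinery is the standard QEMU
vmstate API.]

    /*
     * Hypothetical sketch: migrate root_scalable only when scalable
     * mode is enabled, by putting it in a subsection.  When .needed
     * returns false the subsection is not put on the wire at all, so
     * streams to/from binaries without the field stay compatible.
     */
    static bool vtd_scalable_needed(void *opaque)
    {
        IntelIOMMUState *s = opaque;

        return s->scalable_mode;
    }

    static const VMStateDescription vtd_vmstate_scalable = {
        .name = "iommu-intel/scalable",
        .version_id = 1,
        .minimum_version_id = 1,
        .needed = vtd_scalable_needed,
        .fields = (VMStateField[]) {
            VMSTATE_BOOL(root_scalable, IntelIOMMUState),
            VMSTATE_END_OF_LIST()
        }
    };

    /* ...hooked into the parent vmstate via:
     *     .subsections = (const VMStateDescription *[]) {
     *         &vtd_vmstate_scalable,
     *         NULL
     *     },
     */

[The post_load recomputation chosen by the patch is simpler still:
since root_scalable is fully derivable from the already-migrated
DMAR_RTADDR_REG, nothing new needs to go on the wire at all.]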
Fixes: fb43cf739e ("intel_iommu: scalable mode emulation")
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Yi Sun
---
 hw/i386/intel_iommu.c | 24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index b90de6c664..11ece40ed0 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -162,6 +162,15 @@ static inline void vtd_iommu_unlock(IntelIOMMUState *s)
     qemu_mutex_unlock(&s->iommu_lock);
 }
 
+static void vtd_update_scalable_state(IntelIOMMUState *s)
+{
+    uint64_t val = vtd_get_quad_raw(s, DMAR_RTADDR_REG);
+
+    if (s->scalable_mode) {
+        s->root_scalable = val & VTD_RTADDR_SMT;
+    }
+}
+
 /* Whether the address space needs to notify new mappings */
 static inline gboolean vtd_as_has_map_notifier(VTDAddressSpace *as)
 {
@@ -1710,11 +1719,10 @@ static void vtd_root_table_setup(IntelIOMMUState *s)
 {
     s->root = vtd_get_quad_raw(s, DMAR_RTADDR_REG);
     s->root_extended = s->root & VTD_RTADDR_RTT;
-    if (s->scalable_mode) {
-        s->root_scalable = s->root & VTD_RTADDR_SMT;
-    }
     s->root &= VTD_RTADDR_ADDR_MASK(s->aw_bits);
 
+    vtd_update_scalable_state(s);
+
     trace_vtd_reg_dmar_root(s->root, s->root_extended);
 }
 
@@ -2945,6 +2953,15 @@ static int vtd_post_load(void *opaque, int version_id)
      */
     vtd_switch_address_space_all(iommu);
 
+    /*
+     * We don't need to migrate the root_scalable because we can
+     * simply do the calculation after the loading is complete.  We
+     * could actually do similar things with root, dmar_enabled, ...
+     * but since those are already in the stream we need to keep
+     * them for migration compatibility.
+     */
+    vtd_update_scalable_state(iommu);
+
     return 0;
 }
 
@@ -2966,7 +2983,6 @@ static const VMStateDescription vtd_vmstate = {
         VMSTATE_UINT8_ARRAY(csr, IntelIOMMUState, DMAR_REG_SIZE),
         VMSTATE_UINT8(iq_last_desc_type, IntelIOMMUState),
         VMSTATE_BOOL(root_extended, IntelIOMMUState),
-        VMSTATE_BOOL(root_scalable, IntelIOMMUState),
         VMSTATE_BOOL(dmar_enabled, IntelIOMMUState),
         VMSTATE_BOOL(qi_enabled, IntelIOMMUState),
         VMSTATE_BOOL(intr_enabled, IntelIOMMUState),
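[Editor's context note: vtd_post_load() runs after all vmstate fields
have been loaded because the device's VMStateDescription registers it
as the post_load hook. Abridged sketch of the surrounding declaration
in hw/i386/intel_iommu.c, with the field list elided; see the last
hunk above for the actual fields:]

    static const VMStateDescription vtd_vmstate = {
        .name = "iommu-intel",
        .version_id = 1,
        .minimum_version_id = 1,
        .priority = MIG_PRI_IOMMU,   /* load IOMMU state before the devices behind it */
        .post_load = vtd_post_load,  /* recomputes root_scalable from DMAR_RTADDR_REG */
        .fields = (VMStateField[]) {
            /* register state as shown in the hunk above */
            VMSTATE_END_OF_LIST()
        }
    };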