{"id":2230491,"url":"http://patchwork.ozlabs.org/api/1.1/patches/2230491/?format=json","web_url":"http://patchwork.ozlabs.org/project/qemu-devel/patch/20260429190532.26538-2-mohamed@unpredictable.fr/","project":{"id":14,"url":"http://patchwork.ozlabs.org/api/1.1/projects/14/?format=json","name":"QEMU Development","link_name":"qemu-devel","list_id":"qemu-devel.nongnu.org","list_email":"qemu-devel@nongnu.org","web_url":"","scm_url":"","webscm_url":""},"msgid":"<20260429190532.26538-2-mohamed@unpredictable.fr>","date":"2026-04-29T19:05:18","name":"[v21,01/15] hw/intc: Add hvf vGIC interrupt controller support","commit_ref":null,"pull_url":null,"state":"new","archived":false,"hash":"0d7125cc0948bcd44dba16f26d6dd1d98b7d5487","submitter":{"id":91318,"url":"http://patchwork.ozlabs.org/api/1.1/people/91318/?format=json","name":"Mohamed Mediouni","email":"mohamed@unpredictable.fr"},"delegate":null,"mbox":"http://patchwork.ozlabs.org/project/qemu-devel/patch/20260429190532.26538-2-mohamed@unpredictable.fr/mbox/","series":[{"id":502138,"url":"http://patchwork.ozlabs.org/api/1.1/series/502138/?format=json","web_url":"http://patchwork.ozlabs.org/project/qemu-devel/list/?series=502138","date":"2026-04-29T19:05:29","name":"HVF: Add support for platform vGIC and nested virtualisation","version":21,"mbox":"http://patchwork.ozlabs.org/series/502138/mbox/"}],"comments":"http://patchwork.ozlabs.org/api/patches/2230491/comments/","check":"pending","checks":"http://patchwork.ozlabs.org/api/patches/2230491/checks/","tags":{},"headers":{"Return-Path":"<qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org>","X-Original-To":"incoming@patchwork.ozlabs.org","Delivered-To":"patchwork-incoming@legolas.ozlabs.org","Authentication-Results":["legolas.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=unpredictable.fr header.i=@unpredictable.fr\n header.a=rsa-sha256 header.s=sig1 header.b=F7TyyqaR;\n\tdkim-atps=neutral","legolas.ozlabs.org;\n spf=pass (sender SPF authorized) 
smtp.mailfrom=nongnu.org\n (client-ip=209.51.188.17; helo=lists1p.gnu.org;\n envelope-from=qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org;\n receiver=patchwork.ozlabs.org)"],"Received":["from lists1p.gnu.org (lists1p.gnu.org [209.51.188.17])\n\t(using TLSv1.2 with cipher ECDHE-ECDSA-AES256-GCM-SHA384 (256/256 bits))\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4g5RfC3RKSz1xqf\n\tfor <incoming@patchwork.ozlabs.org>; Thu, 30 Apr 2026 05:08:19 +1000 (AEST)","from localhost ([::1] helo=lists1p.gnu.org)\n\tby lists1p.gnu.org with esmtp (Exim 4.90_1)\n\t(envelope-from <qemu-devel-bounces@nongnu.org>)\n\tid 1wIAEK-0001da-PA; Wed, 29 Apr 2026 15:05:52 -0400","from eggs.gnu.org ([2001:470:142:3::10])\n by lists1p.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256)\n (Exim 4.90_1) (envelope-from <mohamed@unpredictable.fr>)\n id 1wIAEJ-0001Zh-3O\n for qemu-devel@nongnu.org; Wed, 29 Apr 2026 15:05:51 -0400","from ms-2001e-snip4-11.eps.apple.com ([57.103.73.181]\n helo=outbound.ms.icloud.com)\n by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256)\n (Exim 4.90_1) (envelope-from <mohamed@unpredictable.fr>)\n id 1wIAEF-0000IC-QC\n for qemu-devel@nongnu.org; Wed, 29 Apr 2026 15:05:50 -0400","from outbound.ms.icloud.com (unknown [127.0.0.2])\n by p00-icloudmta-asmtp-us-west-3a-60-percent-6 (Postfix) with ESMTPS id\n 654BC18005F7; Wed, 29 Apr 2026 19:05:42 +0000 (UTC)","from localhost.localdomain (unknown [17.57.154.37])\n by p00-icloudmta-asmtp-us-west-3a-60-percent-6 (Postfix) with ESMTPSA id\n D31E718002D4; Wed, 29 Apr 2026 19:05:38 +0000 (UTC)"],"X-ICL-Out-Info":"\n 
HUtFAUMHWwJACUgBTUQeDx5WFlZNRAJCTQFIHV8DWRxBAUkdXw9LVxQEFVwFVgZXFHkNXR1FDlYZWgxSD1sOHBZLWFUJCgZdGFgVVgl3HlwASx1XBFQfUxJVHR0LRUtAEwRJB01fDl4fBBdGGVUERx5dVl4eGQJRHFYNV0NUBF9QSQxBUGxaAEcXSB1dGVlvUF0cDhhZG0AVXRFQGVYJXhUXHkFNWgJWTQVKA18BWwZCAEkKXQJYAF4LTgZeD0YAXVQXWwxaDlYwTBZDH1IPWxNNGVEBUkVUAgdYRxRHDg8TTAtHAlo0Vh9UGVoD","Dkim-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed; d=unpredictable.fr;\n s=sig1; t=1777489546; x=1780081546;\n bh=FN721zsWnolGXh8Og1f+Y9J4K+c6rr2aL1Bc9qBg3Mc=;\n h=From:To:Subject:Date:Message-ID:MIME-Version:Content-Type:x-icloud-hme;\n b=F7TyyqaROuXD3gdVFiygT8rVAkevZuqSYj6sXJmUp97GYEjiib+lps3rzYmhNMNZ2V3/x0/clPm6bvpNpPDgtUdd0vif0Zs5I8ftMPpK9fzn/TTPmRJn+T8oCzrYAoUuUR9WAtwnCS/4Eh3vCiV6oAqXlK8wKmjnmbh57cKklCdJOahpAcWhUlfHB/0f4s41d22/wFbUM0yWWp+M7HzCE8YhP+PW/mkn2mEuN6dgmeYc5PKtaztIw8m3ODQNtzKaZaQN5QTjIWl2dn/RY7DaKpZ1VgqpxtXfSIXwVbaMURjWuGY4Xeix7jjXoLL9ouGLBnBEX8fbQj/HTpFjFKYETg==","mail-alias-created-date":"1752046281608","From":"Mohamed Mediouni <mohamed@unpredictable.fr>","To":"qemu-devel@nongnu.org","Cc":"Phil Dennis-Jordan <phil@philjordan.eu>,\n Yanan Wang <wangyanan55@huawei.com>, Paolo Bonzini <pbonzini@redhat.com>,\n Roman Bolshakov <rbolshakov@ddn.com>,\n =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= <philmd@linaro.org>,\n qemu-arm@nongnu.org, Zhao Liu <zhao1.liu@intel.com>,\n Alexander Graf <agraf@csgraf.de>, Eduardo Habkost <eduardo@habkost.net>,\n Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,\n Peter Maydell <peter.maydell@linaro.org>,\n Mohamed Mediouni <mohamed@unpredictable.fr>,\n Manos Pitsidianakis <manos.pitsidianakis@linaro.org>","Subject":"[PATCH v21 01/15] hw/intc: Add hvf vGIC interrupt controller support","Date":"Wed, 29 Apr 2026 21:05:18 +0200","Message-ID":"<20260429190532.26538-2-mohamed@unpredictable.fr>","X-Mailer":"git-send-email 2.50.1","In-Reply-To":"<20260429190532.26538-1-mohamed@unpredictable.fr>","References":"<20260429190532.26538-1-mohamed@unpredictable.fr>","MIME-Version":"1.0","Content-Type":"text/plain; 
charset=UTF-8","Content-Transfer-Encoding":"8bit","X-Authority-Info-Out":"v=2.4 cv=UfxciaSN c=1 sm=1 tr=0 ts=69f25688\n cx=c_apl:c_pps:t_out a=qkKslKyYc0ctBTeLUVfTFg==:117 a=IkcTkHD0fZMA:10\n a=A5OVakUREuEA:10 a=VkNPw1HP01LnGYTKEx00:22 a=KKAkSRfTAAAA:8\n a=D2JAlcJVxmvnjXxMMLIA:9 a=3ZKOabzyN94A:10 a=QEXdDO2ut3YA:10\n a=cvBusfyB2V15izCimMoJ:22","X-Proofpoint-ORIG-GUID":"PLY49tVahmSa9aCFbDVl6tG7upTe3OYZ","X-Proofpoint-Spam-Details-Enc":"AW1haW4tMjYwNDI5MDE5MCBTYWx0ZWRfXzUotNuVhOLBF\n Lmsd18V3PGddVIJ4kmamaMgN/H6rK2lchBT8j3YSUvMp2Odth8XAdrH45t3xeottt7cRtBKkDsm\n 6YHCT8TJOZ6UQ0sv4fS5KataTVM6NEAKoegNXaUMwyisaBVWP052I0xospVkngR+5Q75ZsMtsJ6\n 6CWouD7+/Z2Zccz3cMxN0e73J/cFLwcpCX8iwCEozqTrq6ElJQ9IUW1kqKRwfCYIERiNrOj1XQs\n ifxhpqobEdwJzrzrdw7BP2zQkMYvLmPUmr6vlFfpjfkDHyz9HEOG7im/LVGJOxPC1l+qmiJA1gi\n UAuWBQwOXSxBf70UChkVSgSipGBSNnFnMSQUhjSTiswYwcUc9V+yzVC1xulG2w=","X-Proofpoint-GUID":"PLY49tVahmSa9aCFbDVl6tG7upTe3OYZ","Received-SPF":"pass client-ip=57.103.73.181;\n envelope-from=mohamed@unpredictable.fr; helo=outbound.ms.icloud.com","X-Spam_score_int":"-27","X-Spam_score":"-2.8","X-Spam_bar":"--","X-Spam_report":"(-2.8 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1,\n DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1,\n RCVD_IN_DNSWL_LOW=-0.7, SPF_HELO_PASS=-0.001,\n SPF_PASS=-0.001 autolearn=ham autolearn_force=no","X-Spam_action":"no action","X-BeenThere":"qemu-devel@nongnu.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"qemu development <qemu-devel.nongnu.org>","List-Unsubscribe":"<https://lists.nongnu.org/mailman/options/qemu-devel>,\n <mailto:qemu-devel-request@nongnu.org?subject=unsubscribe>","List-Archive":"<https://lists.nongnu.org/archive/html/qemu-devel>","List-Post":"<mailto:qemu-devel@nongnu.org>","List-Help":"<mailto:qemu-devel-request@nongnu.org?subject=help>","List-Subscribe":"<https://lists.nongnu.org/mailman/listinfo/qemu-devel>,\n 
<mailto:qemu-devel-request@nongnu.org?subject=subscribe>","Errors-To":"qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org","Sender":"qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org"},"content":"This opens up the door to nested virtualisation support.\n\nSigned-off-by: Mohamed Mediouni <mohamed@unpredictable.fr>\nReviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>\nReviewed-by: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>\n---\n hw/intc/arm_gicv3_hvf.c            | 740 +++++++++++++++++++++++++++++\n hw/intc/meson.build                |   1 +\n include/hw/intc/arm_gicv3_common.h |   1 +\n 3 files changed, 742 insertions(+)\n create mode 100644 hw/intc/arm_gicv3_hvf.c","diff":"diff --git a/hw/intc/arm_gicv3_hvf.c b/hw/intc/arm_gicv3_hvf.c\nnew file mode 100644\nindex 0000000000..7935846bc7\n--- /dev/null\n+++ b/hw/intc/arm_gicv3_hvf.c\n@@ -0,0 +1,740 @@\n+/* SPDX-License-Identifier: GPL-2.0-or-later */\n+/*\n+ * ARM Generic Interrupt Controller using HVF platform support\n+ *\n+ * Copyright (c) 2025 Mohamed Mediouni\n+ * Based on vGICv3 KVM code by Pavel Fedin\n+ *\n+ */\n+\n+#include \"qemu/osdep.h\"\n+#include \"qapi/error.h\"\n+#include \"hw/intc/arm_gicv3_common.h\"\n+#include \"qemu/error-report.h\"\n+#include \"qemu/module.h\"\n+#include \"system/runstate.h\"\n+#include \"system/hvf.h\"\n+#include \"system/hvf_int.h\"\n+#include \"hvf_arm.h\"\n+#include \"gicv3_internal.h\"\n+#include \"vgic_common.h\"\n+#include \"qom/object.h\"\n+#include \"target/arm/cpregs.h\"\n+#include <Hypervisor/Hypervisor.h>\n+\n+/* For the GIC, override the check outright, as availability is checked elsewhere. 
*/\n+#pragma clang diagnostic push\n+#pragma clang diagnostic ignored \"-Wunguarded-availability\"\n+\n+struct HVFARMGICv3Class {\n+    ARMGICv3CommonClass parent_class;\n+    DeviceRealize parent_realize;\n+    ResettablePhases parent_phases;\n+};\n+\n+typedef struct HVFARMGICv3Class HVFARMGICv3Class;\n+\n+/* This is reusing the GICv3State typedef from ARM_GICV3_ITS_COMMON */\n+DECLARE_OBJ_CHECKERS(GICv3State, HVFARMGICv3Class,\n+                     HVF_GICV3, TYPE_HVF_GICV3);\n+\n+/*\n+ * Loop through each distributor IRQ related register; since bits\n+ * corresponding to SPIs and PPIs are RAZ/WI when affinity routing\n+ * is enabled, we skip those.\n+ */\n+#define for_each_dist_irq_reg(_irq, _max, _field_width) \\\n+    for (_irq = GIC_INTERNAL; _irq < _max; _irq += (32 / _field_width))\n+\n+/*\n+ * Wrap calls to the vGIC APIs to assert_hvf_ok()\n+ * as a macro to keep the code clean.\n+ */\n+#define hv_gic_get_distributor_reg(offset, reg) \\\n+    assert_hvf_ok(hv_gic_get_distributor_reg(offset, reg))\n+\n+#define hv_gic_set_distributor_reg(offset, reg) \\\n+    assert_hvf_ok(hv_gic_set_distributor_reg(offset, reg))\n+\n+#define hv_gic_get_redistributor_reg(vcpu, reg, value) \\\n+    assert_hvf_ok(hv_gic_get_redistributor_reg(vcpu, reg, value))\n+\n+#define hv_gic_set_redistributor_reg(vcpu, reg, value) \\\n+    assert_hvf_ok(hv_gic_set_redistributor_reg(vcpu, reg, value))\n+\n+#define hv_gic_get_icc_reg(vcpu, reg, value) \\\n+    assert_hvf_ok(hv_gic_get_icc_reg(vcpu, reg, value))\n+\n+#define hv_gic_set_icc_reg(vcpu, reg, value) \\\n+    assert_hvf_ok(hv_gic_set_icc_reg(vcpu, reg, value))\n+\n+#define hv_gic_get_ich_reg(vcpu, reg, value) \\\n+    assert_hvf_ok(hv_gic_get_ich_reg(vcpu, reg, value))\n+\n+#define hv_gic_set_ich_reg(vcpu, reg, value) \\\n+    assert_hvf_ok(hv_gic_set_ich_reg(vcpu, reg, value))\n+\n+static void hvf_dist_get_priority(GICv3State *s, hv_gic_distributor_reg_t offset,\n+    uint8_t *bmp)\n+{\n+    uint64_t reg;\n+    uint32_t 
*field;\n+    int irq;\n+    field = (uint32_t *)(bmp);\n+\n+    for_each_dist_irq_reg(irq, s->num_irq, 8) {\n+        hv_gic_get_distributor_reg(offset, &reg);\n+        *field = reg;\n+        offset += 4;\n+        field++;\n+    }\n+}\n+\n+static void hvf_dist_put_priority(GICv3State *s, hv_gic_distributor_reg_t offset,\n+    uint8_t *bmp)\n+{\n+    uint32_t reg, *field;\n+    int irq;\n+    field = (uint32_t *)(bmp);\n+\n+    for_each_dist_irq_reg(irq, s->num_irq, 8) {\n+        reg = *field;\n+        hv_gic_set_distributor_reg(offset, reg);\n+        offset += 4;\n+        field++;\n+    }\n+}\n+\n+static void hvf_dist_get_edge_trigger(GICv3State *s, hv_gic_distributor_reg_t offset,\n+                                      uint32_t *bmp)\n+{\n+    uint64_t reg;\n+    int irq;\n+\n+    for_each_dist_irq_reg(irq, s->num_irq, 2) {\n+        hv_gic_get_distributor_reg(offset, &reg);\n+        reg = half_unshuffle32(reg >> 1);\n+        if (irq % 32 != 0) {\n+            reg = (reg << 16);\n+        }\n+        *gic_bmp_ptr32(bmp, irq) |= reg;\n+        offset += 4;\n+    }\n+}\n+\n+static void hvf_dist_put_edge_trigger(GICv3State *s, hv_gic_distributor_reg_t offset,\n+                                      uint32_t *bmp)\n+{\n+    uint32_t reg;\n+    int irq;\n+\n+    for_each_dist_irq_reg(irq, s->num_irq, 2) {\n+        reg = *gic_bmp_ptr32(bmp, irq);\n+        if (irq % 32 != 0) {\n+            reg = (reg & 0xffff0000) >> 16;\n+        } else {\n+            reg = reg & 0xffff;\n+        }\n+        reg = half_shuffle32(reg) << 1;\n+        hv_gic_set_distributor_reg(offset, reg);\n+        offset += 4;\n+    }\n+}\n+\n+/* Read a bitmap register group from the kernel VGIC. 
*/\n+static void hvf_dist_getbmp(GICv3State *s, hv_gic_distributor_reg_t offset, uint32_t *bmp)\n+{\n+    uint64_t reg;\n+    int irq;\n+\n+    for_each_dist_irq_reg(irq, s->num_irq, 1) {\n+        hv_gic_get_distributor_reg(offset, &reg);\n+        *gic_bmp_ptr32(bmp, irq) = reg;\n+        offset += 4;\n+    }\n+}\n+\n+static void hvf_dist_putbmp(GICv3State *s, hv_gic_distributor_reg_t offset,\n+                            hv_gic_distributor_reg_t clroffset, uint32_t *bmp)\n+{\n+    uint32_t reg;\n+    int irq;\n+\n+    for_each_dist_irq_reg(irq, s->num_irq, 1) {\n+        /*\n+         * If this bitmap is a set/clear register pair, first write to the\n+         * clear-reg to clear all bits before using the set-reg to write\n+         * the 1 bits.\n+         */\n+        if (clroffset != 0) {\n+            reg = 0;\n+            hv_gic_set_distributor_reg(clroffset, reg);\n+            clroffset += 4;\n+        }\n+        reg = *gic_bmp_ptr32(bmp, irq);\n+        hv_gic_set_distributor_reg(offset, reg);\n+        offset += 4;\n+    }\n+}\n+\n+static void hvf_gicv3_check(GICv3State *s)\n+{\n+    uint64_t reg;\n+    uint32_t num_irq;\n+\n+    /* Sanity checking s->num_irq */\n+    hv_gic_get_distributor_reg(HV_GIC_DISTRIBUTOR_REG_GICD_TYPER, &reg);\n+    num_irq = ((reg & 0x1f) + 1) * 32;\n+\n+    if (num_irq < s->num_irq) {\n+        error_report(\"Model requests %u IRQs, but HVF supports max %u\",\n+                     s->num_irq, num_irq);\n+        abort();\n+    }\n+}\n+\n+static void hvf_gicv3_put_cpu_el2(CPUState *cpu_state, run_on_cpu_data arg)\n+{\n+    int num_pri_bits;\n+\n+    /* EL2 CPU interface state */\n+    GICv3CPUState *c = arg.host_ptr;\n+    hv_vcpu_t vcpu = c->cpu->accel->fd;\n+\n+    hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_VMCR_EL2, c->ich_vmcr_el2);\n+    hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_HCR_EL2, c->ich_hcr_el2);\n+\n+    for (int i = 0; i < GICV3_LR_MAX; i++) {\n+        hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_LR0_EL2 + i, 
c->ich_lr_el2[i]);\n+    }\n+\n+    num_pri_bits = c->vpribits;\n+\n+    switch (num_pri_bits) {\n+    case 7:\n+      hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2 + 3,\n+                         c->ich_apr[GICV3_G0][3]);\n+      hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2 + 2,\n+                         c->ich_apr[GICV3_G0][2]);\n+      /* fall through */\n+    case 6:\n+      hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2 + 1,\n+                         c->ich_apr[GICV3_G0][1]);\n+      /* fall through */\n+    default:\n+      hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2,\n+                         c->ich_apr[GICV3_G0][0]);\n+    }\n+\n+    switch (num_pri_bits) {\n+    case 7:\n+      hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2 + 3,\n+                         c->ich_apr[GICV3_G1NS][3]);\n+      hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2 + 2,\n+                         c->ich_apr[GICV3_G1NS][2]);\n+      /* fall through */\n+    case 6:\n+      hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2 + 1,\n+                         c->ich_apr[GICV3_G1NS][1]);\n+      /* fall through */\n+    default:\n+      hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2,\n+                         c->ich_apr[GICV3_G1NS][0]);\n+    }\n+}\n+\n+static void hvf_gicv3_put_cpu(CPUState *cpu_state, run_on_cpu_data arg)\n+{\n+    uint32_t reg;\n+    uint64_t reg64;\n+    int i, num_pri_bits;\n+\n+    /* Redistributor state */\n+    GICv3CPUState *c = arg.host_ptr;\n+    hv_vcpu_t vcpu = c->cpu->accel->fd;\n+\n+    reg = c->gicr_waker;\n+    hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_IGROUPR0, reg);\n+\n+    reg = c->gicr_igroupr0;\n+    hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_IGROUPR0, reg);\n+\n+    reg = ~0;\n+    hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ICENABLER0, reg);\n+    reg = c->gicr_ienabler0;\n+    hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ISENABLER0, 
reg);\n+\n+    /* Restore config before pending so we treat level/edge correctly */\n+    reg = half_shuffle32(c->edge_trigger >> 16) << 1;\n+    hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ICFGR1, reg);\n+\n+    reg = ~0;\n+    hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ICPENDR0, reg);\n+    reg = c->gicr_ipendr0;\n+    hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ISPENDR0, reg);\n+\n+    reg = ~0;\n+    hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ICACTIVER0, reg);\n+    reg = c->gicr_iactiver0;\n+    hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ISACTIVER0, reg);\n+\n+    for (i = 0; i < GIC_INTERNAL; i += 4) {\n+        reg = c->gicr_ipriorityr[i] |\n+            (c->gicr_ipriorityr[i + 1] << 8) |\n+            (c->gicr_ipriorityr[i + 2] << 16) |\n+            (c->gicr_ipriorityr[i + 3] << 24);\n+        hv_gic_set_redistributor_reg(vcpu,\n+            HV_GIC_REDISTRIBUTOR_REG_GICR_IPRIORITYR0 + i, reg);\n+    }\n+\n+    /* CPU interface state */\n+    hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_SRE_EL1, c->icc_sre_el1);\n+\n+    hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_CTLR_EL1,\n+                    c->icc_ctlr_el1[GICV3_NS]);\n+    hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_IGRPEN0_EL1,\n+                    c->icc_igrpen[GICV3_G0]);\n+    hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_IGRPEN1_EL1,\n+                    c->icc_igrpen[GICV3_G1NS]);\n+    hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_PMR_EL1, c->icc_pmr_el1);\n+    hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_BPR0_EL1, c->icc_bpr[GICV3_G0]);\n+    hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_BPR1_EL1, c->icc_bpr[GICV3_G1NS]);\n+\n+    num_pri_bits = ((c->icc_ctlr_el1[GICV3_NS] &\n+                    ICC_CTLR_EL1_PRIBITS_MASK) >>\n+                    ICC_CTLR_EL1_PRIBITS_SHIFT) + 1;\n+\n+    switch (num_pri_bits) {\n+    case 7:\n+        reg64 = c->icc_apr[GICV3_G0][3];\n+        hv_gic_set_icc_reg(vcpu, 
HV_GIC_ICC_REG_AP0R0_EL1 + 3, reg64);\n+        reg64 = c->icc_apr[GICV3_G0][2];\n+        hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1 + 2, reg64);\n+        /* fall through */\n+    case 6:\n+        reg64 = c->icc_apr[GICV3_G0][1];\n+        hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1 + 1, reg64);\n+        /* fall through */\n+    default:\n+        reg64 = c->icc_apr[GICV3_G0][0];\n+        hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1, reg64);\n+    }\n+\n+    switch (num_pri_bits) {\n+    case 7:\n+        reg64 = c->icc_apr[GICV3_G1NS][3];\n+        hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1 + 3, reg64);\n+        reg64 = c->icc_apr[GICV3_G1NS][2];\n+        hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1 + 2, reg64);\n+        /* fall through */\n+    case 6:\n+        reg64 = c->icc_apr[GICV3_G1NS][1];\n+        hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1 + 1, reg64);\n+        /* fall through */\n+    default:\n+        reg64 = c->icc_apr[GICV3_G1NS][0];\n+        hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1, reg64);\n+    }\n+\n+    /* Registers beyond this point are with nested virt only */\n+    if (c->gic->maint_irq) {\n+        hvf_gicv3_put_cpu_el2(cpu_state, arg);\n+    }\n+}\n+\n+static void hvf_gicv3_put(GICv3State *s)\n+{\n+    uint32_t reg;\n+    int ncpu, i;\n+\n+    hvf_gicv3_check(s);\n+\n+    reg = s->gicd_ctlr;\n+    hv_gic_set_distributor_reg(HV_GIC_DISTRIBUTOR_REG_GICD_CTLR, reg);\n+\n+    /* per-CPU state */\n+\n+    for (ncpu = 0; ncpu < s->num_cpu; ncpu++) {\n+        run_on_cpu_data data;\n+        data.host_ptr = &s->cpu[ncpu];\n+        run_on_cpu(s->cpu[ncpu].cpu, hvf_gicv3_put_cpu, data);\n+    }\n+\n+    /* s->enable bitmap -> GICD_ISENABLERn */\n+    hvf_dist_putbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_ISENABLER0,\n+        HV_GIC_DISTRIBUTOR_REG_GICD_ICENABLER0, s->enabled);\n+\n+    /* s->group bitmap -> GICD_IGROUPRn */\n+    hvf_dist_putbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_IGROUPR0,\n+        0, 
s->group);\n+\n+    /* Restore targets before pending to ensure the pending state is set on\n+     * the appropriate CPU interfaces in HVF\n+     */\n+\n+    /* s->gicd_irouter[irq] -> GICD_IROUTERn */\n+    for (i = GIC_INTERNAL; i < s->num_irq; i++) {\n+        uint32_t offset = HV_GIC_DISTRIBUTOR_REG_GICD_IROUTER32 + (8 * i)\n+            - (8 * GIC_INTERNAL);\n+        hv_gic_set_distributor_reg(offset, s->gicd_irouter[i]);\n+    }\n+\n+    /*\n+     * s->trigger bitmap -> GICD_ICFGRn\n+     * (restore configuration registers before pending IRQs so we treat\n+     * level/edge correctly)\n+     */\n+    hvf_dist_put_edge_trigger(s, HV_GIC_DISTRIBUTOR_REG_GICD_ICFGR0, s->edge_trigger);\n+\n+    /* s->pending bitmap -> GICD_ISPENDRn */\n+    hvf_dist_putbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_ISPENDR0,\n+        HV_GIC_DISTRIBUTOR_REG_GICD_ICPENDR0, s->pending);\n+\n+    /* s->active bitmap -> GICD_ISACTIVERn */\n+    hvf_dist_putbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_ISACTIVER0,\n+        HV_GIC_DISTRIBUTOR_REG_GICD_ICACTIVER0, s->active);\n+\n+    /* s->gicd_ipriority[] -> GICD_IPRIORITYRn */\n+    hvf_dist_put_priority(s, HV_GIC_DISTRIBUTOR_REG_GICD_IPRIORITYR0, s->gicd_ipriority);\n+}\n+\n+static void hvf_gicv3_get_cpu_el2(CPUState *cpu_state, run_on_cpu_data arg)\n+{\n+    int num_pri_bits;\n+\n+    /* EL2 CPU interface state */\n+    GICv3CPUState *c = arg.host_ptr;\n+    hv_vcpu_t vcpu = c->cpu->accel->fd;\n+\n+    hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_VMCR_EL2, &c->ich_vmcr_el2);\n+    hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_HCR_EL2, &c->ich_hcr_el2);\n+\n+    for (int i = 0; i < GICV3_LR_MAX; i++) {\n+        hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_LR0_EL2 + i, &c->ich_lr_el2[i]);\n+    }\n+\n+    num_pri_bits = c->vpribits;\n+\n+    switch (num_pri_bits) {\n+    case 7:\n+      hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2 + 3,\n+                         &c->ich_apr[GICV3_G0][3]);\n+      hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2 + 2,\n+             
            &c->ich_apr[GICV3_G0][2]);\n+      /* fall through */\n+    case 6:\n+      hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2 + 1,\n+                         &c->ich_apr[GICV3_G0][1]);\n+      /* fall through */\n+    default:\n+      hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2,\n+                         &c->ich_apr[GICV3_G0][0]);\n+    }\n+\n+    switch (num_pri_bits) {\n+    case 7:\n+      hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2 + 3,\n+                         &c->ich_apr[GICV3_G1NS][3]);\n+      hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2 + 2,\n+                         &c->ich_apr[GICV3_G1NS][2]);\n+      /* fall through */\n+    case 6:\n+      hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2 + 1,\n+                         &c->ich_apr[GICV3_G1NS][1]);\n+      /* fall through */\n+    default:\n+      hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2,\n+                         &c->ich_apr[GICV3_G1NS][0]);\n+    }\n+}\n+\n+static void hvf_gicv3_get_cpu(CPUState *cpu_state, run_on_cpu_data arg)\n+{\n+    uint64_t reg;\n+    int i, num_pri_bits;\n+\n+    /* Redistributor state */\n+    GICv3CPUState *c = arg.host_ptr;\n+    hv_vcpu_t vcpu = c->cpu->accel->fd;\n+\n+    hv_gic_get_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_IGROUPR0,\n+                                 &reg);\n+    c->gicr_igroupr0 = reg;\n+    hv_gic_get_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ISENABLER0,\n+                                 &reg);\n+    c->gicr_ienabler0 = reg;\n+    hv_gic_get_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ICFGR1,\n+                                 &reg);\n+    c->edge_trigger = half_unshuffle32(reg >> 1) << 16;\n+    hv_gic_get_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ISPENDR0,\n+                                 &reg);\n+    c->gicr_ipendr0 = reg;\n+    hv_gic_get_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ISACTIVER0,\n+                                 &reg);\n+    
c->gicr_iactiver0 = reg;\n+\n+    for (i = 0; i < GIC_INTERNAL; i += 4) {\n+        hv_gic_get_redistributor_reg(\n+          vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_IPRIORITYR0 + i, &reg);\n+        c->gicr_ipriorityr[i] = extract32(reg, 0, 8);\n+        c->gicr_ipriorityr[i + 1] = extract32(reg, 8, 8);\n+        c->gicr_ipriorityr[i + 2] = extract32(reg, 16, 8);\n+        c->gicr_ipriorityr[i + 3] = extract32(reg, 24, 8);\n+    }\n+\n+    /* CPU interface */\n+    hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_SRE_EL1, &c->icc_sre_el1);\n+\n+    hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_CTLR_EL1,\n+                       &c->icc_ctlr_el1[GICV3_NS]);\n+    hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_IGRPEN0_EL1,\n+                       &c->icc_igrpen[GICV3_G0]);\n+    hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_IGRPEN1_EL1,\n+                       &c->icc_igrpen[GICV3_G1NS]);\n+    hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_PMR_EL1, &c->icc_pmr_el1);\n+    hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_BPR0_EL1, &c->icc_bpr[GICV3_G0]);\n+    hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_BPR1_EL1, &c->icc_bpr[GICV3_G1NS]);\n+    num_pri_bits = ((c->icc_ctlr_el1[GICV3_NS] & ICC_CTLR_EL1_PRIBITS_MASK) >>\n+                    ICC_CTLR_EL1_PRIBITS_SHIFT) +\n+                   1;\n+\n+    switch (num_pri_bits) {\n+    case 7:\n+      hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1 + 3,\n+                         &c->icc_apr[GICV3_G0][3]);\n+      hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1 + 2,\n+                         &c->icc_apr[GICV3_G0][2]);\n+      /* fall through */\n+    case 6:\n+      hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1 + 1,\n+                         &c->icc_apr[GICV3_G0][1]);\n+      /* fall through */\n+    default:\n+      hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1,\n+                         &c->icc_apr[GICV3_G0][0]);\n+    }\n+\n+    switch (num_pri_bits) {\n+    case 7:\n+      hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1 + 3,\n+                       
  &c->icc_apr[GICV3_G1NS][3]);\n+      hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1 + 2,\n+                         &c->icc_apr[GICV3_G1NS][2]);\n+      /* fall through */\n+    case 6:\n+      hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1 + 1,\n+                         &c->icc_apr[GICV3_G1NS][1]);\n+      /* fall through */\n+    default:\n+      hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1,\n+                         &c->icc_apr[GICV3_G1NS][0]);\n+    }\n+\n+    /* Registers beyond this point are with nested virt only */\n+    if (c->gic->maint_irq) {\n+        hvf_gicv3_get_cpu_el2(cpu_state, arg);\n+    }\n+}\n+\n+static void hvf_gicv3_get(GICv3State *s)\n+{\n+    uint64_t reg;\n+    int ncpu, i;\n+\n+    hvf_gicv3_check(s);\n+\n+    hv_gic_get_distributor_reg(HV_GIC_DISTRIBUTOR_REG_GICD_CTLR, &reg);\n+    s->gicd_ctlr = reg;\n+\n+    /* Redistributor state (one per CPU) */\n+\n+    for (ncpu = 0; ncpu < s->num_cpu; ncpu++) {\n+        run_on_cpu_data data;\n+        data.host_ptr = &s->cpu[ncpu];\n+        run_on_cpu(s->cpu[ncpu].cpu, hvf_gicv3_get_cpu, data);\n+    }\n+\n+    /* GICD_IGROUPRn -> s->group bitmap */\n+    hvf_dist_getbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_IGROUPR0, s->group);\n+\n+    /* GICD_ISENABLERn -> s->enabled bitmap */\n+    hvf_dist_getbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_ISENABLER0, s->enabled);\n+\n+    /* GICD_ISPENDRn -> s->pending bitmap */\n+    hvf_dist_getbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_ISPENDR0, s->pending);\n+\n+    /* GICD_ISACTIVERn -> s->active bitmap */\n+    hvf_dist_getbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_ISACTIVER0, s->active);\n+\n+    /* GICD_ICFGRn -> s->trigger bitmap */\n+    hvf_dist_get_edge_trigger(s, HV_GIC_DISTRIBUTOR_REG_GICD_ICFGR0,\n+        s->edge_trigger);\n+\n+    /* GICD_IPRIORITYRn -> s->gicd_ipriority[] */\n+    hvf_dist_get_priority(s, HV_GIC_DISTRIBUTOR_REG_GICD_IPRIORITYR0,\n+        s->gicd_ipriority);\n+\n+    /* GICD_IROUTERn -> s->gicd_irouter[irq] */\n+    for (i = GIC_INTERNAL; i 
< s->num_irq; i++) {\n+        uint32_t offset = HV_GIC_DISTRIBUTOR_REG_GICD_IROUTER32\n+            + (8 * i) - (8 * GIC_INTERNAL);\n+        hv_gic_get_distributor_reg(offset, &s->gicd_irouter[i]);\n+    }\n+}\n+\n+static void hvf_gicv3_set_irq(void *opaque, int irq, int level)\n+{\n+    GICv3State *s = opaque;\n+    if (irq >= s->num_irq - GIC_INTERNAL) {\n+        return;\n+    }\n+    hv_gic_set_spi(GIC_INTERNAL + irq, !!level);\n+}\n+\n+static void hvf_gicv3_icc_reset(CPUARMState *env, const ARMCPRegInfo *ri)\n+{\n+    GICv3CPUState *c;\n+\n+    c = env->gicv3state;\n+    c->icc_pmr_el1 = 0;\n+    /*\n+     * Architecturally the reset value of the ICC_BPR registers\n+     * is UNKNOWN. We set them all to 0 here; when the hypervisor\n+     * uses these values to program the ICH_VMCR_EL2 fields that\n+     * determine the guest-visible ICC_BPR register values, the\n+     * hardware's \"writing a value less than the minimum sets\n+     * the field to the minimum value\" behaviour will result in\n+     * them effectively resetting to the correct minimum value\n+     * for the host GIC.\n+     */\n+    c->icc_bpr[GICV3_G0] = 0;\n+    c->icc_bpr[GICV3_G1] = 0;\n+    c->icc_bpr[GICV3_G1NS] = 0;\n+\n+    c->icc_sre_el1 = 0x7;\n+    memset(c->icc_apr, 0, sizeof(c->icc_apr));\n+    memset(c->icc_igrpen, 0, sizeof(c->icc_igrpen));\n+}\n+\n+static void hvf_gicv3_reset_hold(Object *obj, ResetType type)\n+{\n+    GICv3State *s = ARM_GICV3_COMMON(obj);\n+    HVFARMGICv3Class *kgc = HVF_GICV3_GET_CLASS(s);\n+\n+    if (kgc->parent_phases.hold) {\n+        kgc->parent_phases.hold(obj, type);\n+    }\n+\n+    hvf_gicv3_put(s);\n+}\n+\n+\n+/*\n+ * The GIC CPU interface registers need to be reset on CPU reset.\n+ * To have hvf_gicv3_icc_reset() called on CPU reset, we register\n+ * the ARMCPRegInfo below. 
As we reset the whole CPU interface in a single\n+ * register reset, we define only one CPU interface register instead\n+ * of defining all of them.\n+ */\n+static const ARMCPRegInfo gicv3_cpuif_reginfo[] = {\n+    { .name = \"ICC_CTLR_EL1\", .state = ARM_CP_STATE_BOTH,\n+      .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 12, .opc2 = 4,\n+      /*\n+       * If ARM_CP_NOP is used, resetfn is not called,\n+       * so ARM_CP_NO_RAW is the appropriate type.\n+       */\n+      .type = ARM_CP_NO_RAW,\n+      .access = PL1_RW,\n+      .readfn = arm_cp_read_zero,\n+      .writefn = arm_cp_write_ignore,\n+      /*\n+       * We hang the whole cpu interface reset routine off here\n+       * rather than parcelling it out into one little function\n+       * per register.\n+       */\n+      .resetfn = hvf_gicv3_icc_reset,\n+    },\n+};\n+\n+static void hvf_gicv3_realize(DeviceState *dev, Error **errp)\n+{\n+    ERRP_GUARD();\n+    GICv3State *s = HVF_GICV3(dev);\n+    HVFARMGICv3Class *kgc = HVF_GICV3_GET_CLASS(s);\n+    int i;\n+\n+    kgc->parent_realize(dev, errp);\n+    if (*errp) {\n+        return;\n+    }\n+\n+    if (s->revision != 3) {\n+        error_setg(errp, \"unsupported GIC revision %d for platform GIC\", s->revision);\n+        return;\n+    }\n+\n+    if (s->security_extn) {\n+        error_setg(errp, \"the platform vGICv3 does not implement the \"\n+                   \"security extensions\");\n+        return;\n+    }\n+\n+    if (s->nmi_support) {\n+        error_setg(errp, \"NMI is not supported with the platform GIC\");\n+        return;\n+    }\n+\n+    if (s->nb_redist_regions > 1) {\n+        error_setg(errp, \"Multiple VGICv3 redistributor regions are not \"\n+                   \"supported by HVF\");\n+        error_append_hint(errp, \"A maximum of %d VCPUs can be used\\\\n\",\n+                          s->redist_region_count[0]);\n+        return;\n+    }\n+\n+    gicv3_init_irqs_and_mmio(s, hvf_gicv3_set_irq, NULL);\n+\n+    for (i = 0; i < 
s->num_cpu; i++) {\n+        ARMCPU *cpu = ARM_CPU(qemu_get_cpu(i));\n+\n+        define_arm_cp_regs(cpu, gicv3_cpuif_reginfo);\n+    }\n+\n+    if (s->maint_irq && s->maint_irq != HV_GIC_INT_MAINTENANCE) {\n+        error_setg(errp, \"vGIC maintenance IRQ mismatch with the hardcoded one in HVF.\");\n+        return;\n+    }\n+}\n+\n+static void hvf_gicv3_class_init(ObjectClass *klass, const void *data)\n+{\n+    DeviceClass *dc = DEVICE_CLASS(klass);\n+    ResettableClass *rc = RESETTABLE_CLASS(klass);\n+    ARMGICv3CommonClass *agcc = ARM_GICV3_COMMON_CLASS(klass);\n+    HVFARMGICv3Class *kgc = HVF_GICV3_CLASS(klass);\n+\n+    agcc->pre_save = hvf_gicv3_get;\n+    agcc->post_load = hvf_gicv3_put;\n+\n+    device_class_set_parent_realize(dc, hvf_gicv3_realize,\n+                                    &kgc->parent_realize);\n+    resettable_class_set_parent_phases(rc, NULL, hvf_gicv3_reset_hold, NULL,\n+                                       &kgc->parent_phases);\n+}\n+\n+static const TypeInfo hvf_arm_gicv3_info = {\n+    .name = TYPE_HVF_GICV3,\n+    .parent = TYPE_ARM_GICV3_COMMON,\n+    .instance_size = sizeof(GICv3State),\n+    .class_init = hvf_gicv3_class_init,\n+    .class_size = sizeof(HVFARMGICv3Class),\n+};\n+\n+static void hvf_gicv3_register_types(void)\n+{\n+    type_register_static(&hvf_arm_gicv3_info);\n+}\n+\n+type_init(hvf_gicv3_register_types)\n+\n+#pragma clang diagnostic pop\ndiff --git a/hw/intc/meson.build b/hw/intc/meson.build\nindex 96742df090..b7baf8a0f6 100644\n--- a/hw/intc/meson.build\n+++ b/hw/intc/meson.build\n@@ -42,6 +42,7 @@ arm_common_ss.add(when: 'CONFIG_ARM_GIC', if_true: files('arm_gicv3_cpuif_common\n arm_common_ss.add(when: 'CONFIG_ARM_GICV3', if_true: files('arm_gicv3_cpuif.c'))\n specific_ss.add(when: 'CONFIG_ARM_GIC_KVM', if_true: files('arm_gic_kvm.c'))\n specific_ss.add(when: ['CONFIG_WHPX', 'TARGET_AARCH64'], if_true: files('arm_gicv3_whpx.c'))\n+specific_ss.add(when: ['CONFIG_HVF', 'CONFIG_ARM_GICV3'], if_true: 
files('arm_gicv3_hvf.c'))\n specific_ss.add(when: ['CONFIG_ARM_GIC_KVM', 'TARGET_AARCH64'], if_true: files('arm_gicv3_kvm.c', 'arm_gicv3_its_kvm.c'))\n arm_common_ss.add(when: 'CONFIG_ARM_V7M', if_true: files('armv7m_nvic.c'))\n specific_ss.add(when: 'CONFIG_GRLIB', if_true: files('grlib_irqmp.c'))\ndiff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h\nindex c55cf18120..9adcab0a0c 100644\n--- a/include/hw/intc/arm_gicv3_common.h\n+++ b/include/hw/intc/arm_gicv3_common.h\n@@ -315,6 +315,7 @@ DECLARE_OBJ_CHECKERS(GICv3State, ARMGICv3CommonClass,\n \n /* Types for GICv3 kernel-irqchip */\n #define TYPE_WHPX_GICV3 \"whpx-arm-gicv3\"\n+#define TYPE_HVF_GICV3 \"hvf-arm-gicv3\"\n \n struct ARMGICv3CommonClass {\n     /*< private >*/\n","prefixes":["v21","01/15"]}