[RFC] hw/display: add virtio-ramfb device

Message ID 20210309213513.12925-1-j@getutm.app
State New
Series [RFC] hw/display: add virtio-ramfb device

Commit Message

Joelle van Dyne March 9, 2021, 9:35 p.m. UTC
Like virtio-vga, but using ramfb instead of legacy vga.
Useful for booting from OVMF (with updated drivers) into Windows
ARM which expects a linear FB that the virtio-gpu driver in OVMF
does not provide.
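
Illustrative invocation (machine and firmware options elided):

    qemu-system-aarch64 -M virt ... -device virtio-ramfb ...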

This code was originally written by Gerd Hoffmann and was
updated to contain later changes to the display driver tree.

Co-authored-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Joelle van Dyne <j@getutm.app>
---
 hw/display/virtio-ramfb.c | 188 ++++++++++++++++++++++++++++++++++++++
 hw/display/meson.build    |   1 +
 2 files changed, 189 insertions(+)
 create mode 100644 hw/display/virtio-ramfb.c

Comments

Gerd Hoffmann March 10, 2021, 9:51 a.m. UTC | #1
On Tue, Mar 09, 2021 at 01:35:13PM -0800, Joelle van Dyne wrote:
> Like virtio-vga, but using ramfb instead of legacy vga.
> Useful for booting from OVMF (with updated drivers) into Windows
> ARM which expects a linear FB that the virtio-gpu driver in OVMF
> does not provide.

What is wrong with "-device ramfb" for that use case?

> This code was originally written by Gerd Hoffmann and was
> updated to contain later changes to the display driver tree.

Well, the tricky part with that is OVMF driver binding.  We don't
want two drivers to bind to the same device.

We have QemuRamfbDxe + QemuVideoDxe + VirtioGpuDxe.

 - QemuRamfbDxe handles ramfb.
 - QemuVideoDxe handles all vga devices (including virtio-vga)
   plus bochs-display.
 - VirtioGpuDxe handles all virtio-gpu devices (except virtio-vga).

VirtioGpuDxe could easily handle virtio-vga too but doesn't to avoid
the conflict with QemuVideoDxe.  It detects that by looking at the pci
class code.  virtio-vga is tagged as display/vga whereas virtio-gpu-pci
is display/other.

The problem with the virtio-ramfb device is that the guest can't tell
whether the virtio-gpu device comes with ramfb support or not.
Merging this is a non-starter unless we have a solution for that
problem.

A use case which actually needs that would be helpful to drive that
effort.  I don't see one.  If your guest comes with virtio-gpu drivers
you don't need ramfb support.  The VirtioGpuDxe driver covers the boot
loader, and the guest driver everything else.  If your guest has no
virtio-gpu drivers the virtio-ramfb combo device is useless, you can
simply use standalone ramfb instead.
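
(Concretely -- an illustrative invocation, with the remaining machine and
firmware options elided:

    qemu-system-aarch64 -M virt ... -device ramfb ...

no combo device needed.)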

take care,
  Gerd
Laszlo Ersek March 10, 2021, 12:45 p.m. UTC | #2
On 03/10/21 10:51, Gerd Hoffmann wrote:
> On Tue, Mar 09, 2021 at 01:35:13PM -0800, Joelle van Dyne wrote:
>> Like virtio-vga, but using ramfb instead of legacy vga.
>> Useful for booting from OVMF (with updated drivers) into Windows
>> ARM which expects a linear FB that the virtio-gpu driver in OVMF
>> does not provide.
> 
> What is wrong with "-device ramfb" for that use case?
> 
>> This code was originally written by Gerd Hoffmann and was
>> updated to contain later changes to the display driver tree.
> 
> Well, the tricky part with that is OVMF driver binding.  We don't
> want two drivers to bind to the same device.
> 
> We have QemuRamfbDxe + QemuVideoDxe + VirtioGpuDxe.
> 
>  - QemuRamfbDxe handles ramfb.
>  - QemuVideoDxe handles all vga devices (including virtio-vga)
>    plus bochs-display.
>  - VirtioGpuDxe handles all virtio-gpu devices (except virtio-vga).
> 
> VirtioGpuDxe could easily handle virtio-vga too but doesn't to avoid
> the conflict with QemuVideoDxe.  It detects that by looking at the pci
> class code.  virtio-vga is tagged as display/vga whereas virtio-gpu-pci
> is display/other.
> 
> The problem with the virtio-ramfb device is that the guest can't tell
> whether the virtio-gpu device comes with ramfb support or not.
> Merging this is a non-starter unless we have a solution for that
> problem.
> 
> A use case which actually needs that would be helpful to drive that
> effort.  I don't see one.  If your guest comes with virtio-gpu drivers
> you don't need ramfb support.  The VirtioGpuDxe driver covers the boot
> loader, and the guest driver everything else.  If your guest has no
> virtio-gpu drivers the virtio-ramfb combo device is useless, you can
> simply use standalone ramfb instead.

Thanks for the CC and the summary, and I agree.


Two (tangential) additions:

- The arbitration between VirtioGpuDxe and QemuVideoDxe, on a virtio-vga
device, actually happens in Virtio10Dxe (the virtio-1.0 transport
driver). When Virtio10Dxe recognizes virtio-vga, it does not expose it
as a virtio device at all.

The reason for this is that VirtioGpuDxe consumes VIRTIO_DEVICE_PROTOCOL
(does not deal with PCI [*]), and QemuVideoDxe consumes
EFI_PCI_IO_PROTOCOL (does not deal with virtio). Therefore, the
arbitration needs to happen in a layer that deals with both of those
abstractions at the same time; and that's the virtio transport driver,
which produces VIRTIO_DEVICE_PROTOCOL on top of EFI_PCI_IO_PROTOCOL.

[*] I'm lying a bit here; it does consume PciIo momentarily. But, for
nothing relevant to the UEFI driver model. VirtioGpuDxe *attempts* using
PciIo for formatting the human-readable device name, with the B/D/F in
it; but even if that fails, the driver works just fine (with a less
detailed human-readable device name).

- QemuRamfbDxe is a platform DXE driver, not a UEFI driver that follows
the UEFI driver model. The reason is that the fw_cfg stuff underlying
ramfb is a "platform device" (a singleton one at that), not an
enumerable device.


So, if you combined ramfb with any other (enumerable) display device
into a single device, that would allow the QemuRamfbDxe platform driver
and the other (UEFI) driver to bind the *same display hardware* via
different interfaces at the same time.

And arbitrating between such drivers is practically impossible without
violating the UEFI driver model: first, the QemuRamfbDxe platform driver
has no way of knowing whether the same display hardware qualifies for
the other (UEFI) driver through PCI (or another enumerable interface);
second, the other (UEFI) driver would have to check for a platform
device (fw_cfg in this case), which is *wrong*. (Consider e.g. what
happens if we have multiple (separate) PCI-based display devices, plus
one ramfb device -- if ramfb were allowed to share the underlying
hardware with one of the PCI ones, how would we tell which PCI device
the ramfb device belonged to?)

(... In fact, the second argument is akin to why I keep rejecting
various manifestations of a GVT-g driver for OVMF -- refer to
<https://bugzilla.tianocore.org/show_bug.cgi?id=935>. Due to the
opregion being based on fw_cfg, the hardware itself is a fusion of a PCI
device and a platform device -- and that's wrong for both a platform DXE
driver, and a UEFI driver that follows the UEFI driver model. It's not
that the driver is impossible to implement (three variants have been
written already, mutually independently of each other), but that any
such driver involves a layering / driver model violation one way or
another. But, I digress.)

Thanks
Laszlo
Joelle van Dyne March 10, 2021, 4:42 p.m. UTC | #3
On Wed, Mar 10, 2021 at 4:45 AM Laszlo Ersek <lersek@redhat.com> wrote:
>
> On 03/10/21 10:51, Gerd Hoffmann wrote:
> > On Tue, Mar 09, 2021 at 01:35:13PM -0800, Joelle van Dyne wrote:
> >> Like virtio-vga, but using ramfb instead of legacy vga.
> >> Useful for booting from OVMF (with updated drivers) into Windows
> >> ARM which expects a linear FB that the virtio-gpu driver in OVMF
> >> does not provide.
> >
> > What is wrong with "-device ramfb" for that use case?
> >
> >> This code was originally written by Gerd Hoffmann and was
> >> updated to contain later changes to the display driver tree.
> >
> > Well, the tricky part with that is OVMF driver binding.  We don't
> > want two drivers to bind to the same device.
> >
> > We have QemuRamfbDxe + QemuVideoDxe + VirtioGpuDxe.
> >
> >  - QemuRamfbDxe handles ramfb.
> >  - QemuVideoDxe handles all vga devices (including virtio-vga)
> >    plus bochs-display.
> >  - VirtioGpuDxe handles all virtio-gpu devices (except virtio-vga).
> >
> > VirtioGpuDxe could easily handle virtio-vga too but doesn't to avoid
> > the conflict with QemuVideoDxe.  It detects that by looking at the pci
> > class code.  virtio-vga is tagged as display/vga whereas virtio-gpu-pci
> > is display/other.
> >
> > The problem with the virtio-ramfb device is that the guest can't tell
> > whether the virtio-gpu device comes with ramfb support or not.
> > Merging this is a non-starter unless we have a solution for that
> > problem.
> >
> > A use case which actually needs that would be helpful to drive that
> > effort.  I don't see one.  If your guest comes with virtio-gpu drivers
> > you don't need ramfb support.  The VirtioGpuDxe driver covers the boot
> > loader, and the guest driver everything else.  If your guest has no
> > virtio-gpu drivers the virtio-ramfb combo device is useless, you can
> > simply use standalone ramfb instead.
>
> Thanks for the CC and the summary, and I agree.
>
>
> Two (tangential) additions:
>
> - The arbitration between VirtioGpuDxe and QemuVideoDxe, on a virtio-vga
> device, actually happens in Virtio10Dxe (the virtio-1.0 transport
> driver). When Virtio10Dxe recognizes virtio-vga, it does not expose it
> as a virtio device at all.
>
> The reason for this is that VirtioGpuDxe consumes VIRTIO_DEVICE_PROTOCOL
> (does not deal with PCI [*]), and QemuVideoDxe consumes
> EFI_PCI_IO_PROTOCOL (does not deal with virtio). Therefore, the
> arbitration needs to happen in a layer that deals with both of those
> abstractions at the same time; and that's the virtio transport driver,
> which produces VIRTIO_DEVICE_PROTOCOL on top of EFI_PCI_IO_PROTOCOL.
>
> [*] I'm lying a bit here; it does consume PciIo momentarily. But, for
> nothing relevant to the UEFI driver model. VirtioGpuDxe *attempts* using
> PciIo for formatting the human-readable device name, with the B/D/F in
> it; but even if that fails, the driver works just fine (with a less
> detailed human-readable device name).
>
> - QemuRamfbDxe is a platform DXE driver, not a UEFI driver that follows
> the UEFI driver model. The reason is that the fw_cfg stuff underlying
> ramfb is a "platform device" (a singleton one at that), not an
> enumerable device.
>
>
> So, if you combined ramfb with any other (enumerable) display device
> into a single device, that would allow the QemuRamfbDxe platform driver
> and the other (UEFI) driver to bind the *same display hardware* via
> different interfaces at the same time.
>
> And arbitrating between such drivers is practically impossible without
> violating the UEFI driver model: first, the QemuRamfbDxe platform driver
> has no way of knowing whether the same display hardware qualifies for
> the other (UEFI) driver through PCI (or another enumerable interface);
> second, the other (UEFI) driver would have to check for a platform
> device (fw_cfg in this case), which is *wrong*. (Consider e.g. what
> happens if we have multiple (separate) PCI-based display devices, plus
> one ramfb device -- if ramfb were allowed to share the underlying
> hardware with one of the PCI ones, how would we tell which PCI device
> the ramfb device belonged to?)
>
> (... In fact, the second argument is akin to why I keep rejecting
> various manifestations of a GVT-g driver for OVMF -- refer to
> <https://bugzilla.tianocore.org/show_bug.cgi?id=935>. Due to the
> opregion being based on fw_cfg, the hardware itself is a fusion of a PCI
> device and a platform device -- and that's wrong for both a platform DXE
> driver, and a UEFI driver that follows the UEFI driver model. It's not
> that the driver is impossible to implement (three variants have been
> written already, mutually independently of each other), but that any
> such driver involves a layering / driver model violation one way or
> another. But, I digress.)
>
> Thanks
> Laszlo
>

Thanks for the feedback, Laszlo and Gerd. To avoid the XY problem
here, what I am trying to solve is that currently there is no good way
to boot into Windows ARM with virtio-gpu without using ramfb first to
install the drivers. The only solutions I can think of are:

* Implement a linear FB in virtio-gpu
* Hack in ramfb in virtio-gpu

And the second one seems easier. But perhaps I'm missing some other solutions?

-j
Laszlo Ersek March 10, 2021, 7:39 p.m. UTC | #4
On 03/10/21 17:42, Joelle van Dyne wrote:
> On Wed, Mar 10, 2021 at 4:45 AM Laszlo Ersek <lersek@redhat.com> wrote:
>>
>> On 03/10/21 10:51, Gerd Hoffmann wrote:
>>> On Tue, Mar 09, 2021 at 01:35:13PM -0800, Joelle van Dyne wrote:
>>>> Like virtio-vga, but using ramfb instead of legacy vga.
>>>> Useful for booting from OVMF (with updated drivers) into Windows
>>>> ARM which expects a linear FB that the virtio-gpu driver in OVMF
>>>> does not provide.
>>>
>>> What is wrong with "-device ramfb" for that use case?
>>>
>>>> This code was originally written by Gerd Hoffmann and was
>>>> updated to contain later changes to the display driver tree.
>>>
>>> Well, the tricky part with that is OVMF driver binding.  We don't
>>> want two drivers to bind to the same device.
>>>
>>> We have QemuRamfbDxe + QemuVideoDxe + VirtioGpuDxe.
>>>
>>>  - QemuRamfbDxe handles ramfb.
>>>  - QemuVideoDxe handles all vga devices (including virtio-vga)
>>>    plus bochs-display.
>>>  - VirtioGpuDxe handles all virtio-gpu devices (except virtio-vga).
>>>
>>> VirtioGpuDxe could easily handle virtio-vga too but doesn't to avoid
>>> the conflict with QemuVideoDxe.  It detects that by looking at the pci
>>> class code.  virtio-vga is tagged as display/vga whereas virtio-gpu-pci
>>> is display/other.
>>>
>>> The problem with the virtio-ramfb device is that the guest can't tell
>>> whether the virtio-gpu device comes with ramfb support or not.
>>> Merging this is a non-starter unless we have a solution for that
>>> problem.
>>>
>>> A use case which actually needs that would be helpful to drive that
>>> effort.  I don't see one.  If your guest comes with virtio-gpu drivers
>>> you don't need ramfb support.  The VirtioGpuDxe driver covers the boot
>>> loader, and the guest driver everything else.  If your guest has no
>>> virtio-gpu drivers the virtio-ramfb combo device is useless, you can
>>> simply use standalone ramfb instead.
>>
>> Thanks for the CC and the summary, and I agree.
>>
>>
>> Two (tangential) additions:
>>
>> - The arbitration between VirtioGpuDxe and QemuVideoDxe, on a virtio-vga
>> device, actually happens in Virtio10Dxe (the virtio-1.0 transport
>> driver). When Virtio10Dxe recognizes virtio-vga, it does not expose it
>> as a virtio device at all.
>>
>> The reason for this is that VirtioGpuDxe consumes VIRTIO_DEVICE_PROTOCOL
>> (does not deal with PCI [*]), and QemuVideoDxe consumes
>> EFI_PCI_IO_PROTOCOL (does not deal with virtio). Therefore, the
>> arbitration needs to happen in a layer that deals with both of those
>> abstractions at the same time; and that's the virtio transport driver,
>> which produces VIRTIO_DEVICE_PROTOCOL on top of EFI_PCI_IO_PROTOCOL.
>>
>> [*] I'm lying a bit here; it does consume PciIo momentarily. But, for
>> nothing relevant to the UEFI driver model. VirtioGpuDxe *attempts* using
>> PciIo for formatting the human-readable device name, with the B/D/F in
>> it; but even if that fails, the driver works just fine (with a less
>> detailed human-readable device name).
>>
>> - QemuRamfbDxe is a platform DXE driver, not a UEFI driver that follows
>> the UEFI driver model. The reason is that the fw_cfg stuff underlying
>> ramfb is a "platform device" (a singleton one at that), not an
>> enumerable device.
>>
>>
>> So, if you combined ramfb with any other (enumerable) display device
>> into a single device, that would allow the QemuRamfbDxe platform driver
>> and the other (UEFI) driver to bind the *same display hardware* via
>> different interfaces at the same time.
>>
>> And arbitrating between such drivers is practically impossible without
>> violating the UEFI driver model: first, the QemuRamfbDxe platform driver
>> has no way of knowing whether the same display hardware qualifies for
>> the other (UEFI) driver through PCI (or another enumerable interface);
>> second, the other (UEFI) driver would have to check for a platform
>> device (fw_cfg in this case), which is *wrong*. (Consider e.g. what
>> happens if we have multiple (separate) PCI-based display devices, plus
>> one ramfb device -- if ramfb were allowed to share the underlying
>> hardware with one of the PCI ones, how would we tell which PCI device
>> the ramfb device belonged to?)
>>
>> (... In fact, the second argument is akin to why I keep rejecting
>> various manifestations of a GVT-g driver for OVMF -- refer to
>> <https://bugzilla.tianocore.org/show_bug.cgi?id=935>. Due to the
>> opregion being based on fw_cfg, the hardware itself is a fusion of a PCI
>> device and a platform device -- and that's wrong for both a platform DXE
>> driver, and a UEFI driver that follows the UEFI driver model. It's not
>> that the driver is impossible to implement (three variants have been
>> written already, mutually independently of each other), but that any
>> such driver involves a layering / driver model violation one way or
>> another. But, I digress.)
>>
>> Thanks
>> Laszlo
>>
> 
> Thanks for the feedback, Laszlo and Gerd. To avoid the XY problem
> here, what I am trying to solve is that currently there is no good way
> to boot into Windows ARM with virtio-gpu without using ramfb first to
> install the drivers. The only solutions I can think of are:
> 
> * Implement a linear FB in virtio-gpu
> * Hack in ramfb in virtio-gpu
> 
> And the second one seems easier. But perhaps I'm missing some other solutions?

The situation is similar to setting up an x86 Windows guest with an
assigned (discrete) PCIe GPU. At first, one installs the guest with VGA
or QXL (the assigned GPU may or may not be present in the domain config
already). Then the native GPU drivers are installed (usually from
separate media, such as an ISO image). Finally, the in-guest displays
are reconfigured to make the assigned GPU the primary one, and/or the
domain config is modified to remove the VGA (or QXL) device altogether.
(This assumes that the assigned GPU comes with an x86 UEFI GOP driver in
its option ROM.)

In other words, it's fine to have two *separate* pieces of display
hardware (such as two graphics windows), temporarily, until the guest
setup is complete. Similarly to how, in the above GPU assignment
scenario, there is a short time when the x86 Windows guest runs in a
kind of "multi-head" setup (a VGA or QXL display window, and a separate,
dedicated physical monitor output).

So it should be fine to specify both "-device ramfb" and "-device
virtio-gpu-pci" on the QEMU command line. Each firmware driver will bind
its matching device, and the firmware graphics console will be multiplexed
to both. When Windows starts, only ramfb will be useful (the Basic
Display Adapter driver should drive it), until virtio-gpu drivers are
manually installed.
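
Roughly like this (illustrative; machine, firmware and display-backend
options elided):

    qemu-system-aarch64 -M virt ... \
        -device ramfb \
        -device virtio-gpu-pci \
        ...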

If that's inconvenient, then the virtio-gpu driver has to be integrated
into the Windows install media. I vaguely recall that tweaking
"boot.wim" or whatever is (allegedly) possible.

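(If memory serves, the commonly described recipe uses DISM -- untested
here, and the image index and driver path below are placeholders:

    dism /Mount-Image /ImageFile:boot.wim /Index:1 /MountDir:C:\mount
    dism /Image:C:\mount /Add-Driver /Driver:D:\viogpu /Recurse
    dism /Unmount-Image /MountDir:C:\mount /Commit

-- but that's out of scope for QEMU anyway.)
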
(I saw the hacker news thread on "getutm.app"; I must admit I'm not at
all a fan of the approach highlighted on the website, "QEMU without the
headache", and "UTM was created for macOS and *only* for Apple
platforms", emphasis yours. I disagree that Apple user comfort justifies
contorting the QEMU device model and/or the UEFI driver model. That's
not to say that my opinion necessarily *matters* with regard to QEMU
patches, of course.)

With regard to adding a linear FB to virtio-gpu-pci -- that won't work;
if it worked, we'd just use (a variant of) virtio-vga. The problem is
that, with the framebuffer in a PCI MMIO BAR in the guest, the address
range is marked / mapped as "device memory" (uncacheable) from the guest
side, while it is still mapped as RAM from the host side. The way
aarch64 combines the stage1 and stage2 mappings differs from the x86
combination: on aarch64 (at least without using a special architecture
extension whose name I forget), the strictest mapping takes effect,
while on x86, the host-side mapping takes effect. So in the aarch64
case, framebuffer accesses in the guest go straight to host RAM
(avoiding CPU caches), but in QEMU, framebuffer accesses go through the
CPU caches. You get display corruption as a result. Ramfb avoids this
because the framebuffer in the guest is not mapped from a PCI MMIO BAR,
but from normal guest RAM. IOW, ramfb is a workaround for aarch64
"knowing better" how to combine stage1 and stage2 mappings than x86.

(Disclaimer: I can never remember how stage1 and stage2 map to "host"
vs. "guest"; sorry about that.)

In summary, I recommend using a multi-head setup temporarily (ramfb +
virtio-gpu-pci added as separate devices on the QEMU command line, with
matching separate display *backends*). Once virtio-gpu-pci is working
fine in the Windows guest, ramfb may be removed from the domain config
altogether.

Thanks,
Laszlo

Patch

diff --git a/hw/display/virtio-ramfb.c b/hw/display/virtio-ramfb.c
new file mode 100644
index 0000000000..d08bb90a14
--- /dev/null
+++ b/hw/display/virtio-ramfb.c
@@ -0,0 +1,188 @@ 
+#include "qemu/osdep.h"
+#include "hw/pci/pci.h"
+#include "ui/console.h"
+#include "hw/qdev-properties.h"
+#include "hw/virtio/virtio-gpu-pci.h"
+#include "qapi/error.h"
+#include "hw/display/ramfb.h"
+#include "qom/object.h"
+
+/*
+ * virtio-ramfb-base: This extends VirtioPCIProxy.
+ */
+#define TYPE_VIRTIO_RAMFB_BASE "virtio-ramfb-base"
+OBJECT_DECLARE_TYPE(VirtIORAMFBBase, VirtIORAMFBBaseClass,
+                    VIRTIO_RAMFB_BASE)
+
+struct VirtIORAMFBBase {
+    VirtIOPCIProxy parent_obj;
+
+    VirtIOGPUBase *vgpu;
+    RAMFBState    *ramfb;
+};
+
+struct VirtIORAMFBBaseClass {
+    VirtioPCIClass parent_class;
+
+    DeviceReset parent_reset;
+};
+
+static void virtio_ramfb_invalidate_display(void *opaque)
+{
+    VirtIORAMFBBase *vramfb = opaque;
+    VirtIOGPUBase *g = vramfb->vgpu;
+
+    if (g->enable) {
+        g->hw_ops->invalidate(g);
+    }
+}
+
+static void virtio_ramfb_update_display(void *opaque)
+{
+    VirtIORAMFBBase *vramfb = opaque;
+    VirtIOGPUBase *g = vramfb->vgpu;
+
+    if (g->enable) {
+        g->hw_ops->gfx_update(g);
+    } else {
+        ramfb_display_update(g->scanout[0].con, vramfb->ramfb);
+    }
+}
+
+static int virtio_ramfb_ui_info(void *opaque, uint32_t idx, QemuUIInfo *info)
+{
+    VirtIORAMFBBase *vramfb = opaque;
+    VirtIOGPUBase *g = vramfb->vgpu;
+
+    if (g->hw_ops->ui_info) {
+        return g->hw_ops->ui_info(g, idx, info);
+    }
+    return -1;
+}
+
+static void virtio_ramfb_gl_block(void *opaque, bool block)
+{
+    VirtIORAMFBBase *vramfb = opaque;
+    VirtIOGPUBase *g = vramfb->vgpu;
+
+    if (g->hw_ops->gl_block) {
+        g->hw_ops->gl_block(g, block);
+    }
+}
+
+static const GraphicHwOps virtio_ramfb_ops = {
+    .invalidate = virtio_ramfb_invalidate_display,
+    .gfx_update = virtio_ramfb_update_display,
+    .ui_info = virtio_ramfb_ui_info,
+    .gl_block = virtio_ramfb_gl_block,
+};
+
+static const VMStateDescription vmstate_virtio_ramfb = {
+    .name = "virtio-ramfb",
+    .version_id = 2,
+    .minimum_version_id = 2,
+    .fields = (VMStateField[]) {
+        /* no pci stuff here, saving the virtio device will handle that */
+        /* FIXME */
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+/* RAMFB device wrapper around PCI device around virtio GPU */
+static void virtio_ramfb_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
+{
+    VirtIORAMFBBase *vramfb = VIRTIO_RAMFB_BASE(vpci_dev);
+    VirtIOGPUBase *g = vramfb->vgpu;
+    int i;
+
+    /* init virtio bits */
+    virtio_pci_force_virtio_1(vpci_dev);
+    if (!qdev_realize(DEVICE(g), BUS(&vpci_dev->bus), errp)) {
+        return;
+    }
+
+    /* init ramfb */
+    vramfb->ramfb = ramfb_setup(errp);
+    graphic_console_set_hwops(g->scanout[0].con, &virtio_ramfb_ops, vramfb);
+
+    for (i = 0; i < g->conf.max_outputs; i++) {
+        object_property_set_link(OBJECT(g->scanout[i].con), "device",
+                                 OBJECT(vpci_dev), &error_abort);
+    }
+}
+
+static void virtio_ramfb_reset(DeviceState *dev)
+{
+    VirtIORAMFBBaseClass *klass = VIRTIO_RAMFB_BASE_GET_CLASS(dev);
+
+    /* reset virtio-gpu */
+    klass->parent_reset(dev);
+}
+
+static Property virtio_ramfb_base_properties[] = {
+    DEFINE_VIRTIO_GPU_PCI_PROPERTIES(VirtIOPCIProxy),
+    DEFINE_PROP_END_OF_LIST(),
+};
+
+static void virtio_ramfb_base_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+    VirtioPCIClass *k = VIRTIO_PCI_CLASS(klass);
+    VirtIORAMFBBaseClass *v = VIRTIO_RAMFB_BASE_CLASS(klass);
+    PCIDeviceClass *pcidev_k = PCI_DEVICE_CLASS(klass);
+
+    set_bit(DEVICE_CATEGORY_DISPLAY, dc->categories);
+    device_class_set_props(dc, virtio_ramfb_base_properties);
+    dc->vmsd = &vmstate_virtio_ramfb;
+    dc->hotpluggable = false;
+    device_class_set_parent_reset(dc, virtio_ramfb_reset,
+                                  &v->parent_reset);
+
+    k->realize = virtio_ramfb_realize;
+    pcidev_k->class_id = PCI_CLASS_DISPLAY_OTHER;
+}
+
+static TypeInfo virtio_ramfb_base_info = {
+    .name          = TYPE_VIRTIO_RAMFB_BASE,
+    .parent        = TYPE_VIRTIO_PCI,
+    .instance_size = sizeof(VirtIORAMFBBase),
+    .class_size    = sizeof(VirtIORAMFBBaseClass),
+    .class_init    = virtio_ramfb_base_class_init,
+    .abstract      = true,
+};
+
+#define TYPE_VIRTIO_RAMFB "virtio-ramfb"
+
+typedef struct VirtIORAMFB VirtIORAMFB;
+DECLARE_INSTANCE_CHECKER(VirtIORAMFB, VIRTIO_RAMFB,
+                         TYPE_VIRTIO_RAMFB)
+
+struct VirtIORAMFB {
+    VirtIORAMFBBase parent_obj;
+
+    VirtIOGPU     vdev;
+};
+
+static void virtio_ramfb_inst_initfn(Object *obj)
+{
+    VirtIORAMFB *dev = VIRTIO_RAMFB(obj);
+
+    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
+                                TYPE_VIRTIO_GPU);
+    VIRTIO_RAMFB_BASE(dev)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
+}
+
+static VirtioPCIDeviceTypeInfo virtio_ramfb_info = {
+    .generic_name  = TYPE_VIRTIO_RAMFB,
+    .parent        = TYPE_VIRTIO_RAMFB_BASE,
+    .instance_size = sizeof(VirtIORAMFB),
+    .instance_init = virtio_ramfb_inst_initfn,
+};
+
+static void virtio_ramfb_register_types(void)
+{
+    type_register_static(&virtio_ramfb_base_info);
+    virtio_pci_types_register(&virtio_ramfb_info);
+}
+
+type_init(virtio_ramfb_register_types)
diff --git a/hw/display/meson.build b/hw/display/meson.build
index 9d79e3951d..14f5fa39f4 100644
--- a/hw/display/meson.build
+++ b/hw/display/meson.build
@@ -60,6 +60,7 @@  if config_all_devices.has_key('CONFIG_VIRTIO_GPU')
   virtio_gpu_ss.add(when: ['CONFIG_VIRTIO_GPU', 'CONFIG_VIRGL'],
                     if_true: [files('virtio-gpu-3d.c'), pixman, virgl])
   virtio_gpu_ss.add(when: 'CONFIG_VHOST_USER_GPU', if_true: files('vhost-user-gpu.c'))
+  virtio_gpu_ss.add(when: 'CONFIG_FW_CFG_DMA', if_true: files('virtio-ramfb.c'))
   hw_display_modules += {'virtio-gpu': virtio_gpu_ss}
 endif