use correct depth from DisplaySurface in vmware_vga.c

Message ID 20090817100828.GA22029@1und1.de
State Superseded

Commit Message

Reimar Döffinger Aug. 17, 2009, 10:08 a.m. UTC
Hello,
from what I can tell, there is no way for vmware_vga to work correctly
right now. It assumes that the framebuffer's bits-per-pixel and the
DisplaySurface's are identical (it directly uses the VRAM from vga.c),
yet it always assumes 3 bytes per pixel, which is never possible with
the current version of DisplaySurface.
The attached patch fixes that by using ds_get_bits_per_pixel.
Note that this further breaks the already broken compilation if you use
#undef EMBED_STDVGA (maybe it is time to throw away all that broken
code??).
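
For illustration, the byte-rounding the driver applies to the queried
depth is the only arithmetic involved; this is a minimal standalone
sketch of it, not part of the patch itself:

#include <stdio.h>

/* Round a bit depth up to whole bytes per pixel, exactly as
 * vmware_vga.c computes s->bypp = (s->depth + 7) >> 3. */
static int bytes_per_pixel(int depth_bits)
{
    return (depth_bits + 7) >> 3;
}

int main(void)
{
    static const int depths[] = { 8, 15, 16, 24, 32 };
    for (unsigned i = 0; i < sizeof(depths) / sizeof(depths[0]); i++) {
        printf("%2d bpp -> %d byte(s) per pixel\n",
               depths[i], bytes_per_pixel(depths[i]));
    }
    return 0;
}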

Comments

andrzej zaborowski Aug. 23, 2009, 5:25 p.m. UTC | #1
2009/8/17 Reimar Döffinger <Reimar.Doeffinger@gmx.de>:
> Hello,
> from what I can tell, there is no way for vmware_vga to work correctly
> right now. It assumes that the framebuffer's bits-per-pixel and the
> DisplaySurface's are identical (it directly uses the VRAM from vga.c),
> yet it always assumes 3 bytes per pixel, which is never possible with
> the current version of DisplaySurface.
> The attached patch fixes that by using ds_get_bits_per_pixel.

It was discussed at some point earlier that at the time this code runs,
SDL is not initialised and the depth returned is an arbitrary value
from the default allocator.  What vmware_vga really should do is ask SDL
for the host's depth and set the surface's pixelformat to that.
Unfortunately the ability to know the host's pixel depth was dropped
during the video API conversion and AFAIK hasn't been added back since.
The equally arbitrary value of 24 bits was stuffed there for some
reason, but it seems to work for me.
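
(A compressed sketch of the ordering being described; the two call
names exist in the QEMU tree of this period, the parameter names are
placeholders:)

vmsvga_reset(s);          /* samples ds_get_bits_per_pixel() from the
                           * default allocator's surface ...          */
sdl_display_init(ds, full_screen, no_frame);
                          /* ... host depth only knowable from here on */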

Cheers
Reimar Döffinger Aug. 24, 2009, 1:22 p.m. UTC | #2
On Sun, Aug 23, 2009 at 07:25:32PM +0200, andrzej zaborowski wrote:
> 2009/8/17 Reimar Döffinger <Reimar.Doeffinger@gmx.de>:
> > Hello,
> > from what I can tell, there is no way for vmware_vga to work correctly
> > right now. It assumes that the framebuffer's bits-per-pixel and the
> > DisplaySurface's are identical (it directly uses the VRAM from vga.c),
> > yet it always assumes 3 bytes per pixel, which is never possible with
> > the current version of DisplaySurface.
> > The attached patch fixes that by using ds_get_bits_per_pixel.
> 
> It was discussed at some point earlier that at the time this code runs,
> SDL is not initialised and the depth returned is an arbitrary value
> from the default allocator.

It is not arbitrary. It matches exactly the DisplaySurface's
linesize and data buffer size.
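
(The consistency being claimed can be written as an invariant; a
sketch using the console.h accessors of this period, assert.h assumed:)

/* whatever depth the default allocator picked, the surface it
 * allocated agrees with it: */
assert(ds_get_linesize(ds) >=
       ds_get_width(ds) * ((ds_get_bits_per_pixel(ds) + 7) >> 3));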
So I claim that my patch is correct; it may uncover some
bugs that were very carefully swept under the rug, but that only
makes it incomplete.
Also, a reference to either the previous discussion or at least a proper
"bug report" about what/how/where it breaks with my patch applied would
be very helpful, e.g. which operating system/hardware driver/SDL version
(I guess there is some reason why I get a different bit depth).
On that note, I want to add that the revert commit message of "was
incorrect." doesn't qualify as useful to me.

> What vmware_vga really should do is ask SDL
> for the host's depth and set the surface's pixelformat to that.

Obvious question: why shouldn't SDL ask the VGA for its depth and try
to use a surface with that format? That has the advantage that the
depth of the emulated hardware stays the same, whereas with your
suggestion, if I tried a loadvm from a savevm of your machine, qemu
would get into a bit of trouble.
(Looking at sdl_setdata/sdl_update, some conversion should be done
anyway, though that requires the values in the DisplaySurface and in
the vmware_vga depth variable to match, which at least at some points
in time they currently don't.)
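
(For reference, that path boils down to roughly the following, using
the SDL 1.2 API; guest_screen/real_screen follow the naming in sdl.c,
and the masks stand in for values derived from the surface's
pixelformat:)

guest_screen = SDL_CreateRGBSurfaceFrom(ds_get_data(ds),
                                        ds_get_width(ds), ds_get_height(ds),
                                        ds_get_bits_per_pixel(ds),
                                        ds_get_linesize(ds),
                                        rmask, gmask, bmask, 0);
/* the blit converts between guest and host pixel formats: */
SDL_BlitSurface(guest_screen, NULL, real_screen, NULL);
SDL_UpdateRect(real_screen, x, y, w, h);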

> Unfortunately the ability to know host's pixel depth was dropped
> during video API conversion and afaik hasn't been added till now.

Considering the above, I think that might count as an "accidentally
good" decision.

Greetings,
Reimar Döffinger
andrzej zaborowski Aug. 24, 2009, 11:45 p.m. UTC | #3
2009/8/24 Reimar Döffinger <Reimar.Doeffinger@gmx.de>:
> On Sun, Aug 23, 2009 at 07:25:32PM +0200, andrzej zaborowski wrote:
>> It was discussed at some point earlier that at the time this code runs,
>> SDL is not initialised and the depth returned is an arbitrary value
>> from the default allocator.
>
> It is not arbitrary. It matches exactly the DisplaySurface's
> linesize and data buffer size.

Only at the moment the function is called.  The value is still
hardcoded, just elsewhere.  Once the display backend initialises, this
value may be invalid.

> On that note, I want to add that the revert commit message of "was
> incorrect." doesn't qualify as useful to me.

I wasn't intending to push this commit; instead I responded to the
thread, but later noticed I had pushed it.

>
>> What vmware_vga really should do is ask SDL
>> for the host's depth and set the surface's pixelformat to that.
>
> Obvious question: why shouldn't SDL ask the VGA for its depth and try
> to use a surface with that format?

It should; the VGA should create the surface using
qemu_create_displaysurface... like in vga.c.  But this depth is not
set by the guest; it should match the host's depth, because that is
how the vmware "specification" (if you can call it that) defines it.

Besides that, it's an obvious performance gain.  The API change did not
magically remove the pixel-by-pixel conversion of the colour space; it
just hid it in SDL, under more indirection.
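
(To make that concrete, a hypothetical scanline conversion of the kind
the blit hides, here 24 bpp guest to 32 bpp host:)

#include <stdint.h>

static void convert_line_24_to_32(const uint8_t *src, uint32_t *dst, int width)
{
    for (int x = 0; x < width; x++, src += 3) {
        /* pack three guest bytes into one 32-bit host pixel
         * (channel order illustrative) */
        dst[x] = src[0] | (src[1] << 8) | ((uint32_t)src[2] << 16);
    }
}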

> That has the advantage that the
> depth of the emulated hardware stays the same, whereas with your
> suggestion, if I tried a loadvm from a savevm of your machine, qemu
> would get into a bit of trouble.

I'm not sure I understand this sentence, but apparently there's some
way vmware can communicate to the guest that the bit depth has
changed.  This is not implemented in vmware_vga.c yet.

Similarly, when the window is resized, instead of zooming we could
communicate the resolution change to the guest, but that's not
implemented yet.

Cheers

Patch

diff --git a/hw/vmware_vga.c b/hw/vmware_vga.c
index 5ceebf1..23d5fc8 100644
--- a/hw/vmware_vga.c
+++ b/hw/vmware_vga.c
@@ -923,7 +927,7 @@ static void vmsvga_reset(struct vmsvga_state_s *s)
     s->width = -1;
     s->height = -1;
     s->svgaid = SVGA_ID;
-    s->depth = 24;
+    s->depth = ds_get_bits_per_pixel(s->vga.ds);
     s->bypp = (s->depth + 7) >> 3;
     s->cursor.on = 0;
     s->redraw_fifo_first = 0;
@@ -1126,8 +1130,6 @@ static void vmsvga_init(struct vmsvga_state_s *s, int vga_ram_size)
     s->scratch_size = SVGA_SCRATCH_SIZE;
     s->scratch = (uint32_t *) qemu_malloc(s->scratch_size * 4);
 
-    vmsvga_reset(s);
-
 #ifdef EMBED_STDVGA
     vga_common_init((VGAState *) s, vga_ram_size);
     vga_init((VGAState *) s);
@@ -1142,6 +1144,8 @@ static void vmsvga_init(struct vmsvga_state_s *s, int vga_ram_size)
                                      vmsvga_screen_dump,
                                      vmsvga_text_update, &s->vga);
 
+    vmsvga_reset(s);
+
 #ifdef CONFIG_BOCHS_VBE
     /* XXX: use optimized standard vga accesses */
     cpu_register_physical_memory(VBE_DISPI_LFB_PHYSICAL_ADDRESS,
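
The second and third hunks only reorder initialisation:
ds_get_bits_per_pixel() needs a valid DisplaySurface, so vmsvga_reset()
has to run after graphic_console_init() has set up s->vga.ds. The
resulting order in vmsvga_init(), abridged (the assignment target is
inferred from the first hunk's use of s->vga.ds):

vga_common_init((VGAState *) s, vga_ram_size);
vga_init((VGAState *) s);
s->vga.ds = graphic_console_init(vmsvga_update_display,
                                 vmsvga_invalidate_display,
                                 vmsvga_screen_dump,
                                 vmsvga_text_update, &s->vga);
vmsvga_reset(s);    /* ds_get_bits_per_pixel(s->vga.ds) is now valid */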