
Gives user ability to select endian format for video display - fixes Mac OS X guest color issue.

Message ID 3154DC97-3B67-4DBF-BD02-C64E4A9591E3@gmail.com
State New

Commit Message

Programmingkid Jan. 5, 2015, 9:27 p.m. UTC
This patch does the following:

- Allows user to select endian format of video display. 
This allows Mac OS X to be used as a guest and show all its colors
correctly. Just add -display-endian-big to the command line to use 
this feature. 

- Removes unneeded #ifdefs in drawRect: method. Correct method
selection is done at runtime instead of compile time. This makes one
binary work on more versions of Mac OS X rather than just the 
version that was used to compile it. It also makes the code
more efficient. Rather than setting the same variables over and over again, 
just do it once. 

- Adds runtime identification code for determining which version of the Mac
OS QEMU is running on.

Signed-off-by: John Arbuckle <programmingkidx@gmail.com>

---
 ui/cocoa.m |   49 ++++++++++++++++++++++++++++++++++++++++++-------
 1 files changed, 42 insertions(+), 7 deletions(-)

Comments

Peter Maydell Jan. 5, 2015, 10:06 p.m. UTC | #1
On 5 January 2015 at 21:27, Programmingkid <programmingkidx@gmail.com> wrote:
> This patches does the following:
>
> - Allows user to select endian format of video display.
> This allows Mac OS X to be used as a guest and show all its colors
> correctly. Just add -display-endian-big to the command line to use
> this feature.

I don't understand the purpose of this option. There are two
video display endiannesses I can think of:

(1) The guest video device endianness. We should just get this
correct according to the definition of what the hardware does
(and it's not dependent on which host UI we're using)
(2) The host graphics framebuffer endianness. We should
automatically determine this by either asking the host's
GUI about it or by just knowing it if the host is always one
way. (This isn't dependent on what guest is running, either.)

So what is this option doing? It shouldn't be needed for either
(1) or (2)...

thanks
-- PMM
Programmingkid Jan. 6, 2015, 12:22 a.m. UTC | #2
On Jan 5, 2015, at 5:06 PM, Peter Maydell wrote:

> On 5 January 2015 at 21:27, Programmingkid <programmingkidx@gmail.com> wrote:
>> This patches does the following:
>> 
>> - Allows user to select endian format of video display.
>> This allows Mac OS X to be used as a guest and show all its colors
>> correctly. Just add -display-endian-big to the command line to use
>> this feature.
> 
> I don't understand the purpose of this option. There are two
> video display endiannesses I can think of:
> 
> (1) The guest video device endianness. We should just get this
> correct according to the definition of what the hardware does
> (and it's not dependent on which host UI we're using)
> (2) The host graphics framebuffer endianness. We should
> automatically determine this by either asking the host's
> GUI about it or by just knowing it if the host is always one
> way. (This isn't dependent on what guest is running, either.)
> 
> So what is this option doing? It shouldn't be needed for either
> (1) or (2)...
> 
> thanks
> -- PMM

http://virtuallyfun.superglobalmegacorp.com/?p=3197
This is how Mac OS X looks in QEMU with a Mac OS X host. The colors are all wrong.

This patch fixes that problem so the colors look normal. Things aren't always so perfect with computers. Out of all the operating systems I have run on qemu-system-ppc, Mac OS X is the only one that has this color issue. I'm not sure about Mac OS 9 yet. This solution makes every operating system in QEMU display the correct colors.

Your option 1: QEMU does not emulate any video card that shipped with a Mac, so we can't use this option.

Option 2: I don't know what you mean by asking the host's GUI about it. I do know that having the user choose the right option does work.

Does any VGA genius know if there is some way to automatically detect the correct endianness? Or is little endian the format that was required by the standard?
Peter Maydell Jan. 6, 2015, 9:47 a.m. UTC | #3
On 6 January 2015 at 00:22, Programmingkid <programmingkidx@gmail.com> wrote:
> http://virtuallyfun.superglobalmegacorp.com/?p=3197
> This is how Mac OS X looks like in QEMU with a Mac OS X host. The colors are all wrong.

Right, so that says there is a bug somewhere. But this patch isn't
fixing a bug, it's adding a command line switch.

> Your option 1. QEMU does not emulate any video card that shipped
> with a Mac. So we can't use this option.

We *must* be emulating a video card, otherwise we would not be
displaying anything!

> Option 2. I don't know what you mean by asking the host's GUI
> about it. I do know that having the user choosing the right
> option does work.

Yes, but it's basically making the user manually toggle a
setting which we should be getting right ourselves. We
should find out what QEMU's actually not doing correctly
and fix that.

thanks
-- PMM
Peter Maydell Jan. 6, 2015, 10:04 a.m. UTC | #4
On 6 January 2015 at 09:47, Peter Maydell <peter.maydell@linaro.org> wrote:
> Yes, but it's basically making the user manually toggle a
> setting which we should be getting right ourselves. We
> should find out what QEMU's actually not doing correctly
> and fix that.

First step to find out what's happening: which of the
following combinations give the messed up colours?

 * OSX host, Linux guest   [no, at least for me]
 * OSX host, OSX guest     [yes]
 * Linux host, Linux guest [no]
 * Linux host, OSX guest   ???

Secondly, are we talking about specifically x86 OSX host,
or both x86 and PPC OSX host, or just PPC OSX host?
And is the OSX guest x86, ppc or both?

thanks
-- PMM
Programmingkid Jan. 6, 2015, 2:46 p.m. UTC | #5
On Jan 6, 2015, at 5:04 AM, Peter Maydell wrote:

> On 6 January 2015 at 09:47, Peter Maydell <peter.maydell@linaro.org> wrote:
>> Yes, but it's basically making the user manually toggle a
>> setting which we should be getting right ourselves. We
>> should find out what QEMU's actually not doing correctly
>> and fix that.
> 
> First step to find out what's happening: which of the
> following combinations give the messed up colours?
> 
> * OSX host, Linux guest   [no, at least for me]
> * OSX host, OSX guest     [yes]
> * Linux host, Linux guest [no]
> * Linux host, OSX guest   ???
Linux host, OSX guest [no].
Mac OS X is the only operating system I know of that has this problem. I am betting this is a bug with Mac OS X, not QEMU. 

> 
> Secondly, are we talking about specifically x86 OSX host,
> or both x86 and PPC OSX host, or just PPC OSX host?
> And is the OSX guest x86, ppc or both?

I am pretty sure only PowerPC and x86 OS X hosts have this problem. The OS X guest is PowerPC. I have not tried the x86 version yet. 

http://virtuallyfun.superglobalmegacorp.com/?p=267
This post and picture suggest that Mac OS X x86 as a guest does not have this problem.
Programmingkid Jan. 6, 2015, 5:19 p.m. UTC | #6
On Jan 6, 2015, at 11:46 AM, Peter Maydell wrote:

> On 6 January 2015 at 16:30, Programmingkid <programmingkidx@gmail.com> wrote:
>> I was doing some searching and thought I should show you this:
>> file: vga.c
>> 
>> This indicates that all operations are expected to be in the little endian format.
> 
> That controls the endianness to be used when writing words to
> the VGA registers, which is not the same as the endianness of
> the pixel format (which is what your previous patch was affecting).
> 
> See the comment in vga_common_init():
> 
>    /*
>     * Set default fb endian based on target, could probably be turned
>     * into a device attribute set by the machine/platform to remove
>     * all target endian dependencies from this file.
>     */
> #ifdef TARGET_WORDS_BIGENDIAN
>    s->default_endian_fb = true;
> #else
>    s->default_endian_fb = false;
> #endif
> 
> What do we set default_endian_fb to in the configuration that
> doesn't give the right colours, and if we set it to the other
> setting does it work?
> 
> -- PMM


After investigating the TARGET_WORDS_BIGENDIAN code, I noticed that s->default_endian_fb was being set to true. So I undefined the macro and then ran QEMU. The i386 target showed no change in colors. The ppc target still had the same incorrect colors. Sorry, this doesn't fix the problem.
Peter Maydell Jan. 6, 2015, 5:30 p.m. UTC | #7
On 6 January 2015 at 17:19, Programmingkid <programmingkidx@gmail.com> wrote:
> After investigating the TARGET_WORDS_BIGENDIAN code, I noticed
> that s->default_endian_fb was being set to true. So I undefined
> the macro and then ran QEMU. The i386 target showed no change
> in colors. The ppc target still had the same incorrect colors.
> Sorry, this doesn't fix the problem.

What macro? You don't want to undefine TARGET_WORDS_BIGENDIAN,
that will wreak all kinds of havoc. Just test with manually
setting default_endian_fb to false.

-- PMM
Programmingkid Jan. 6, 2015, 5:57 p.m. UTC | #8
On Jan 6, 2015, at 12:30 PM, Peter Maydell wrote:

> On 6 January 2015 at 17:19, Programmingkid <programmingkidx@gmail.com> wrote:
>> After investigating the TARGET_WORDS_BIGENDIAN code, I noticed
>> that s->default_endian_fb was being set to true. So I undefined
>> the macro and then ran QEMU. The i386 target showed no change
>> in colors. The ppc target still had the same incorrect colors.
>> Sorry, this doesn't fix the problem.
> 
> What macro? You don't want to undefine TARGET_WORDS_BIGENDIAN,
> that will wreak all kinds of havoc. Just test with manually
> setting default_endian_fb to false.
> 
> -- PMM

Just tried that. It didn't fix the color problem. Here are the tests I have done so far:

Experiment: Disable preprocessor code so that the bool byteswap = !s->big_endian_fb code is used. Line 1440. 
Result: No change in colors for Mac OS X guest. They are still incorrect. 

Experiment: Set the HOST_WORDS_BIGENDIAN macro at line 94. 
Result: Did not change the colors. They are still messed up. It made all the colors in the PC emulator look weird - like a Nintendo cartridge not fully seated correctly. But it then changed back to the normal color layout. 

Experiment: undefine TARGET_WORDS_BIGENDIAN at line 2155 in vga.c. 
Result: PC emulator shows no change in colors - good. Mac emulator shows the same incorrect colors - bad. 

Experiment: remove the ! from byteswap in vga.c:1495. 
Result: The colors are still incorrect in qemu-system-ppc with a Mac OS X guest. Colors are unaffected in the i386 target.

Experiment: Just manually set default_endian_fb to false. 
Result: Did not fix the problem. Colors still show up incorrectly in the ppc emulator. 

Sorry, but I don't think this is a bug in QEMU. I think the pixel format is supposed to be in the little endian format. IBM developed this standard for its PCs, which used little-endian x86 processors. Why would they want to spend the time and money on making their VGA cards work on non-PS/2-compatible computers? They simply had no reason to support the big endian format.
Programmingkid Jan. 6, 2015, 6:07 p.m. UTC | #9
https://opensource.apple.com/source/IOGraphics/IOGraphics-45.3/IOGraphicsFamily/IOBootFramebuffer.cpp
This file is used for the frame buffer in Mac OS 10.2. There is no mention of the endian format for the pixels. That seems to indicate an oversight on Apple's part. 

http://www.mcamafia.de/pdf/ibm_vgaxga_trm2.pdf
This file is the specification of the VGA standard. It makes no mention of pixel endianness or bit order. It's probably assumed to be little endian.
Paolo Bonzini Jan. 6, 2015, 8:29 p.m. UTC | #10
On 06/01/2015 19:07, Programmingkid wrote:
> http://www.mcamafia.de/pdf/ibm_vgaxga_trm2.pdf This file is the
> specifications to the VGA standard. It makes no mention of pixel
> endian format. There is no mention of bit order in the
> specifications. It's probably assumed to be little endian.

The VGA didn't even have modes with more than 256 colors, so there's no
endianness at all.

How are you starting the guest?  Can you paste the output of "lspci -vv"
in a Linux guest that (apart from the disk contents) has the same
hardware as your Mac OS X guest?

Paolo
Programmingkid Jan. 6, 2015, 9:33 p.m. UTC | #11
I start the guest like this:
qemu-system-ppc -hdd ~/machd.img -boot c -prom-env boot-args=-v

Hope this is what you wanted:

00:00.0 Host bridge: Motorola MPC106 [Grackle] (prog-if 01)
        Subsystem: Qumranet, Inc. Device 1100
        Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop-
ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort-
<TAbort- <MAbort- >SERR- <PERR- INTx-

00:01.0 VGA compatible controller: Technical Corp. Device 1111 (rev
02) (prog-if 00 [VGA controller])
        Subsystem: Qumranet, Inc. Device 1100
        Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop-
ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort-
<TAbort- <MAbort- >SERR- <PERR- INTx-
        Region 0: Memory at 80000000 (32-bit, prefetchable) [size=16M]
        Region 2: Memory at 81000000 (32-bit, non-prefetchable) [size=4K]
        Expansion ROM at 81010000 [disabled] [size=64K]

00:02.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8029(AS)
        Subsystem: Qumranet, Inc. Device 1100
        Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop-
ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort-
<TAbort- <MAbort- >SERR- <PERR- INTx-
        Interrupt: pin A routed to IRQ 23
        Region 0: I/O ports at 0400 [size=256]
        Expansion ROM at 81040000 [disabled] [size=256K]
        Kernel driver in use: ne2k-pci
        Kernel modules: ne2k-pci

00:03.0 Class ff00: Apple Computer Inc. Heathrow Mac I/O
        Subsystem: Qumranet, Inc. Device 1100
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop-
ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort-
<TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0
        Interrupt: pin A routed to IRQ 24
        Region 0: Memory at 81080000 (32-bit, non-prefetchable) [size=512K]
        Kernel driver in use: macio


On 1/6/15, Paolo Bonzini <pbonzini@redhat.com> wrote:
>
>
> On 06/01/2015 19:07, Programmingkid wrote:
>> http://www.mcamafia.de/pdf/ibm_vgaxga_trm2.pdf This file is the
>> specifications to the VGA standard. It makes no mention of pixel
>> endian format. There is no mention of bit order in the
>> specifications. It's probably assumed to be little endian.
>
> The VGA didn't even have modes with more than 256 colors, so there's no
> endianness at all.
>
> How are you starting the guest?  Can you paste the output of "lspci -vv"
> in a Linux guest that (apart from the disk contents) has the same
> hardware as your Mac OS X guest?
>
> Paolo
>
Programmingkid Jan. 6, 2015, 11:57 p.m. UTC | #12
Just curious: if someone installed a Cirrus VGA video card into a PowerMac with Mac OS 10.2 installed, and it had the same color issue that QEMU has, would you be convinced that this problem is an issue with Mac OS X?
Paolo Bonzini Jan. 7, 2015, 5:17 a.m. UTC | #13
On 07/01/2015 00:57, Programmingkid wrote:
> Just curious, if someone installed a cirrus vga video card into a
> PowerMac with Mac OS 10.2 installed, and it had the same color issue
> that QEMU has, would you be convinced that this problem is an issue
> with Mac OS X?

G 3 replied that he's not using Cirrus, though.

Paolo
Paolo Bonzini Jan. 7, 2015, 10:35 a.m. UTC | #14
On 06/01/2015 22:33, G 3 wrote:
> 
> 00:01.0 VGA compatible controller: Technical Corp. Device 1111 (rev
> 02) (prog-if 00 [VGA controller])
>         Subsystem: Qumranet, Inc. Device 1100
>         Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop-
> ParErr- Stepping- SERR- FastB2B- DisINTx-
>         Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort-
> <TAbort- <MAbort- >SERR- <PERR- INTx-
>         Region 0: Memory at 80000000 (32-bit, prefetchable) [size=16M]
>         Region 2: Memory at 81000000 (32-bit, non-prefetchable) [size=4K]
>         Expansion ROM at 81010000 [disabled] [size=64K]

Yes.  So this is "-vga std", not "-vga cirrus".  In this case, it might
make sense to add a property to the VGA that forces an endianness over
the other, and then to specify that property in order to use Mac OS X.

However, you should specify it whatever the host endianness and the host
OS is.  If this is not the case, you're just exchanging a bug with another.

If something

a) works with Linux host but not with Mac OS X host

b) and works with Linux guest but not with Mac OS X guest

the only logical explanation is that you have more than one bug, and
they somehow cancel each other.  The fix is to find and stomp all the
bugs, not to introduce an option for the cases that end up buggy.

Paolo
Gerd Hoffmann Jan. 7, 2015, 2:43 p.m. UTC | #15
Hi,

> However, you should specify it whatever the host endianness and the host
> OS is.  If this is not the case, you're just exchanging a bug with another.
> 
> If something
> 
> a) works with Linux host but not with Mac OS X host
> 
> b) and works with Linux guest but not with Mac OS X guest
> 
> the only logical explanation is that you have more than one bug, and
> they somehow cancel each other.

It isn't that simple I think.  Linux and MacOS X using different video
modes could also have this effect.  Also there are a number of ways
linux can drive the video card, depending on the kernel version.
kernels 3.14+ have a drm driver for the qemu stdvga, which runs the card
with 32bpp (if enabled).  On older kernels the only option is offb,
which IIRC by default runs with 8bpp modes.  Also offb has quirks to set
the palette registers on the qemu stdvga.

So, one interesting question is how MacOS X drives the video card?  Just
using what openfirmware has initialized?  Which video mode?

Turning on DEBUG_VGA in vga.c should help shed a light on what the guest
is doing and which video mode is active.

Also: what UI is in use?  cocoa?  gtk?  sdl?  Has using another ui
(assuming it is available on macosx hosts) any effect?

cheers,
  Gerd
Programmingkid Jan. 7, 2015, 4:26 p.m. UTC | #16
On Jan 7, 2015, at 9:43 AM, Gerd Hoffmann wrote:

>  Hi,
> 
>> However, you should specify it whatever the host endianness and the host
>> OS is.  If this is not the case, you're just exchanging a bug with another.
>> 
>> If something
>> 
>> a) works with Linux host but not with Mac OS X host
>> 
>> b) and works with Linux guest but not with Mac OS X guest
>> 
>> the only logical explanation is that you have more than one bug, and
>> they somehow cancel each other.
> 
> It isn't that simple I think.  Linux and MacOS X using different video
> modes could also have this effect.  Also there are a number of ways
> linux can drive the video card, depending on the kernel version.
> kernels 3.14+ have a drm driver for the qemu stdvga, which runs the card
> with 32bpp (if enabled).  On older kernels the only option is offb,
> which IIRC by default runs with 8bpp modes.  Also offb has quirks to set
> the palette registers on the qemu stdvga.
> 
> So, one interesting question is how MacOS X drives the video card?  Just
> using what openfirmware has initialized?  Which video mode?

That is an intriguing idea. What if the problem is with OpenBIOS? I wonder if it is as simple as setting a property in a node. 


> 
> Turning on DEBUG_VGA in vga.c should help shed a light on what the guest
> is doing and which video mode is active.

I turned on all debug options in vga.c. Here is the output:

VGA: write addr=0x03c0 data=0x00
VBE: write index=0x4 val=0x0
VBE: write index=0x8 val=0x0
VBE: write index=0x9 val=0x0
VBE: write index=0x1 val=0x320
VBE: write index=0x2 val=0x258
VBE: write index=0x3 val=0x20
VBE: write index=0x4 val=0x1
VGA: write addr=0x03c0 data=0x00
VGA: write addr=0x03c0 data=0x20
VGA: Using shared surface for depth=32 swap=1
VGA: write addr=0x03c8 data=0x00
VGA: write addr=0x03c9 data=0x00
VGA: write addr=0x03c9 data=0x00
VGA: write addr=0x03c9 data=0x00
VGA: write addr=0x03c8 data=0x01
VGA: write addr=0x03c9 data=0x00
VGA: write addr=0x03c9 data=0x00
VGA: write addr=0x03c9 data=0x00
VGA: write addr=0x03c8 data=0x02
VGA: write addr=0x03c9 data=0x00
VGA: write addr=0x03c9 data=0x00
VGA: write addr=0x03c9 data=0x00
VGA: write addr=0x03c8 data=0x03
VGA: write addr=0x03c9 data=0x00
VGA: write addr=0x03c9 data=0x00
VGA: write addr=0x03c9 data=0x00
VGA: write addr=0x03c8 data=0x04
VGA: write addr=0x03c9 data=0x01
VGA: write addr=0x03c9 data=0x01
VGA: write addr=0x03c9 data=0x01
VGA: write addr=0x03c8 data=0x05
VGA: write addr=0x03c9 data=0x01
VGA: write addr=0x03c9 data=0x01
VGA: write addr=0x03c9 data=0x01
VGA: write addr=0x03c8 data=0x06


> 
> Also: what UI is in use?  cocoa?  gtk?  sdl?  Has using another ui
> (assuming it is available on macosx hosts) any effect?
Cocoa is what I am using. There are currently no other UIs available on Mac OS X.
Programmingkid Jan. 7, 2015, 5:38 p.m. UTC | #17
On Jan 7, 2015, at 5:35 AM, Paolo Bonzini wrote:

> 
> 
> On 06/01/2015 22:33, G 3 wrote:
>> 
>> 00:01.0 VGA compatible controller: Technical Corp. Device 1111 (rev
>> 02) (prog-if 00 [VGA controller])
>>        Subsystem: Qumranet, Inc. Device 1100
>>        Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop-
>> ParErr- Stepping- SERR- FastB2B- DisINTx-
>>        Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort-
>> <TAbort- <MAbort- >SERR- <PERR- INTx-
>>        Region 0: Memory at 80000000 (32-bit, prefetchable) [size=16M]
>>        Region 2: Memory at 81000000 (32-bit, non-prefetchable) [size=4K]
>>        Expansion ROM at 81010000 [disabled] [size=64K]
> 
> Yes.  So this is "-vga std", not "-vga cirrus".  In this case, it might
> make sense to add a property to the VGA that forces an endianness over
> the other, and then to specify that property in order to use Mac OS X.
> 
> However, you should specify it whatever the host endianness and the host
> OS is.  If this is not the case, you're just exchanging a bug with another.
> 
> If something
> 
> a) works with Linux host but not with Mac OS X host
> 
> b) and works with Linux guest but not with Mac OS X guest
> 
> the only logical explanation is that you have more than one bug, and
> they somehow cancel each other.  The fix is to find and stomp all the
> bugs, not to introduce an option for the cases that end up buggy.

I was told there is a frame buffer byte swap in Linux; that is why the colors appear correctly. But if that happened all the time, then a Linux guest would have the color problem.

Maybe we should all make theories and guess where to look. Here are my theories so far:
- Incorrect OpenBIOS settings
- VGA isn't correctly supported in Mac OS X
- Vague standards that didn't take endianness of processor into account
- Some function of VGA not implemented or not implemented fully
- Some assumption about writing to IO ports that is true on real Macs but not implemented on QEMU

If anyone wants to add to the list, please do so.

- Incorrect OpenBIOS settings
If this is true, it wouldn't be the first time OpenBIOS issues have affected Mac OS X.

- VGA isn't correctly supported in Mac OS X:
If there is someone out there who has an early VGA PCI card and a PowerMac that runs Mac OS X, it would help a lot to let us know whether it displays colors correctly. I am pretty sure all such cards were made for PCs only, so this would probably not work. I'm surprised Apple implemented a generic VGA driver in the first place. 

- Vague standards that didn't take endianness of processor into account
Never seen any mention of endianness of pixel data in the VGA standards. It was probably just assumed little endian.

- Some function of VGA not implemented or not implemented fully
This could be a possibility, but not sure about it. 

- Some assumption about writing to IO ports that is true on real Macs but not implemented on QEMU
This is just a guess for now. It would take a lot of memory poking to prove this.
Gerd Hoffmann Jan. 8, 2015, 9:02 a.m. UTC | #18
Hi,

> VGA: Using shared surface for depth=32 swap=1

Ok, 32bpp.  byteswapping needed.

I guess the host is an Intel Macintosh then?

Having a quick look at the cocoa code it seems it doesn't look at the
color masks and shifts, only the color depth.  So having the UI handle
the byteswapping that way isn't going to fly.

Try setting force_shadow (vga.c, needs git master) to one.  That way
vga.c will byteswap and not expect the UI to do it.  Alternatively make
cocoa UI properly handle the color masks and shifts, so non-native
ordering works.

I have some patches from benh in the pipeline allowing to negotiate
supported formats for shared buffers, with that in place hard-coded
assumptions about what formats the UI code is able to handle will go
away.  Guess I should rank them up in my priority list ;)

cheers,
  Gerd
Programmingkid Jan. 8, 2015, 5:07 p.m. UTC | #19
On Jan 8, 2015, at 4:02 AM, Gerd Hoffmann wrote:

>  Hi,
> 
>> VGA: Using shared surface for depth=32 swap=1
> 
> Ok, 32bpp.  byteswapping needed.
> 
> I guess the host is a intel macintosh then?

Yes. Unfortunately I don't have a PowerPC Mac fast enough to handle QEMU. It would be interesting to find out whether this color issue occurs on PowerPC hosts. 

> 
> Having a quick look at the cocoa code it seems it doesn't look at the
> color masks and shifts, only the color depth.  So having the UI handle
> the byteswapping that way isn't going to fly.
> 
> Try setting force_shadow (vga.c, needs git master) to one.  That way
> vga.c will byteswap and not expect the UI to do it.  Alternatively make
> cocoa UI properly handle the color masks and shifts, so non-native
> ordering works.

Is this what you mean?
    s->force_shadow = 1;
    share_surface = (!s->force_shadow) &&
            ( depth == 32 || (depth == 16 && !byteswap) );

I tried it out and didn't notice any change in colors for the Mac OS X guest. 


I do have an idea. What if, in cocoa_update(DisplayChangeListener ...), we find out the format of the framebuffer? 

The DisplayChangeListener object has a QemuConsole object. The QemuConsole object has a DisplaySurface object. The DisplaySurface object has a pixman_format_code_t format variable. This format variable tells us what format the framebuffer is in. So is it possible to use it? The format types are listed in pixman.h.

If it reports the correct format like BGR, then it would be possible to automatically adjust the UI to display the correct colors. I am currently attempting this, but accessing the format variable is not as easy as I thought it would be. I think the code will look something like this: dcl->con->surface->format. 


> I have some patches from benh in the pipeline allowing to negotiate
> supported formats for shared buffers, with that in place hard-coded
> assumptions about what formats the UI code is able to handle will go
> away.  Guess I should rank them up in my priority list ;)

Sounds interesting. 

> 
> cheers,
>  Gerd

Thanks.
Gerd Hoffmann Jan. 9, 2015, 8:58 a.m. UTC | #20
On Do, 2015-01-08 at 12:07 -0500, Programmingkid wrote:
> On Jan 8, 2015, at 4:02 AM, Gerd Hoffmann wrote:
> 
> >  Hi,
> > 
> >> VGA: Using shared surface for depth=32 swap=1
> > 
> > Ok, 32bpp.  byteswapping needed.
> > 
> > I guess the host is a intel macintosh then?
> 
> Yes.

So we have a BE guest on an LE host.

>  I unfortunately don't have a fast enough PowerPC Mac to handle QEMU.
> It would be interesting to find out if this color issue is on PowerPC
> hosts. 

Indeed.

> Is this what you mean?
>  s->force_shadow = 1;
>     share_surface = (!s->force_shadow) &&
>             ( depth == 32 || (depth == 16 && !byteswap) );

Yes.

> I tried it out and didn't notice any change in colors for the Mac OS X guest. 

Hmm, strange.

Can you test
   https://www.kraxel.org/cgit/qemu/log/?h=rebase/console-wip ?

> I do have an idea. What if on cocoa_update(DisplayChangeListener ...),
> we find out the format of the framebuffer. 
> 
> The DisplayChangeListener object has a QemuConsole object. The
> QemuConsole object has a DisplaySurface object. The DisplaySurface
> object has a pixman_format_code_t format variable. This format
> variable tells us what format the framebuffer is in. So is it possible
> to use it? The format types are listed in pixman.h.

Better place is probably switchSurface, so you have to look only once
for every surface, not on every display update.

Just look at surface->format.

cheers,
  Gerd
Programmingkid Jan. 9, 2015, 3:11 p.m. UTC | #21
On Jan 9, 2015, at 3:58 AM, Gerd Hoffmann wrote:

> On Do, 2015-01-08 at 12:07 -0500, Programmingkid wrote:
>> On Jan 8, 2015, at 4:02 AM, Gerd Hoffmann wrote:
>> 
>>> Hi,
>>> 
>>>> VGA: Using shared surface for depth=32 swap=1
>>> 
>>> Ok, 32bpp.  byteswapping needed.
>>> 
>>> I guess the host is a intel macintosh then?
>> 
>> Yes.
> 
> So we have be guest @ le host.
> 
>> I unfortunately don't have a fast enough PowerPC Mac to handle QEMU.
>> It would be interesting to find out if this color issue is on PowerPC
>> hosts. 
> 
> Indeed.
> 
>> Is this what you mean?
>> s->force_shadow = 1;
>>    share_surface = (!s->force_shadow) &&
>>            ( depth == 32 || (depth == 16 && !byteswap) );
> 
> Yes.
> 
>> I tried it out and didn't notice any change in colors for the Mac OS X guest. 
> 
> Hmm, strange.

I had done a bunch of changes to the code, so the data I gave you is probably inaccurate.  Please discard it. 

> 
> Can you test
>   https://www.kraxel.org/cgit/qemu/log/?h=rebase/console-wip ?

Ok. I will put it on my to do list. 

> 
>> I do have an idea. What if on cocoa_update(DisplayChangeListener ...),
>> we find out the format of the framebuffer. 
>> 
>> The DisplayChangeListener object has a QemuConsole object. The
>> QemuConsole object has a DisplaySurface object. The DisplaySurface
>> object has a pixman_format_code_t format variable. This format
>> variable tells us what format the framebuffer is in. So is it possible
>> to use it? The format types are listed in pixman.h.
> 
> Better place is probably switchSurface, so you have to look only once
> for every surface, not on every display update.
> 
> Just look at surface->format.

Great idea. Looks like I have solved the Mac OS X guest color problem. This code in cocoa_switch() does the trick:

    /* Determine the pixel format of the frame buffer */
    if (surface->format == PIXMAN_b8g8r8x8) {
        bitmap_info = kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipFirst;
    }

I was tempted to just check whether the bit depth was 32 and set bitmap_info to the above value. Doubtful such a patch would have been accepted.

Patch

diff --git a/ui/cocoa.m b/ui/cocoa.m
index 704d199..562fa29 100644
--- a/ui/cocoa.m
+++ b/ui/cocoa.m
@@ -64,6 +64,10 @@  static int last_buttons;
 int gArgc;
 char **gArgv;
 
+/* Used in drawRect:. Starts with little endian format. */
+static int bitmap_info = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst;
+SInt32 current_mac_os_version;
+
 // keymap conversion
 int keymap[] =
 {
@@ -238,7 +242,33 @@  static int cocoa_keycode_to_qemu(int keycode)
     return keymap[keycode];
 }
 
+/* Looks for the -display-endian-big option being sent to QEMU.
+   Fixes the issue with the colors not being displayed correctly with Mac OS X as the host.
+   It is normally assumed that the frame buffer has data in the little endian format.
+   Mac OS X uses the big endian format.
+*/
+static void scanForDisplayEndianOption(int argc, char * argv[])
+{
+	// search for the -display-endian-big option
+	for(int i = 0; i < argc; i++) {
+	    if(strcmp(argv[i], "-display-endian-big") == 0) {
+            bitmap_info = kCGImageAlphaNoneSkipFirst;
+            // remove the option from the argv array
+            sprintf(argv[i], "%s", "-no-frame");  // no-frame does nothing in cocoa, so it is harmless
+            break;
+	    }
+	}
+}
 
+/* Finds out what version of the Mac OS your computer is using. */
+static void determineMacOSVersion()
+{
+    OSErr err_num = Gestalt(gestaltSystemVersion, &current_mac_os_version);
+    if(err_num != noErr) {
+        current_mac_os_version = -1;
+        fprintf(stderr, "\nWarning: Failed to determine Mac OS version of your system!\n");
+    }
+}
 
 /*
  ------------------------------------------------------
@@ -257,6 +287,7 @@  static int cocoa_keycode_to_qemu(int keycode)
     BOOL isAbsoluteEnabled;
     BOOL isMouseDeassociated;
     NSDictionary * window_mode_dict; /* keeps track of the guest' graphic settings */
+    CGColorSpaceRef color_space;  /* used in drawRect: */
 }
 - (void) switchSurface:(DisplaySurface *)surface;
 - (void) grabMouse;
@@ -299,6 +330,13 @@  QemuCocoaView *cocoaView;
         screen.width = frameRect.size.width;
         screen.height = frameRect.size.height;
         [self updateWindowModeSettings];
+
+        if (current_mac_os_version >= MAC_OS_X_VERSION_10_4)
+            color_space = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
+        else {
+            /* Using this in Mac OS 10.6 causes occasional crashes in drawRect:. */
+            color_space = CGColorSpaceCreateDeviceRGB();
+        }
     }
     return self;
 }
@@ -361,13 +399,8 @@  QemuCocoaView *cocoaView;
             screen.bitsPerComponent, //bitsPerComponent
             screen.bitsPerPixel, //bitsPerPixel
             (screen.width * (screen.bitsPerComponent/2)), //bytesPerRow
-#ifdef __LITTLE_ENDIAN__
-            CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB), //colorspace for OS X >= 10.4
-            kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst,
-#else
-            CGColorSpaceCreateDeviceRGB(), //colorspace for OS X < 10.4 (actually ppc)
-            kCGImageAlphaNoneSkipFirst, //bitmapInfo
-#endif
+            color_space,
+            bitmap_info,
             dataProviderRef, //provider
             NULL, //decode
             0, //interpolate
@@ -835,6 +868,7 @@  QemuCocoaView *cocoaView;
 
     self = [super init];
     if (self) {
+        determineMacOSVersion();
 
         // create a view and add it to the window
         cocoaView = [[QemuCocoaView alloc] initWithFrame:NSMakeRect(0.0, 0.0, 640.0, 480.0)];
@@ -918,6 +952,7 @@  QemuCocoaView *cocoaView;
     COCOA_DEBUG("QemuCocoaAppController: startEmulationWithArgc\n");
 
     int status;
+    scanForDisplayEndianOption(argc, argv);
     status = qemu_main(argc, argv, *_NSGetEnviron());
     exit(status);
 }