Message ID | 6BA6355D-D77A-40F4-A8C4-61901A926E71@suse.de
---|---
State | New
On 2011-09-13 10:40, Alexander Graf wrote:
> Btw, it still tries to execute invalid code even with your patch. #if 0'ing out the memory region updates at least gets the guest booting for me. Btw, to get it working you also need a patch for the interrupt controller (another breakage thanks to memory api).
>
> diff --git a/hw/heathrow_pic.c b/hw/heathrow_pic.c
> index 51996ab..16f48d1 100644
> --- a/hw/heathrow_pic.c
> +++ b/hw/heathrow_pic.c
> @@ -126,7 +126,7 @@ static uint64_t pic_read(void *opaque, target_phys_addr_t addr,
>  static const MemoryRegionOps heathrow_pic_ops = {
>      .read = pic_read,
>      .write = pic_write,
> -    .endianness = DEVICE_NATIVE_ENDIAN,
> +    .endianness = DEVICE_LITTLE_ENDIAN,
>  };
>
>  static void heathrow_pic_set_irq(void *opaque, int num, int level)

With or without this fix, with or without the active chain-4 optimization, I just get an empty yellow screen when firing up qemu-system-ppc (also when using the Debian ISO). Do I need to specify a specific machine type?

Jan
On 13.09.2011, at 11:00, Jan Kiszka wrote:
> On 2011-09-13 10:40, Alexander Graf wrote:
>> Btw, it still tries to execute invalid code even with your patch. #if 0'ing out the memory region updates at least gets the guest booting for me. Btw, to get it working you also need a patch for the interrupt controller (another breakage thanks to memory api).
>>
>> diff --git a/hw/heathrow_pic.c b/hw/heathrow_pic.c
>> index 51996ab..16f48d1 100644
>> --- a/hw/heathrow_pic.c
>> +++ b/hw/heathrow_pic.c
>> @@ -126,7 +126,7 @@ static uint64_t pic_read(void *opaque, target_phys_addr_t addr,
>>  static const MemoryRegionOps heathrow_pic_ops = {
>>      .read = pic_read,
>>      .write = pic_write,
>> -    .endianness = DEVICE_NATIVE_ENDIAN,
>> +    .endianness = DEVICE_LITTLE_ENDIAN,
>>  };
>>
>>  static void heathrow_pic_set_irq(void *opaque, int num, int level)
>
> With or without this fix, with or without the active chain-4 optimization, I just get an empty yellow screen when firing up qemu-system-ppc (also when using the Debian ISO). Do I need to specify a specific machine type?

Ugh. No, you only need this patch:

[PATCH] PPC: Fix via-cuda memory registration

which fixes another recently introduced regression :)

Alex
On 13.09.2011 at 11:00, Jan Kiszka wrote:
> On 2011-09-13 10:40, Alexander Graf wrote:
>> Btw, it still tries to execute invalid code even with your patch. #if 0'ing out the memory region updates at least gets the guest booting for me. Btw, to get it working you also need a patch for the interrupt controller (another breakage thanks to memory api).
>>
>> diff --git a/hw/heathrow_pic.c b/hw/heathrow_pic.c
>> index 51996ab..16f48d1 100644
>> --- a/hw/heathrow_pic.c
>> +++ b/hw/heathrow_pic.c
>> @@ -126,7 +126,7 @@ static uint64_t pic_read(void *opaque, target_phys_addr_t addr,
>>  static const MemoryRegionOps heathrow_pic_ops = {
>>      .read = pic_read,
>>      .write = pic_write,
>> -    .endianness = DEVICE_NATIVE_ENDIAN,
>> +    .endianness = DEVICE_LITTLE_ENDIAN,
>>  };
>>
>>  static void heathrow_pic_set_irq(void *opaque, int num, int level)
>
> With or without this fix, with or without the active chain-4 optimization, I just get an empty yellow screen when firing up qemu-system-ppc (also when using the Debian ISO). Do I need to specify a specific machine type?

No. Did you try with Alex' via-cuda patch? That's the only one I have on my branch for Linux host.

Andreas
On 2011-09-13 11:42, Alexander Graf wrote:
> On 13.09.2011, at 11:00, Jan Kiszka wrote:
>> On 2011-09-13 10:40, Alexander Graf wrote:
>>> Btw, it still tries to execute invalid code even with your patch. #if 0'ing out the memory region updates at least gets the guest booting for me. Btw, to get it working you also need a patch for the interrupt controller (another breakage thanks to memory api).
>>>
>>> diff --git a/hw/heathrow_pic.c b/hw/heathrow_pic.c
>>> index 51996ab..16f48d1 100644
>>> --- a/hw/heathrow_pic.c
>>> +++ b/hw/heathrow_pic.c
>>> @@ -126,7 +126,7 @@ static uint64_t pic_read(void *opaque, target_phys_addr_t addr,
>>>  static const MemoryRegionOps heathrow_pic_ops = {
>>>      .read = pic_read,
>>>      .write = pic_write,
>>> -    .endianness = DEVICE_NATIVE_ENDIAN,
>>> +    .endianness = DEVICE_LITTLE_ENDIAN,
>>>  };
>>>
>>>  static void heathrow_pic_set_irq(void *opaque, int num, int level)
>>
>> With or without this fix, with or without the active chain-4 optimization, I just get an empty yellow screen when firing up qemu-system-ppc (also when using the Debian ISO). Do I need to specify a specific machine type?
>
> Ugh. No, you only need this patch:
>
> [PATCH] PPC: Fix via-cuda memory registration
>
> which fixes another recently introduced regression :)

That works now - and allowed me to identify the bug after enhancing info mtree a bit:

(qemu) info mtree
memory
 addr 00000000 prio 0 size 7fffffffffffffff system
  addr 80880000 prio 1 size 80000 macio
   addr 808e0000 prio 0 size 20000 macio-nvram
   addr 808a0000 prio 0 size 1000 pmac-ide
   addr 80896000 prio 0 size 2000 cuda
   addr 80893000 prio 0 size 40 escc-bar
   addr 80888000 prio 0 size 1000 dbdma
   addr 80880000 prio 0 size 1000 heathrow-pic
  addr 80000000 prio 1 size 800000 vga.vram
  addr 800a0000 prio 1 size 20000 vga-lowmem
...

Here is the problem: Both the vram and the ISA range get mapped into system address space, but the former eclipses the latter, as it shows up earlier in the list and has the same priority. This picture changes with the chain-4 alias, which has prio 2 and thus maps over the vram.

It looks to me like the ISA address space is either misplaced at 0x80000000 or is not supposed to be mapped at all on PPC. Comments?

Jan
On Tue, Sep 13, 2011 at 11:34 AM, Jan Kiszka <jan.kiszka@siemens.com> wrote:
> On 2011-09-13 11:42, Alexander Graf wrote:
>> On 13.09.2011, at 11:00, Jan Kiszka wrote:
>>> On 2011-09-13 10:40, Alexander Graf wrote:
>>>> Btw, it still tries to execute invalid code even with your patch. #if 0'ing out the memory region updates at least gets the guest booting for me. Btw, to get it working you also need a patch for the interrupt controller (another breakage thanks to memory api).
>>>>
>>>> diff --git a/hw/heathrow_pic.c b/hw/heathrow_pic.c
>>>> index 51996ab..16f48d1 100644
>>>> --- a/hw/heathrow_pic.c
>>>> +++ b/hw/heathrow_pic.c
>>>> @@ -126,7 +126,7 @@ static uint64_t pic_read(void *opaque, target_phys_addr_t addr,
>>>>  static const MemoryRegionOps heathrow_pic_ops = {
>>>>      .read = pic_read,
>>>>      .write = pic_write,
>>>> -    .endianness = DEVICE_NATIVE_ENDIAN,
>>>> +    .endianness = DEVICE_LITTLE_ENDIAN,
>>>>  };
>>>>
>>>>  static void heathrow_pic_set_irq(void *opaque, int num, int level)
>>>
>>> With or without this fix, with or without the active chain-4 optimization, I just get an empty yellow screen when firing up qemu-system-ppc (also when using the Debian ISO). Do I need to specify a specific machine type?
>>
>> Ugh. No, you only need this patch:
>>
>> [PATCH] PPC: Fix via-cuda memory registration
>>
>> which fixes another recently introduced regression :)
>
> That works now - and allowed me to identify the bug after enhancing info
> mtree a bit:
>
> (qemu) info mtree
> memory
>  addr 00000000 prio 0 size 7fffffffffffffff system
>   addr 80880000 prio 1 size 80000 macio
>    addr 808e0000 prio 0 size 20000 macio-nvram
>    addr 808a0000 prio 0 size 1000 pmac-ide
>    addr 80896000 prio 0 size 2000 cuda
>    addr 80893000 prio 0 size 40 escc-bar
>    addr 80888000 prio 0 size 1000 dbdma
>    addr 80880000 prio 0 size 1000 heathrow-pic
>   addr 80000000 prio 1 size 800000 vga.vram
>   addr 800a0000 prio 1 size 20000 vga-lowmem
> ...
>
> Here is the problem: Both the vram and the ISA range get mapped into
> system address space, but the former eclipses the latter, as it shows up
> earlier in the list and has the same priority. This picture changes with
> the chain-4 alias, which has prio 2 and thus maps over the vram.
>
> It looks to me like the ISA address space is either misplaced at
> 0x80000000 or is not supposed to be mapped at all on PPC. Comments?

Since there is no PCI-ISA bridge, ISA address space shouldn't exist.
On 09/13/2011 10:39 PM, Blue Swirl wrote: > > > > Here is the problem: Both the vram and the ISA range get mapped into > > system address space, but the former eclipses the latter as it shows up > > earlier in the list and has the same priority. This picture changes with > > the chain-4 alias which has prio 2, thus maps over the vram. > > > > It looks to me like the ISA address space is either misplaced at > > 0x80000000 or is not supposed to be mapped at all on PPC. Comments? > > Since there is no PCI-ISA bridge, ISA address space shouldn't exist. Where does the vga device sit then?
On 14.09.2011, at 09:11, Avi Kivity wrote: > On 09/13/2011 10:39 PM, Blue Swirl wrote: >> > >> > Here is the problem: Both the vram and the ISA range get mapped into >> > system address space, but the former eclipses the latter as it shows up >> > earlier in the list and has the same priority. This picture changes with >> > the chain-4 alias which has prio 2, thus maps over the vram. >> > >> > It looks to me like the ISA address space is either misplaced at >> > 0x80000000 or is not supposed to be mapped at all on PPC. Comments? >> >> Since there is no PCI-ISA bridge, ISA address space shouldn't exist. > > Where does the vga device sit then? On the PCI bus? :) Alex
On 2011-09-14 09:42, Alexander Graf wrote: > > On 14.09.2011, at 09:11, Avi Kivity wrote: > >> On 09/13/2011 10:39 PM, Blue Swirl wrote: >>>> >>>> Here is the problem: Both the vram and the ISA range get mapped into >>>> system address space, but the former eclipses the latter as it shows up >>>> earlier in the list and has the same priority. This picture changes with >>>> the chain-4 alias which has prio 2, thus maps over the vram. >>>> >>>> It looks to me like the ISA address space is either misplaced at >>>> 0x80000000 or is not supposed to be mapped at all on PPC. Comments? >>> >>> Since there is no PCI-ISA bridge, ISA address space shouldn't exist. >> >> Where does the vga device sit then? > > On the PCI bus? :) Then make sure that the container for ISA resources is a dummy region - or even NULL so that VGA will know that it's supposed to skip ISA registrations. Jan
On 09/14/2011 10:42 AM, Alexander Graf wrote: > On 14.09.2011, at 09:11, Avi Kivity wrote: > > > On 09/13/2011 10:39 PM, Blue Swirl wrote: > >> > > >> > Here is the problem: Both the vram and the ISA range get mapped into > >> > system address space, but the former eclipses the latter as it shows up > >> > earlier in the list and has the same priority. This picture changes with > >> > the chain-4 alias which has prio 2, thus maps over the vram. > >> > > >> > It looks to me like the ISA address space is either misplaced at > >> > 0x80000000 or is not supposed to be mapped at all on PPC. Comments? > >> > >> Since there is no PCI-ISA bridge, ISA address space shouldn't exist. > > > > Where does the vga device sit then? > > On the PCI bus? :) > I thought it was std vga, which is an ISA device. Anyway PCI supports the vga region at 0xa0000-0xc0000. Where is it supposed to be mapped?
On 2011-09-14 10:17, Avi Kivity wrote: > On 09/14/2011 10:42 AM, Alexander Graf wrote: >> On 14.09.2011, at 09:11, Avi Kivity wrote: >> >>> On 09/13/2011 10:39 PM, Blue Swirl wrote: >>>> > >>>> > Here is the problem: Both the vram and the ISA range get mapped into >>>> > system address space, but the former eclipses the latter as it shows up >>>> > earlier in the list and has the same priority. This picture changes with >>>> > the chain-4 alias which has prio 2, thus maps over the vram. >>>> > >>>> > It looks to me like the ISA address space is either misplaced at >>>> > 0x80000000 or is not supposed to be mapped at all on PPC. Comments? >>>> >>>> Since there is no PCI-ISA bridge, ISA address space shouldn't exist. >>> >>> Where does the vga device sit then? >> >> On the PCI bus? :) >> > > I thought it was std vga, which is an ISA device. There are both types (ISA-only and PCI). > > Anyway PCI supports the vga region at 0xa0000-0xc0000. Where is it > supposed to be mapped? ...but not all PCI bridges make use of this feature / forward legacy requests. Jan
On 09/14/2011 11:20 AM, Jan Kiszka wrote: > > > > Anyway PCI supports the vga region at 0xa0000-0xc0000. Where is it > > supposed to be mapped? > > ...but not all PCI bridges make use of this feature / forward legacy > requests. > Then this should be fixed in the bridge?
On 2011-09-14 10:22, Avi Kivity wrote: > On 09/14/2011 11:20 AM, Jan Kiszka wrote: >>> >>> Anyway PCI supports the vga region at 0xa0000-0xc0000. Where is it >>> supposed to be mapped? >> >> ...but not all PCI bridges make use of this feature / forward legacy >> requests. >> > > Then this should be fixed in the bridge? Yes, it's a PPC bug. Jan
On 14.09.2011, at 10:24, Jan Kiszka wrote: > On 2011-09-14 10:22, Avi Kivity wrote: >> On 09/14/2011 11:20 AM, Jan Kiszka wrote: >>>> >>>> Anyway PCI supports the vga region at 0xa0000-0xc0000. Where is it >>>> supposed to be mapped? >>> >>> ...but not all PCI bridges make use of this feature / forward legacy >>> requests. >>> >> >> Then this should be fixed in the bridge? > > Yes, it's a PPC bug. So how does the bridge not forward it then? Alex
On 2011-09-14 10:27, Alexander Graf wrote: > > On 14.09.2011, at 10:24, Jan Kiszka wrote: > >> On 2011-09-14 10:22, Avi Kivity wrote: >>> On 09/14/2011 11:20 AM, Jan Kiszka wrote: >>>>> >>>>> Anyway PCI supports the vga region at 0xa0000-0xc0000. Where is it >>>>> supposed to be mapped? >>>> >>>> ...but not all PCI bridges make use of this feature / forward legacy >>>> requests. >>>> >>> >>> Then this should be fixed in the bridge? >> >> Yes, it's a PPC bug. > > So how does the bridge not forward it then? On real HW, by keeping the "VGA Enable" bit off. Or just not issuing requests to the a0000..bffff range. Under QEMU, I would simply provide the VGA model a memory region for legacy stuff that remains unregistered. Jan
On 09/14/2011 11:27 AM, Alexander Graf wrote: > On 14.09.2011, at 10:24, Jan Kiszka wrote: > > > On 2011-09-14 10:22, Avi Kivity wrote: > >> On 09/14/2011 11:20 AM, Jan Kiszka wrote: > >>>> > >>>> Anyway PCI supports the vga region at 0xa0000-0xc0000. Where is it > >>>> supposed to be mapped? > >>> > >>> ...but not all PCI bridges make use of this feature / forward legacy > >>> requests. > >>> > >> > >> Then this should be fixed in the bridge? > > > > Yes, it's a PPC bug. > > So how does the bridge not forward it then? > I expect that currently vga adds the region to pci_address_space(). We need to create a pci_address_space_vga() function that returns a region for vga to use. Then add or remove the region to pci_address_space(), within the bridge code, depending on whether the bridge forwards vga accesses or not. (assuming I understood the problem correctly - not sure)
On Wed, Sep 14, 2011 at 8:35 AM, Avi Kivity <avi@redhat.com> wrote: > On 09/14/2011 11:27 AM, Alexander Graf wrote: >> >> On 14.09.2011, at 10:24, Jan Kiszka wrote: >> >> > On 2011-09-14 10:22, Avi Kivity wrote: >> >> On 09/14/2011 11:20 AM, Jan Kiszka wrote: >> >>>> >> >>>> Anyway PCI supports the vga region at 0xa0000-0xc0000. Where is it >> >>>> supposed to be mapped? >> >>> >> >>> ...but not all PCI bridges make use of this feature / forward legacy >> >>> requests. >> >>> >> >> >> >> Then this should be fixed in the bridge? >> > >> > Yes, it's a PPC bug. >> >> So how does the bridge not forward it then? >> > > I expect that currently vga adds the region to pci_address_space(). We need > to create a pci_address_space_vga() function that returns a region for vga > to use. Then add or remove the region to pci_address_space(), within the > bridge code, depending on whether the bridge forwards vga accesses or not. Similar treatment should be also needed for VGA IO ports 0x3b0 etc. > (assuming I understood the problem correctly - not sure) I think you did.
On 14.09.2011, at 22:06, Blue Swirl wrote: > On Wed, Sep 14, 2011 at 8:35 AM, Avi Kivity <avi@redhat.com> wrote: >> On 09/14/2011 11:27 AM, Alexander Graf wrote: >>> >>> On 14.09.2011, at 10:24, Jan Kiszka wrote: >>> >>>> On 2011-09-14 10:22, Avi Kivity wrote: >>>>> On 09/14/2011 11:20 AM, Jan Kiszka wrote: >>>>>>> >>>>>>> Anyway PCI supports the vga region at 0xa0000-0xc0000. Where is it >>>>>>> supposed to be mapped? >>>>>> >>>>>> ...but not all PCI bridges make use of this feature / forward legacy >>>>>> requests. >>>>>> >>>>> >>>>> Then this should be fixed in the bridge? >>>> >>>> Yes, it's a PPC bug. >>> >>> So how does the bridge not forward it then? >>> >> >> I expect that currently vga adds the region to pci_address_space(). We need >> to create a pci_address_space_vga() function that returns a region for vga >> to use. Then add or remove the region to pci_address_space(), within the >> bridge code, depending on whether the bridge forwards vga accesses or not. > > Similar treatment should be also needed for VGA IO ports 0x3b0 etc. > >> (assuming I understood the problem correctly - not sure) > > I think you did. Well I don't completely, so would anybody who feels reasonably savvy in messing with the new memory api like to step up and implement this? :) Alex
On 09/14/2011 11:06 PM, Blue Swirl wrote: > On Wed, Sep 14, 2011 at 8:35 AM, Avi Kivity<avi@redhat.com> wrote: > > On 09/14/2011 11:27 AM, Alexander Graf wrote: > >> > >> On 14.09.2011, at 10:24, Jan Kiszka wrote: > >> > >> > On 2011-09-14 10:22, Avi Kivity wrote: > >> >> On 09/14/2011 11:20 AM, Jan Kiszka wrote: > >> >>>> > >> >>>> Anyway PCI supports the vga region at 0xa0000-0xc0000. Where is it > >> >>>> supposed to be mapped? > >> >>> > >> >>> ...but not all PCI bridges make use of this feature / forward legacy > >> >>> requests. > >> >>> > >> >> > >> >> Then this should be fixed in the bridge? > >> > > >> > Yes, it's a PPC bug. > >> > >> So how does the bridge not forward it then? > >> > > > > I expect that currently vga adds the region to pci_address_space(). We need > > to create a pci_address_space_vga() function that returns a region for vga > > to use. Then add or remove the region to pci_address_space(), within the > > bridge code, depending on whether the bridge forwards vga accesses or not. > > Similar treatment should be also needed for VGA IO ports 0x3b0 etc. > > > (assuming I understood the problem correctly - not sure) > > I think you did. Maybe, but the solution can't be right. The bridge can't distinguish between a BAR mapped at 0xa0000 and the vga device claiming accesses to 0xa0000. Is this what is happening? The current pci bridge implementation (440fx) uses an alias to instantiate pci 0xa0000-0xc0000 at the same address in the host address space. If you disable it, those addresses map back to RAM - but there is no distinction between a BAR at that address and a VGA card at that address.
On 09/14/2011 11:14 PM, Alexander Graf wrote: > >> (assuming I understood the problem correctly - not sure) > > > > I think you did. > > Well I don't completely, so would anybody who feels reasonably savvy in messing with the new memory api like to step up and implement this? :) > > Can you explain what the memory map looks like from the hardware point of view?
On Wed, Sep 14, 2011 at 8:15 PM, Avi Kivity <avi@redhat.com> wrote: > On 09/14/2011 11:06 PM, Blue Swirl wrote: >> >> On Wed, Sep 14, 2011 at 8:35 AM, Avi Kivity<avi@redhat.com> wrote: >> > On 09/14/2011 11:27 AM, Alexander Graf wrote: >> >> >> >> On 14.09.2011, at 10:24, Jan Kiszka wrote: >> >> >> >> > On 2011-09-14 10:22, Avi Kivity wrote: >> >> >> On 09/14/2011 11:20 AM, Jan Kiszka wrote: >> >> >>>> >> >> >>>> Anyway PCI supports the vga region at 0xa0000-0xc0000. Where >> >> is it >> >> >>>> supposed to be mapped? >> >> >>> >> >> >>> ...but not all PCI bridges make use of this feature / forward >> >> legacy >> >> >>> requests. >> >> >>> >> >> >> >> >> >> Then this should be fixed in the bridge? >> >> > >> >> > Yes, it's a PPC bug. >> >> >> >> So how does the bridge not forward it then? >> >> >> > >> > I expect that currently vga adds the region to pci_address_space(). We >> > need >> > to create a pci_address_space_vga() function that returns a region for >> > vga >> > to use. Then add or remove the region to pci_address_space(), within >> > the >> > bridge code, depending on whether the bridge forwards vga accesses or >> > not. >> >> Similar treatment should be also needed for VGA IO ports 0x3b0 etc. >> >> > (assuming I understood the problem correctly - not sure) >> >> I think you did. > > Maybe, but the solution can't be right. The bridge can't distinguish > between a BAR mapped at 0xa0000 and the vga device claiming accesses to > 0xa0000. Is this what is happening? If VGA enable is set, the bridge will forward the accesses to VGA memory or ports to secondary interface. It doesn't care which device uses them. This is described in VGA support chapter in the PCI bridge spec. > The current pci bridge implementation (440fx) uses an alias to instantiate > pci 0xa0000-0xc0000 at the same address in the host address space. 
> If you disable it, those addresses map back to RAM - but there is no
> distinction between a BAR at that address and a VGA card at that address.
On 14.09.2011 at 22:16, Avi Kivity <avi@redhat.com> wrote: > On 09/14/2011 11:14 PM, Alexander Graf wrote: >> >> (assuming I understood the problem correctly - not sure) >> > >> > I think you did. >> >> Well I don't completely, so would anybody who feels reasonably savvy in messing with the new memory api like to step up and implement this? :) >> >> > > Can you explain what the memory map looks like from the hardware point of view? If you can tell me where to find out :). I seriously have zero experience in VGA mapping - and it sounds as if Blue has a pretty good idea what's going on. Alex
On 09/14/2011 01:35 PM, Alexander Graf wrote: >> Can you explain what the memory map looks like from the hardware point of view? > > If you can tell me where to find out :). I seriously have zero experience in VGA mapping - and it sounds as if Blue has a pretty good idea what's going on. He's not interested in the VGA bits, but in the PPC board bits. How are addresses forwarded from the main system bus to the PCI host bridge, for instance? r~
On 14.09.2011 at 22:42, Richard Henderson wrote:
> On 09/14/2011 01:35 PM, Alexander Graf wrote:
>>> Can you explain what the memory map looks like from the hardware point of view?
>>
>> If you can tell me where to find out :). I seriously have zero experience in VGA mapping - and it sounds as if Blue has a pretty good idea what's going on.
>
> He's not interested in the VGA bits, but in the PPC board bits.
> How are addresses forwarded from the main system bus to the
> PCI host bridge, for instance?

Does this help? http://tibit.org/ppc/imac-333.html

The historical dev trees at http://penguinppc.org/historical/dev-trees-html/trees-index.html seem to have disappeared; this iMac seems closest: the ATI VGA card sits directly on /pci. Later models of the PowerMac G3 had AGP graphics, and the final PowerMac G5 models had PCIe graphics. Note that ATI graphics cards were sold as a special Mac Edition, so I wouldn't completely rule out deviations from PC standards...

Andreas
On 14.09.2011, at 22:42, Richard Henderson wrote:
> On 09/14/2011 01:35 PM, Alexander Graf wrote:
>>> Can you explain what the memory map looks like from the hardware point of view?
>>
>> If you can tell me where to find out :). I seriously have zero experience in VGA mapping - and it sounds as if Blue has a pretty good idea what's going on.
>
> He's not interested in the VGA bits, but in the PPC board bits.
> How are addresses forwarded from the main system bus to the
> PCI host bridge, for instance?

Yeah, and what I'm trying to tell you is that I know about as much about the g3 beige as you do. However, Ben might know a bit more here. Let's ask him :).

Alex
On Wed, 2011-09-14 at 23:41 +0200, Alexander Graf wrote:
> On 14.09.2011, at 22:42, Richard Henderson wrote:
>> On 09/14/2011 01:35 PM, Alexander Graf wrote:
>>>> Can you explain what the memory map looks like from the hardware point of view?
>>>
>>> If you can tell me where to find out :). I seriously have zero experience in VGA mapping - and it sounds as if Blue has a pretty good idea what's going on.
>>
>> He's not interested in the VGA bits, but in the PPC board bits.
>> How are addresses forwarded from the main system bus to the
>> PCI host bridge, for instance?
>
> Yeah, and what I'm trying to tell you is that I know about as much about the g3 beige as you do. However, Ben might know a bit more here. Let's ask him :).

Can I have a bit of context please ? :-)

Cheers,
Ben.
On 09/14/2011 11:25 PM, Blue Swirl wrote: > On Wed, Sep 14, 2011 at 8:15 PM, Avi Kivity<avi@redhat.com> wrote: > > On 09/14/2011 11:06 PM, Blue Swirl wrote: > >> > >> On Wed, Sep 14, 2011 at 8:35 AM, Avi Kivity<avi@redhat.com> wrote: > >> > On 09/14/2011 11:27 AM, Alexander Graf wrote: > >> >> > >> >> On 14.09.2011, at 10:24, Jan Kiszka wrote: > >> >> > >> >> > On 2011-09-14 10:22, Avi Kivity wrote: > >> >> >> On 09/14/2011 11:20 AM, Jan Kiszka wrote: > >> >> >>>> > >> >> >>>> Anyway PCI supports the vga region at 0xa0000-0xc0000. Where > >> >> is it > >> >> >>>> supposed to be mapped? > >> >> >>> > >> >> >>> ...but not all PCI bridges make use of this feature / forward > >> >> legacy > >> >> >>> requests. > >> >> >>> > >> >> >> > >> >> >> Then this should be fixed in the bridge? > >> >> > > >> >> > Yes, it's a PPC bug. > >> >> > >> >> So how does the bridge not forward it then? > >> >> > >> > > >> > I expect that currently vga adds the region to pci_address_space(). We > >> > need > >> > to create a pci_address_space_vga() function that returns a region for > >> > vga > >> > to use. Then add or remove the region to pci_address_space(), within > >> > the > >> > bridge code, depending on whether the bridge forwards vga accesses or > >> > not. > >> > >> Similar treatment should be also needed for VGA IO ports 0x3b0 etc. > >> > >> > (assuming I understood the problem correctly - not sure) > >> > >> I think you did. > > > > Maybe, but the solution can't be right. The bridge can't distinguish > > between a BAR mapped at 0xa0000 and the vga device claiming accesses to > > 0xa0000. Is this what is happening? > > If VGA enable is set, the bridge will forward the accesses to VGA > memory or ports to secondary interface. It doesn't care which device > uses them. This is described in VGA support chapter in the PCI bridge > spec. That doesn't match the original description, where it appeared that the vga window collided with a BAR.
On 09/15/2011 04:24 AM, Benjamin Herrenschmidt wrote:
> On Wed, 2011-09-14 at 23:41 +0200, Alexander Graf wrote:
>> On 14.09.2011, at 22:42, Richard Henderson wrote:
>>> On 09/14/2011 01:35 PM, Alexander Graf wrote:
>>>>> Can you explain what the memory map looks like from the hardware point of view?
>>>>
>>>> If you can tell me where to find out :). I seriously have zero experience in VGA mapping - and it sounds as if Blue has a pretty good idea what's going on.
>>>
>>> He's not interested in the VGA bits, but in the PPC board bits.
>>> How are addresses forwarded from the main system bus to the
>>> PCI host bridge, for instance?
>>
>> Yeah, and what I'm trying to tell you is that I know about as much about the g3 beige as you do. However, Ben might know a bit more here. Let's ask him :).
>
> Can I have a bit of context please ? :-)

What does the memory map regarding a pci vga card look like? Right now it looks like some BAR has collided with the legacy vga range at 0xa0000 (mapped to 0x800a0000) or such. What is supposed to be where?
On 15.09.2011, at 03:24, Benjamin Herrenschmidt wrote:
> On Wed, 2011-09-14 at 23:41 +0200, Alexander Graf wrote:
>> On 14.09.2011, at 22:42, Richard Henderson wrote:
>>> On 09/14/2011 01:35 PM, Alexander Graf wrote:
>>>>> Can you explain what the memory map looks like from the hardware point of view?
>>>>
>>>> If you can tell me where to find out :). I seriously have zero experience in VGA mapping - and it sounds as if Blue has a pretty good idea what's going on.
>>>
>>> He's not interested in the VGA bits, but in the PPC board bits.
>>> How are addresses forwarded from the main system bus to the
>>> PCI host bridge, for instance?
>>
>> Yeah, and what I'm trying to tell you is that I know about as much about the g3 beige as you do. However, Ben might know a bit more here. Let's ask him :).
>
> Can I have a bit of context please ? :-)

Sure :). So the problem is that when emulating the G3 Beige machine in QEMU (default ppc32 target) we also add a PCI VGA adapter. Apparently, on x86 that PCI VGA adapter can map the special VGA regions to somewhere, namely 0xa0000. With the memory api overhaul, this also slipped into the PPC world, where mapping 0xa0000 with VGA adapters is a pretty bad idea, as it's occupied by RAM.

Now the discussion was on which level that mapping would happen and which devices go through which buses, which then would filter certain ranges from being mapped. Basically, which way does a memory request from the CPU go on a G3 Beige machine until it arrives at the VGA adapter?

I hope that captures the actual question. Avi, if I explained this wrong, please correct me.

Alex
> Sure :). So the problem is that when emulating the G3 Beige machine in QEMU (default ppc32 target) we also add a PCI VGA adapter. Apparently, on x86 that PCI VGA adapter can map the special VGA regions to somewhere, namely 0xa0000. With the memory api overhaul, this also slipped into the PPC world, where mapping 0xa0000 with VGA adapters is a pretty bad idea, as it's occupied by RAM.
>
> Now the discussion was on which level that mapping would happen and which devices go through which buses, which then would filter certain ranges from being mapped. Basically, which way does a memory request from the CPU go on a G3 Beige machine until it arrives at the VGA adapter?
>
> I hope that captures the actual question. Avi, if I explained this wrong, please correct me.

Ok so there's several things here.

First, the mapping from CPU addresses to PCI addresses. This depends on the host bridge chip. The MPC106, used in the Beige G3, itself supports different types of mappings.

From memory, the way it's configured in a G3 is to have a 1:1 mapping of 80000000 CPU to 80000000 PCI.

That means that with this basic mapping, you cannot generate memory accesses to low PCI addresses such as 0xa0000.

I don't remember (but it's possible) if it has another region which maps some other (high address) part of the address space down to 0 PCI. Typically that would be a smaller region which specifically allows access to the "ISA hole" that way. There is code in pci_process_bridge_OF_ranges() that will detect such an ISA hole, and while it cannot add it to the normal resource management, it is remembered, so we -could- use it from the VGA code if we wanted to (we don't today).

The problem is that most bridges used on Macs, typically the Apple ones, simply don't provide such a hole. In fact, most bridges used by IBM aren't configured for that either.

Now back to your VGA adapter. The legacy VGA stuff does something called "hard decoding", which means it decodes those legacy addresses, usually without a BAR, but using fixed address decoding. This is old ISA crap emulated on top of PCI; it exists only thanks to a "hack" in the PCI spec, in order to be backward compatible with DOS and shit like that.

Ideally, you should provide a BAR to allow remapping that stuff elsewhere and a setting to enable/disable the hard decoding. That way, on power, the firmware could just whack that setting and turn your VGA device into something that behaves properly like a normal PCI device.

Macs or pSeries never really used the legacy crap. We always had drivers that configured the cards into "native" mode, which means no hard decoding (some old cards still hard decoded some IO ports, but that went away on anything modern), just using proper MMIO registers and a linear framebuffer from the driver. That does mean we never used text mode, though. It would be possible to still allow text mode by having a BAR in the emulated card that can be used to move the VGA legacy crap around, though, if we really wanted to.

BTW, I haven't looked at the code, but I've been told that for some of the SPICE stuff or other higher-level additions, you have implemented special IO ports in the card. This is totally ass backward. IOs are old bad junk and must die. MMIO is semi-acceptable, commands in virtio would be better (and perform better), but not IO ports, please .....

Cheers,
Ben.
On 09/15/2011 01:01 PM, Benjamin Herrenschmidt wrote:
>> Sure :). So the problem is that when emulating the G3 Beige machine in QEMU (default ppc32 target) we also add a PCI VGA adapter. Apparently, on x86 that PCI VGA adapter can map the special VGA regions to somewhere, namely 0xa0000. With the memory api overhaul, this also slipped into the PPC world, where mapping 0xa0000 with VGA adapters is a pretty bad idea, as it's occupied by RAM.
>>
>> Now the discussion was on which level that mapping would happen and which devices go through which buses, which then would filter certain ranges from being mapped. Basically, which way does a memory request from the CPU go on a G3 Beige machine until it arrives at the VGA adapter?
>>
>> I hope that captures the actual question. Avi, if I explained this wrong, please correct me.
>
> Ok so there's several things here.
>
> First, the mapping from CPU addresses to PCI addresses. This depends on the host bridge chip. The MPC106, used in the Beige G3, itself supports different types of mappings.
>
> From memory, the way it's configured in a G3 is to have a 1:1 mapping of 80000000 CPU to 80000000 PCI.
>
> That means that with this basic mapping, you cannot generate memory accesses to low PCI addresses such as 0xa0000.

Alex, what this means, I think, is that pci_grackle_init() needs to create a container memory region and pass it to pci_register_bus() as the pci address space, and create an alias starting at 0x80000000 of the pci address space, and map that alias at address 0x80000000 of the system address space.

See pc_init1() creating pci_memory and passing it to i440fx_init(), which then maps some aliases into the system address space and also gives it to pci_bus_new(). It's essentially the same thing with different details.

> I don't remember (but it's possible) if it has another region which maps some other (high address) part of the address space down to 0 PCI. Typically that would be a smaller region which specifically allows access to the "ISA hole" that way.

That would be done by mapping yet another alias.
diff --git a/hw/heathrow_pic.c b/hw/heathrow_pic.c
index 51996ab..16f48d1 100644
--- a/hw/heathrow_pic.c
+++ b/hw/heathrow_pic.c
@@ -126,7 +126,7 @@ static uint64_t pic_read(void *opaque, target_phys_addr_t addr,
 static const MemoryRegionOps heathrow_pic_ops = {
     .read = pic_read,
     .write = pic_write,
-    .endianness = DEVICE_NATIVE_ENDIAN,
+    .endianness = DEVICE_LITTLE_ENDIAN,
 };
 
 static void heathrow_pic_set_irq(void *opaque, int num, int level)