Patchwork [RFC,3/3] hw/pci-host: catch accesses to unassigned pci addresses

Submitter Marcel Apfelbaum
Date Sept. 2, 2013, 2:13 p.m.
Message ID <1378131189-25538-4-git-send-email-marcel.a@redhat.com>
Permalink /patch/271954/
State New

Comments

Marcel Apfelbaum - Sept. 2, 2013, 2:13 p.m.
Added a memory region that has negative priority and
extends over the whole PCI address space. This region will
"catch" all accesses to unassigned PCI addresses,
making it possible to emulate the master-abort scenario
(when no device on the bus claims the transaction).

Signed-off-by: Marcel Apfelbaum <marcel.a@redhat.com>
---
 hw/pci-host/piix.c        |  8 ++++++++
 hw/pci-host/q35.c         | 19 ++++++++++++++++---
 include/hw/pci-host/q35.h |  1 +
 3 files changed, 25 insertions(+), 3 deletions(-)
Peter Maydell - Sept. 2, 2013, 2:39 p.m.
On 2 September 2013 15:13, Marcel Apfelbaum <marcel.a@redhat.com> wrote:
> Added a memory region that has negative priority and
> extends over the whole PCI address space. This region will
> "catch" all accesses to unassigned PCI addresses,
> making it possible to emulate the master-abort scenario
> (when no device on the bus claims the transaction).
>
> Signed-off-by: Marcel Apfelbaum <marcel.a@redhat.com>
> ---
>  hw/pci-host/piix.c        |  8 ++++++++
>  hw/pci-host/q35.c         | 19 ++++++++++++++++---
>  include/hw/pci-host/q35.h |  1 +

This is happening at the wrong layer -- you want this memory
region to be created and managed in the PCI core code so that
we get correct PCI-spec behaviour for all our PCI controllers,
not just the two x86 ones you've changed here.

-- PMM
Marcel Apfelbaum - Sept. 2, 2013, 3:42 p.m.
On Mon, 2013-09-02 at 15:39 +0100, Peter Maydell wrote:
> On 2 September 2013 15:13, Marcel Apfelbaum <marcel.a@redhat.com> wrote:
> > Added a memory region that has negative priority and
> > extends over the whole PCI address space. This region will
> > "catch" all accesses to unassigned PCI addresses,
> > making it possible to emulate the master-abort scenario
> > (when no device on the bus claims the transaction).
> >
> > Signed-off-by: Marcel Apfelbaum <marcel.a@redhat.com>
> > ---
> >  hw/pci-host/piix.c        |  8 ++++++++
> >  hw/pci-host/q35.c         | 19 ++++++++++++++++---
> >  include/hw/pci-host/q35.h |  1 +
> 
> This is happening at the wrong layer -- you want this memory
> region to be created and managed in the PCI core code so that
> we get correct PCI-spec behaviour for all our PCI controllers,
> not just the two x86 ones you've changed here.
I saw that the memory regions are part of the Host state and
duplicated for each host type (like pci_address_space).
Question, why are not pci_address_space and pci_hole present
in a core layer?

I followed the existing code; from what you are saying
I understand that the existing memory regions,
like the one mentioned above, should also be moved
into the core layer, right?
Marcel 

> 
> -- PMM
>
Andreas Färber - Sept. 2, 2013, 3:48 p.m.
Am 02.09.2013 17:42, schrieb Marcel Apfelbaum:
> On Mon, 2013-09-02 at 15:39 +0100, Peter Maydell wrote:
>> On 2 September 2013 15:13, Marcel Apfelbaum <marcel.a@redhat.com> wrote:
>>> Added a memory region that has negative priority and
>>> extends over the whole PCI address space. This region will
>>> "catch" all accesses to unassigned PCI addresses,
>>> making it possible to emulate the master-abort scenario
>>> (when no device on the bus claims the transaction).
>>>
>>> Signed-off-by: Marcel Apfelbaum <marcel.a@redhat.com>
>>> ---
>>>  hw/pci-host/piix.c        |  8 ++++++++
>>>  hw/pci-host/q35.c         | 19 ++++++++++++++++---
>>>  include/hw/pci-host/q35.h |  1 +
>>
>> This is happening at the wrong layer -- you want this memory
>> region to be created and managed in the PCI core code so that
>> we get correct PCI-spec behaviour for all our PCI controllers,
>> not just the two x86 ones you've changed here.
> I saw that the memory regions are part of the Host state and
> duplicated for each host type (like pci_address_space).
> Question, why are not pci_address_space and pci_hole present
> in a core layer?

I would say, because that core layer didn't exist before I added it not
too long ago. My focus was on fixing bugs at the time and getting PReP
Raven PHB into shape for QOM.

> I followed the existing code; from what you are saying
> I understand that the existing memory regions,
> like the one mentioned above, should also be moved
> into the core layer, right?

Consider it all Work In Progress :) and feel free to move more fields
and code there as you guys see fit.

Andreas
Peter Maydell - Sept. 2, 2013, 3:53 p.m.
On 2 September 2013 16:42, Marcel Apfelbaum <marcel.a@redhat.com> wrote:
> On Mon, 2013-09-02 at 15:39 +0100, Peter Maydell wrote:
>> This is happening at the wrong layer -- you want this memory
>> region to be created and managed in the PCI core code so that
>> we get correct PCI-spec behaviour for all our PCI controllers,
>> not just the two x86 ones you've changed here.
> I saw that the memory regions are part of the Host state and
> duplicated for each host type (like pci_address_space).
> Question, why are not pci_address_space and pci_hole present
> in a core layer?
>
> I followed the existing code; from what you are saying
> I understand that the existing memory regions,
> like the one mentioned above, should also be moved
> into the core layer, right?

Ideally, yes, I think so. However that's not particularly
a requirement for the changes you're trying to make here:
at the moment what happens is that the pci controller
creates the PCI memory and io memory regions (or cheats
by reusing the system memory space[*]), passes them to
the PCI core code (via pci_bus_new) and then they're
the PCI code's responsibility to manage. So in the PCI
code you can ignore where they came from when you're
deciding how to manage these containers (and in this case
what you do is just create your default region and map
it in to the container at a suitable priority).

[*] I'm pretty sure this is a bug in all platforms that do it.

-- PMM
Michael S. Tsirkin - Sept. 2, 2013, 3:57 p.m.
On Mon, Sep 02, 2013 at 06:42:33PM +0300, Marcel Apfelbaum wrote:
> On Mon, 2013-09-02 at 15:39 +0100, Peter Maydell wrote:
> > On 2 September 2013 15:13, Marcel Apfelbaum <marcel.a@redhat.com> wrote:
> > > Added a memory region that has negative priority and
> > > extends over the whole PCI address space. This region will
> > > "catch" all accesses to unassigned PCI addresses,
> > > making it possible to emulate the master-abort scenario
> > > (when no device on the bus claims the transaction).
> > >
> > > Signed-off-by: Marcel Apfelbaum <marcel.a@redhat.com>
> > > ---
> > >  hw/pci-host/piix.c        |  8 ++++++++
> > >  hw/pci-host/q35.c         | 19 ++++++++++++++++---
> > >  include/hw/pci-host/q35.h |  1 +
> > 
> > This is happening at the wrong layer -- you want this memory
> > region to be created and managed in the PCI core code so that
> > we get correct PCI-spec behaviour for all our PCI controllers,
> > not just the two x86 ones you've changed here.
> I saw that the memory regions are part of the Host state and
> duplicated for each host type (like pci_address_space).
> Question, why are not pci_address_space and pci_hole present
> in a core layer?


I think we can move them out to core.

> 
> I followed the existing code; from what you are saying
> I understand that the existing memory regions,
> like the one mentioned above, should also be moved
> into the core layer, right?
> Marcel 

pci hole is a PC thing.

> > 
> > -- PMM
> > 
>
Peter Maydell - Sept. 2, 2013, 3:58 p.m.
On 2 September 2013 16:53, Peter Maydell <peter.maydell@linaro.org> wrote:
> On 2 September 2013 16:42, Marcel Apfelbaum <marcel.a@redhat.com> wrote:
>> Question, why are not pci_address_space and pci_hole present
>> in a core layer?
>>
>> I followed the existing code; from what you are saying
>> I understand that the existing memory regions,
>> like the one mentioned above, should also be moved
>> into the core layer, right?
>
> Ideally, yes, I think so.

...but I didn't notice you mentioned pci_hole there. That
is a PCI-controller specific concept (there's no such thing
for the versatilepb PCI controller, for instance) and so it
should remain at the PCI controller level, not the PCI
core code.

-- PMM
Michael S. Tsirkin - Sept. 2, 2013, 4 p.m.
On Mon, Sep 02, 2013 at 04:53:50PM +0100, Peter Maydell wrote:
> On 2 September 2013 16:42, Marcel Apfelbaum <marcel.a@redhat.com> wrote:
> > On Mon, 2013-09-02 at 15:39 +0100, Peter Maydell wrote:
> >> This is happening at the wrong layer -- you want this memory
> >> region to be created and managed in the PCI core code so that
> >> we get correct PCI-spec behaviour for all our PCI controllers,
> >> not just the two x86 ones you've changed here.
> > I saw that the memory regions are part of the Host state and
> > duplicated for each host type (like pci_address_space).
> > Question, why are not pci_address_space and pci_hole present
> > in a core layer?
> >
> > I followed the existing code; from what you are saying
> > I understand that the existing memory regions,
> > like the one mentioned above, should also be moved
> > into the core layer, right?
> 
> Ideally, yes, I think so. However that's not particularly
> a requirement for the changes you're trying to make here:
> at the moment what happens is that the pci controller
> creates the PCI memory and io memory regions (or cheats
> by reusing the system memory space[*]), passes them to
> the PCI core code (via pci_bus_new) and then they're
> the PCI code's responsibility to manage. So in the PCI
> code you can ignore where they came from when you're
> deciding how to manage these containers (and in this case
> what you do is just create your default region and map
> it in to the container at a suitable priority).
> 
> [*] I'm pretty sure this is a bug in all platforms that do it.
> 
> -- PMM

Well as usual this cheat originated with PIIX.
AFAIK PIIX actually has a shared bus for memory and PCI
so this is not a bug there, I think.
Marcel Apfelbaum - Sept. 2, 2013, 4:02 p.m.
On Mon, 2013-09-02 at 16:53 +0100, Peter Maydell wrote:
> On 2 September 2013 16:42, Marcel Apfelbaum <marcel.a@redhat.com> wrote:
> > On Mon, 2013-09-02 at 15:39 +0100, Peter Maydell wrote:
> >> This is happening at the wrong layer -- you want this memory
> >> region to be created and managed in the PCI core code so that
> >> we get correct PCI-spec behaviour for all our PCI controllers,
> >> not just the two x86 ones you've changed here.
> > I saw that the memory regions are part of the Host state and
> > duplicated for each host type (like pci_address_space).
> > Question, why are not pci_address_space and pci_hole present
> > in a core layer?
> >
> > I followed the existing code; from what you are saying
> > I understand that the existing memory regions,
> > like the one mentioned above, should also be moved
> > into the core layer, right?
> 
> Ideally, yes, I think so. However that's not particularly
> a requirement for the changes you're trying to make here:
> at the moment what happens is that the pci controller
> creates the PCI memory and io memory regions (or cheats
> by reusing the system memory space[*]), passes them to
> the PCI core code (via pci_bus_new) and then they're
> the PCI code's responsibility to manage. So in the PCI
> code you can ignore where they came from when you're
> deciding how to manage these containers (and in this case
> what you do is just create your default region and map
> it in to the container at a suitable priority).
Thanks, I am going to follow this path
Marcel

> [*] I'm pretty sure this is a bug in all platforms that do it.
> 
> -- PMM
>
Peter Maydell - Sept. 2, 2013, 4:05 p.m.
On 2 September 2013 17:00, Michael S. Tsirkin <mst@redhat.com> wrote:
> On Mon, Sep 02, 2013 at 04:53:50PM +0100, Peter Maydell wrote:
>> at the moment what happens is that the pci controller
>> creates the PCI memory and io memory regions (or cheats
>> by reusing the system memory space[*]),

>> [*] I'm pretty sure this is a bug in all platforms that do it.

> Well as usual this cheat originated with PIIX.
> AFAIK PIIX actually has a shared bus for memory and PCI
> so this is not a bug there, I think.

It will be when you introduce this "return -1 for unassigned
addresses", though, since you only want that to happen
for PCI accesses, not system memory accesses, right?

thanks
-- PMM
Michael S. Tsirkin - Sept. 2, 2013, 4:17 p.m.
On Mon, Sep 02, 2013 at 05:05:10PM +0100, Peter Maydell wrote:
> On 2 September 2013 17:00, Michael S. Tsirkin <mst@redhat.com> wrote:
> > On Mon, Sep 02, 2013 at 04:53:50PM +0100, Peter Maydell wrote:
> >> at the moment what happens is that the pci controller
> >> creates the PCI memory and io memory regions (or cheats
> >> by reusing the system memory space[*]),
> 
> >> [*] I'm pretty sure this is a bug in all platforms that do it.
> 
> > Well as usual this cheat originated with PIIX.
> > AFAIK PIIX actually has a shared bus for memory and PCI
> > so this is not a bug there, I think.
> 
> It will be when you introduce this "return -1 for unassigned
> addresses", though, since you only want that to happen
> for PCI accesses, not system memory accesses, right?
> 
> thanks
> -- PMM

What happens with PIIX is that everything that is not
in system memory is PCI.
So there's no such thing as "unassigned system memory
address": all unassigned addresses are PCI addresses.

Patch

diff --git a/hw/pci-host/piix.c b/hw/pci-host/piix.c
index dc1718f..27b04a6 100644
--- a/hw/pci-host/piix.c
+++ b/hw/pci-host/piix.c
@@ -104,6 +104,7 @@  struct PCII440FXState {
     MemoryRegion *ram_memory;
     MemoryRegion pci_hole;
     MemoryRegion pci_hole_64bit;
+    MemoryRegion pci_unassigned_memory;
     PAMMemoryRegion pam_regions[13];
     MemoryRegion smram_region;
     uint8_t smm_enabled;
@@ -347,6 +348,13 @@  PCIBus *i440fx_init(PCII440FXState **pi440fx_state,
         i440fx->pci_info.w32.begin = 0xe0000000;
     }
 
+    memory_region_init_io(&f->pci_unassigned_memory, OBJECT(d),
+                          &pci_unassigned_mem_ops, d,
+                          "pci-hole-unassigned", pci_hole_size);
+    memory_region_add_subregion_overlap(f->pci_address_space, pci_hole_start,
+                                        &f->pci_unassigned_memory,
+                                        PCI_UNASSIGNED_MEM_PRIORITY);
+
     memory_region_init_alias(&f->pci_hole, OBJECT(d), "pci-hole", f->pci_address_space,
                              pci_hole_start, pci_hole_size);
     memory_region_add_subregion(f->system_memory, pci_hole_start, &f->pci_hole);
diff --git a/hw/pci-host/q35.c b/hw/pci-host/q35.c
index 12314d8..ee4a4f9 100644
--- a/hw/pci-host/q35.c
+++ b/hw/pci-host/q35.c
@@ -320,13 +320,26 @@  static int mch_init(PCIDevice *d)
 {
     int i;
     MCHPCIState *mch = MCH_PCI_DEVICE(d);
+    hwaddr pci_hole_start;
+    uint64_t pci_hole_size;
+
+    pci_hole_start = mch->below_4g_mem_size;
+    pci_hole_size = 0x100000000ULL - pci_hole_start;
+
+    memory_region_init_io(&mch->pci_unassigned_memory, OBJECT(d),
+                          &pci_unassigned_mem_ops, d,
+                          "pci-hole-unassigned", pci_hole_size);
+    memory_region_add_subregion_overlap(mch->pci_address_space, pci_hole_start,
+                                        &mch->pci_unassigned_memory,
+                                        PCI_UNASSIGNED_MEM_PRIORITY);
+
 
     /* setup pci memory regions */
     memory_region_init_alias(&mch->pci_hole, OBJECT(mch), "pci-hole",
                              mch->pci_address_space,
-                             mch->below_4g_mem_size,
-                             0x100000000ULL - mch->below_4g_mem_size);
-    memory_region_add_subregion(mch->system_memory, mch->below_4g_mem_size,
+                             pci_hole_start,
+                             pci_hole_size);
+    memory_region_add_subregion(mch->system_memory, pci_hole_start,
                                 &mch->pci_hole);
 
     pc_init_pci64_hole(&mch->pci_info, 0x100000000ULL + mch->above_4g_mem_size,
diff --git a/include/hw/pci-host/q35.h b/include/hw/pci-host/q35.h
index 6eb7ab6..c42fba3 100644
--- a/include/hw/pci-host/q35.h
+++ b/include/hw/pci-host/q35.h
@@ -55,6 +55,7 @@  typedef struct MCHPCIState {
     MemoryRegion smram_region;
     MemoryRegion pci_hole;
     MemoryRegion pci_hole_64bit;
+    MemoryRegion pci_unassigned_memory;
     PcPciInfo pci_info;
     uint8_t smm_enabled;
     ram_addr_t below_4g_mem_size;