
hw/misc: slavepci_passthru driver

Message ID 20160118142819.GF26923@neat.it
State New

Commit Message

Francesco Zuliani Jan. 18, 2016, 2:28 p.m. UTC
Hi there,

I'd like to submit this new pci driver (hw/misc) for inclusion,
if you think it could be useful to others as well as ourselves.

The driver "worked for our needs" BUT we haven't done extensive
testing, and this is our first attempt to submit a patch, so I kindly
ask for extra forgiveness.

The "slavepci_passthru" driver is useful in the scenario described
below to implement a simplified passthru when the host CPU does not
support IOMMU and one is interested only in pci target-mode (slave
devices).

Embedded system CPUs (e.g. Atom, AMD G-Series) often lack the VT-d
extensions (IOMMU) needed to pass pci peripherals through to
the guest machine (i.e. the pci pass-thru feature cannot be used).

If one is only interested in using the pci board as a pci-target 
(slave device), this driver mmap(s) the host-pci-bars into the guest
within a virtual pci-device.

This is useful in our case for debugging, via the qemu gdbserver
facility (i.e. the '-s' option in qemu), a system running a bare-metal
executable.

Currently the driver assumes the custom pci card has four 32-bit bars
to be mapped (in the current patch this is mandatory).

HowTo:
To use the new driver one shall:
- define two environment variables assigning the VID and DID to
  associate with the guest pci card
- give the host pci bar addresses to map into the guest.

Example Usage:

Let us suppose that the host has a slave pci device with the
following 4 bars (i.e. the output of lspci -v -s YOUR-CARD | grep Memory):
  Memory at db800000 (32-bit, non-prefetchable) [size=4K]
  Memory at db900000 (32-bit, non-prefetchable) [size=8K]
  Memory at dba00000 (32-bit, non-prefetchable) [size=4K]
  Memory at dbb00000 (32-bit, non-prefetchable) [size=4K]

We can map these bars into a guest pci device with VID=0xe33e DID=0x000a using

SLAVEPASSTHRU_VID="0xe33e" SLAVEPASSTHRU_DID="0xa" qemu-system-x86_64 \
  YOUR-SET-OF-FLAGS \
  -device slavepassthru,size1=4096,baseaddr1=0xdb800000,size2=8192,baseaddr2=0xdb900000,size3=4096,baseaddr3=0xdba00000,size4=4096,baseaddr4=0xdbb00000

Please note that if your device has fewer than four bars, you can give
the same size and base address to the unused bars.
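
If it helps, the sizeN/baseaddrN values can also be derived from
pci-sysfs instead of lspci. Below is a small stand-alone sketch (the
device path is just an example) that prints them by parsing the card's
"resource" file, whose lines hold "start end flags" in hex, one per BAR:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    /* example path; substitute your card's domain:bus:dev.fn */
    const char *path = "/sys/bus/pci/devices/0000:01:00.0/resource";
    FILE *f = fopen(path, "r");
    uint64_t start, end, flags;
    int bar = 0;

    if (!f) {
        perror(path);
        return 1;
    }
    /* the first six lines describe BAR0..BAR5 */
    while (bar < 6 && fscanf(f, "%" SCNx64 " %" SCNx64 " %" SCNx64,
                             &start, &end, &flags) == 3) {
        if (start) {
            printf("size%d=%" PRIu64 ",baseaddr%d=0x%" PRIx64 "\n",
                   bar + 1, end - start + 1, bar + 1, start);
        }
        bar++;
    }
    fclose(f);
    return 0;
}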

Thanks
Francesco Zuliani

Actual commit patch:

From 1371bc4e4681f43a2d02b91ec5d7b84f7ccb1f32 Mon Sep 17 00:00:00 2001
From: Francesco Zuliani <francesco.zuliani@neat.it>
Date: Mon, 18 Jan 2016 14:26:54 +0100
Subject: [PATCH] hw/misc: slave_pci_passthru driver
Add a slavepci_passthru hw/misc pci device driver.

It enables pass-thru on systems missing the IOMMU feature (e.g. Intel
Atom lacks the VT-d extension). It maps the host's target-mode pci-board
bars onto the guest's virtual pci-device bars.

Signed-off-by: Francesco Zuliani <francesco.zuliani@neat.it>
---
 default-configs/pci.mak     |   1 +
 hw/misc/Makefile.objs       |   1 +
 hw/misc/slavepci_passthru.c | 453 ++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 455 insertions(+)
 create mode 100644 hw/misc/slavepci_passthru.c

Comments

Alex Williamson Jan. 18, 2016, 4:41 p.m. UTC | #1
On Mon, 2016-01-18 at 10:16 -0500, Marc-André Lureau wrote:
> Hi
> 
> ----- Original Message -----
> > Hi there,
> > 
> > I'd like to submit this new pci driver (hw/misc) for inclusion,
> > if you think it could be useful to others as well as ourselves.
> > 
> > The driver "worked for our needs" BUT we haven't done extensive
> > testing, and this is our first attempt to submit a patch, so I kindly
> > ask for extra forgiveness.
> > 
> > The "slavepci_passthru" driver is useful in the scenario described
> > below to implement a simplified passthru when the host CPU does not
> > support IOMMU and one is interested only in pci target-mode (slave
> > devices).
> 
> Let's CC Alex, who worked on the most recent framework for something related to that (VFIO).
> 
> > 
> > Embedded system CPUs (e.g. Atom, AMD G-Series) often lack the VT-d
> > extensions (IOMMU) needed to pass pci peripherals through to
> > the guest machine (i.e. the pci pass-thru feature cannot be used).
> > 
> > If one is only interested in using the pci board as a pci-target
> > (slave device), this driver mmap(s) the host-pci-bars into the guest
> > within a virtual pci-device.

What exactly do you mean by pci-target/slave device?  Does this mean
that the device is not DMA capable, i.e. cannot enable BusMaster?

> > This is useful in our case for debugging, via the qemu gdbserver facility
> > (i.e. the '-s' option in qemu), a system running a bare-metal executable.
> > 
> > Currently the driver assumes the custom pci card has four 32-bit bars
> > to be mapped (in the current patch this is mandatory).
> > 
> > HowTo:
> > To use the new driver one shall:
> > - define two environment variables assigning the VID and DID to
> >   associate with the guest pci card
> > - give the host pci bar addresses to map into the guest.
> > 
> > Example Usage:
> > 
> > Let us suppose that the host has a slave pci device with the
> > following 4 bars (i.e. the output of lspci -v -s YOUR-CARD | grep Memory):
> >   Memory at db800000 (32-bit, non-prefetchable) [size=4K]
> >   Memory at db900000 (32-bit, non-prefetchable) [size=8K]
> >   Memory at dba00000 (32-bit, non-prefetchable) [size=4K]
> >   Memory at dbb00000 (32-bit, non-prefetchable) [size=4K]
> > 
> > We can map these bars into a guest pci device with VID=0xe33e DID=0x000a using
> > 
> > SLAVEPASSTHRU_VID="0xe33e" SLAVEPASSTHRU_DID="0xa" qemu-system-x86_64 \
> >   YOUR-SET-OF-FLAGS \
> >   -device
> >   slavepassthru,size1=4096,baseaddr1=0xdb800000,size2=8192,baseaddr2=0xdb900000,size3=4096,baseaddr3=0xdba00000,size4=4096,baseaddr4=0xdbb00000
> > 
> > Please note that if your device has fewer than four bars, you can give
> > the same size and base address to the unused bars.

Those are some pretty serious usage restrictions and using /dev/mem is
really not practical.  The resource files in pci-sysfs would even be a
better option.  I didn't see how IO and MMIO BARs get enabled on the
physical device or whether you support any kind of interrupt scheme.  I
had never really intended QEMU use of this, but you might want to
consider vfio no-iommu mode:

http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/drivers/vfio/vfio.c?id=03a76b60f8ba27974e2d252bc555d2c103420e15

Using this taints the kernel, but maybe that's not something you mind if
you're already letting QEMU access /dev/mem.  The QEMU vfio-pci driver
would need to be modified to use the new device and of course it
wouldn't have IOMMU translation capabilities.  That means that the
BusMaster bit should be protected and MSI/X capabilities should be hidden
from the VM.  It seems more flexible and featureful than what you have
here.  Thanks,

Alex
Francesco Zuliani Jan. 19, 2016, 10:30 a.m. UTC | #2
Hi Alex,


On 01/18/2016 05:41 PM, Alex Williamson wrote:
> On Mon, 2016-01-18 at 10:16 -0500, Marc-André Lureau wrote:
>> Hi
>>
>> ----- Original Message -----
>>> Hi there,
>>>
>>> I'd like to submit this new pci driver (hw/misc) for inclusion,
>>> if you think it could be useful to others as well as ourselves.
>>>
>>> The driver "worked for our needs" BUT we haven't done extensive
>>> testing, and this is our first attempt to submit a patch, so I kindly
>>> ask for extra forgiveness.
>>>
>>> The "slavepci_passthru" driver is useful in the scenario described
>>> below to implement a simplified passthru when the host CPU does not
>>> support IOMMU and one is interested only in pci target-mode (slave
>>> devices).
>> Let's CC Alex, who worked on the most recent framework for something related to that (VFIO).
>>
>>> Embedded system CPUs (e.g. Atom, AMD G-Series) often lack the VT-d
>>> extensions (IOMMU) needed to pass pci peripherals through to
>>> the guest machine (i.e. the pci pass-thru feature cannot be used).
>>>
>>> If one is only interested in using the pci board as a pci-target
>>> (slave device), this driver mmap(s) the host-pci-bars into the guest
>>> within a virtual pci-device.
> What exactly do you mean by pci-target/slave device?  Does this mean
> that the device is not DMA capable, i.e. cannot enable BusMaster?

Yes, exactly. Our approach can be used ONLY if one is NOT interested in
DMA capability (i.e. it is not possible to enable BusMaster).
>>> This is useful in our case for debugging, via the qemu gdbserver facility
>>> (i.e. the '-s' option in qemu), a system running a bare-metal executable.
>>>
>>> Currently the driver assumes the custom pci card has four 32-bit bars
>>> to be mapped (in the current patch this is mandatory).
>>>
>>> HowTo:
>>> To use the new driver one shall:
>>> - define two environment variables assigning the VID and DID to
>>>    associate with the guest pci card
>>> - give the host pci bar addresses to map into the guest.
>>>
>>> Example Usage:
>>>
>>> Let us suppose that the host has a slave pci device with the
>>> following 4 bars (i.e. the output of lspci -v -s YOUR-CARD | grep Memory):
>>>    Memory at db800000 (32-bit, non-prefetchable) [size=4K]
>>>    Memory at db900000 (32-bit, non-prefetchable) [size=8K]
>>>    Memory at dba00000 (32-bit, non-prefetchable) [size=4K]
>>>    Memory at dbb00000 (32-bit, non-prefetchable) [size=4K]
>>>
>>> We can map these bars into a guest pci device with VID=0xe33e DID=0x000a using
>>>
>>> SLAVEPASSTHRU_VID="0xe33e" SLAVEPASSTHRU_DID="0xa" qemu-system-x86_64 \
>>>    YOUR-SET-OF-FLAGS \
>>>    -device
>>>    slavepassthru,size1=4096,baseaddr1=0xdb800000,size2=8192,baseaddr2=0xdb900000,size3=4096,baseaddr3=0xdba00000,size4=4096,baseaddr4=0xdbb00000
>>>
>>> Please note that if your device has fewer than four bars, you can give
>>> the same size and base address to the unused bars.
> Those are some pretty serious usage restrictions and using /dev/mem is
> really not practical.  The resource files in pci-sysfs would even be a
> better option.
Ours was a quick hack to fulfill our needs; the approach via sysfs is
of course the right one, and we would implement it if this patch is of
interest.
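
Roughly, the sysfs-based mapping would replace our /dev/mem open with
something like this per-BAR sketch (the path and size below are just
examples, not part of the current patch):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* example BDF and BAR; real code would build this path from a
     * device property instead of hard-coding it */
    const char *path = "/sys/bus/pci/devices/0000:01:00.0/resource0";
    size_t size = 4096;               /* BAR size, e.g. from lspci */
    int fd = open(path, O_RDWR | O_SYNC);
    void *bar;

    if (fd < 0) {
        perror(path);
        return 1;
    }
    /* each resourceN file starts at the BAR base, so offset is 0;
     * MAP_SHARED so that stores reach the device */
    bar = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (bar == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }
    /* ... device registers are now accessible through 'bar' ... */
    munmap(bar, size);
    close(fd);
    return 0;
}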

> I didn't see how IO and MMIO BARs get enabled on the
> physical device or whether you support any kind of interrupt scheme.
In our case the IO space is not used.
The MMIO space is already enabled.

Our custom board does not have any interrupts, and our quick hack
did not implement them.

>    I
> had never really intended QEMU use of this, but you might want to
> consider vfio no-iommu mode:
>
> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/drivers/vfio/vfio.c?id=03a76b60f8ba27974e2d252bc555d2c103420e15
>
> Using this taints the kernel, but maybe that's not something you mind if
> you're already letting QEMU access /dev/mem.  The QEMU vfio-pci driver
> would need to be modified to use the new device and of course it
> wouldn't have IOMMU translation capabilities.  That means that the
> BusMaster bit should be protected and MSI/X capabilities should be hidden
> from the VM.  It seems more flexible and featureful than what you have
> here.  Thanks,

I was not aware of this interesting patch; I will study it to see if
it fits our use case.
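
At a first glance, a minimal probe along these lines (assuming the
linux/vfio.h that carries the commit above) should tell whether the
host container reports the no-iommu extension; a non-zero result means
the kernel was built with VFIO_NOIOMMU and the vfio module was loaded
with enable_unsafe_noiommu_mode=1, and groups then appear as
/dev/vfio/noiommu-N:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/vfio.h>

int main(void)
{
    int container = open("/dev/vfio/vfio", O_RDWR);

    if (container < 0) {
        perror("open /dev/vfio/vfio");
        return 1;
    }
    if (ioctl(container, VFIO_GET_API_VERSION) != VFIO_API_VERSION) {
        fprintf(stderr, "unexpected VFIO API version\n");
        close(container);
        return 1;
    }
    /* VFIO_CHECK_EXTENSION returns non-zero if the no-iommu backend
     * is registered on this host */
    printf("no-iommu mode %savailable\n",
           ioctl(container, VFIO_CHECK_EXTENSION, VFIO_NOIOMMU_IOMMU)
           ? "" : "not ");
    close(container);
    return 0;
}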

Just to be sure: you mean "taint" in the sense that security is
affected, not licensing issues, am I right?

Thanks a lot for your time

Francesco Zuliani

> Alex
Alex Williamson Jan. 19, 2016, 3:51 p.m. UTC | #3
On Tue, 2016-01-19 at 11:30 +0100, Francesco Zuliani wrote:
> Hi Alex,
> 
> 
> On 01/18/2016 05:41 PM, Alex Williamson wrote:
> > On Mon, 2016-01-18 at 10:16 -0500, Marc-André Lureau wrote:
> > > Hi
> > > 
> > > ----- Original Message -----
> > > > Hi there,
> > > > 
> > > > I'd like to submit this new pci driver (hw/misc) for inclusion,
> > > > if you think it could be useful to others as well as ourselves.
> > > > 
> > > > The driver "worked for our needs" BUT we haven't done extensive
> > > > testing, and this is our first attempt to submit a patch, so I kindly
> > > > ask for extra forgiveness.
> > > > 
> > > > The "slavepci_passthru" driver is useful in the scenario described
> > > > below to implement a simplified passthru when the host CPU does not
> > > > support IOMMU and one is interested only in pci target-mode (slave
> > > > devices).
> > > Let's CC Alex, who worked on the most recent framework for something related to that (VFIO).
> > > 
> > > > Embedded system CPUs (e.g. Atom, AMD G-Series) often lack the VT-d
> > > > extensions (IOMMU) needed to pass pci peripherals through to
> > > > the guest machine (i.e. the pci pass-thru feature cannot be used).
> > > > 
> > > > If one is only interested in using the pci board as a pci-target
> > > > (slave device), this driver mmap(s) the host-pci-bars into the guest
> > > > within a virtual pci-device.
> > What exactly do you mean by pci-target/slave device?  Does this mean
> > that the device is not DMA capable, i.e. cannot enable BusMaster?
> 
> Yes, exactly. Our approach can be used ONLY if one is NOT interested in
> DMA capability (i.e. it is not possible to enable BusMaster).
> > > > This is useful in our case for debugging, via the qemu gdbserver facility
> > > > (i.e. the '-s' option in qemu), a system running a bare-metal executable.
> > > > 
> > > > Currently the driver assumes the custom pci card has four 32-bit bars
> > > > to be mapped (in the current patch this is mandatory).
> > > > 
> > > > HowTo:
> > > > To use the new driver one shall:
> > > > - define two environment variables assigning the VID and DID to
> > > >    associate with the guest pci card
> > > > - give the host pci bar addresses to map into the guest.
> > > > 
> > > > Example Usage:
> > > > 
> > > > Let us suppose that the host has a slave pci device with the
> > > > following 4 bars (i.e. the output of lspci -v -s YOUR-CARD | grep Memory):
> > > >    Memory at db800000 (32-bit, non-prefetchable) [size=4K]
> > > >    Memory at db900000 (32-bit, non-prefetchable) [size=8K]
> > > >    Memory at dba00000 (32-bit, non-prefetchable) [size=4K]
> > > >    Memory at dbb00000 (32-bit, non-prefetchable) [size=4K]
> > > > 
> > > > We can map these bars into a guest pci device with VID=0xe33e DID=0x000a using
> > > > 
> > > > SLAVEPASSTHRU_VID="0xe33e" SLAVEPASSTHRU_DID="0xa" qemu-system-x86_64 \
> > > >    YOUR-SET-OF-FLAGS \
> > > >    -device
> > > >    slavepassthru,size1=4096,baseaddr1=0xdb800000,size2=8192,baseaddr2=0xdb900000,size3=4096,baseaddr3=0xdba00000,size4=4096,baseaddr4=0xdbb00000
> > > > 
> > > > Please note that if your device has fewer than four bars, you can give
> > > > the same size and base address to the unused bars.
> > Those are some pretty serious usage restrictions and using /dev/mem is
> > really not practical.  The resource files in pci-sysfs would even be a
> > better option.
> Ours was a quick hack to fulfill our needs; the approach via sysfs is
> of course the right one, and we would implement it if this patch is of
> interest.
> 
> > I didn't see how IO and MMIO BARs get enabled on the
> > physical device or whether you support any kind of interrupt scheme.
> In our case the IO space is not used.
> The MMIO space is already enabled.
> 
> Our custom board does not have any interrupts, and our quick hack
> did not implement them.
> >    I
> > had never really intended QEMU use of this, but you might want to
> > consider vfio no-iommu mode:
> > 
> > http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/drivers/vfio/vfio.c?id=03a76b60f8ba27974e2d252bc555d2c103420e15
> > 
> > Using this taints the kernel, but maybe that's not something you mind if
> > you're already letting QEMU access /dev/mem.  The QEMU vfio-pci driver
> > would need to be modified to use the new device and of course it
> > wouldn't have IOMMU translation capabilities.  That means that the
> > BusMaster bit should be protected and MSI/X capabilities should be hidden
> > from the VM.  It seems more flexible and featureful than what you have
> > here.  Thanks,
> 
> I was not aware of this interesting patch; I will study it to see if
> it fits our use case.
> 
> Just to be sure: you mean "taint" in the sense that security is
> affected, not licensing issues, am I right?

Yes, it's only tainting for security; the driver is part of the
standard Linux kernel.  There's really no way to guarantee that we can
prevent the user from enabling BusMaster on a DMA-capable device:
even if we trapped access to that config space bit, devices often have
back doors to PCI config space, so it's best just to assume DMA is
possible and mark the host kernel as vulnerable.  Thanks,

Alex
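
For illustration only (and, as noted above, not a real protection,
since devices can have back doors to config space), a config-write
hook in the style of the patch's slavepassthru_write_config() could
mask BusMaster like this hypothetical variant:

/* Hypothetical variant, not part of the patch: clear the BusMaster
 * bit in any guest write that covers the PCI_COMMAND register.
 * Relies on QEMU's hw/pci/pci.h for PCI_COMMAND, PCI_COMMAND_MASTER
 * and pci_default_write_config(). */
static void slavepassthru_write_config(PCIDevice *pci_dev, uint32_t address,
                                       uint32_t val, int len)
{
    if (address <= PCI_COMMAND && address + len > PCI_COMMAND) {
        int shift = (PCI_COMMAND - address) * 8;

        val &= ~((uint32_t)PCI_COMMAND_MASTER << shift);
    }
    pci_default_write_config(pci_dev, address, val, len);
}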

Patch

diff --git a/default-configs/pci.mak b/default-configs/pci.mak
index f250119..699a5a5 100644
--- a/default-configs/pci.mak
+++ b/default-configs/pci.mak
@@ -36,4 +36,5 @@  CONFIG_EDU=y
 CONFIG_VGA=y
 CONFIG_VGA_PCI=y
 CONFIG_IVSHMEM=$(CONFIG_POSIX)
+CONFIG_SLAVEPCIPASSTHRU=$(CONFIG_POSIX)
 CONFIG_ROCKER=y
diff --git a/hw/misc/Makefile.objs b/hw/misc/Makefile.objs
index d4765c2..e346a15 100644
--- a/hw/misc/Makefile.objs
+++ b/hw/misc/Makefile.objs
@@ -20,6 +20,7 @@  common-obj-$(CONFIG_PUV3) += puv3_pm.o
 common-obj-$(CONFIG_MACIO) += macio/
 
 obj-$(CONFIG_IVSHMEM) += ivshmem.o
+obj-$(CONFIG_SLAVEPCIPASSTHRU) += slavepci_passthru.o
 
 obj-$(CONFIG_REALVIEW) += arm_sysctl.o
 obj-$(CONFIG_NSERIES) += cbus.o
diff --git a/hw/misc/slavepci_passthru.c b/hw/misc/slavepci_passthru.c
new file mode 100644
index 0000000..ee709b7
--- /dev/null
+++ b/hw/misc/slavepci_passthru.c
@@ -0,0 +1,453 @@ 
+/*
+ * Host Device PCI-Card Slave Pass-Thru: based on ivshmem "qemu-device"
+ * 
+ *
+ * Author:
+ *      By Francesco Zuliani AT Neat S.r.l. <francesco.zuliani@neat.it>
+ *
+ * Based On: ivshmem.c
+ *          Original Author Cam Macdonell 
+ *
+ * This code is licensed under the GNU GPL v2.
+ *
+ * Contributions after 2012-01-13 are licensed under the terms of the
+ * GNU GPL, version 2 or (at your option) any later version.
+ */
+
+#include "hw/hw.h"
+#include "hw/i386/pc.h"
+#include "hw/pci/pci.h"
+#include "hw/pci/msix.h"
+#include "sysemu/kvm.h"
+#include "migration/migration.h"
+#include "qemu/error-report.h"
+#include "qemu/event_notifier.h"
+#include "sysemu/char.h"
+
+#include <stdio.h>
+#include <sys/mman.h>
+#include <sys/types.h>
+#include <limits.h>
+
+#define PCI_VENDOR_ID_SLAVEPASSTHRU_DEFAULT PCI_VENDOR_ID_REDHAT_QUMRANET
+#define PCI_DEVICE_ID_SLAVEPASSTHRU_DEFAULT 0x2222
+
+#define STRINGIFY_(a) #a
+#define STRINGIFY(a)  STRINGIFY_(a) /* expand macros before stringifying */
+//#define DEBUG_SLAVEPASSTHRU
+#ifdef DEBUG_SLAVEPASSTHRU
+#define SLAVEPASSTHRU_DPRINTF(fmt, ...)        \
+    do {printf("SLAVEPASSTHRU: " fmt, ## __VA_ARGS__); } while (0)
+#else
+#define SLAVEPASSTHRU_DPRINTF(fmt, ...)
+#endif
+
+#define TYPE_SLAVEPASSTHRU "slavepassthru"
+#define SLAVEPASSTHRU(obj) \
+    OBJECT_CHECK(SlavepassthruState, (obj), TYPE_SLAVEPASSTHRU)
+
+typedef struct SlavepassthruState {
+    /*< private >*/
+    PCIDevice parent_obj;
+    /*< public >*/
+  
+    /* We might need to register the BAR before we actually have the memory.
+     * So prepare a container MemoryRegion for the BAR immediately and
+     * add a subregion when we have the memory.
+     */
+    MemoryRegion bar1; /* Bar-region */
+    MemoryRegion bar2; /* Bar-region */
+    MemoryRegion bar3; /* Bar-region */
+    MemoryRegion bar4; /* Bar-region */
+    MemoryRegion bar5; /* Bar-region */
+    MemoryRegion bar6; /* Bar-region */
+
+    MemoryRegion slavepassthru1; /* Sub-region */
+    MemoryRegion slavepassthru2; /* Sub-region */
+    MemoryRegion slavepassthru3; /* Sub-region */
+    MemoryRegion slavepassthru4; /* Sub-region */
+    MemoryRegion slavepassthru5; /* Sub-region */
+    MemoryRegion slavepassthru6; /* Sub-region */
+
+    uint64_t slavepassthru_size1;
+    uint64_t slavepassthru_size2;
+    uint64_t slavepassthru_size3;
+    uint64_t slavepassthru_size4;
+    uint64_t slavepassthru_size5;
+    uint64_t slavepassthru_size6;
+
+    uint64_t slavepassthru_baseaddr1;
+    uint64_t slavepassthru_baseaddr2;
+    uint64_t slavepassthru_baseaddr3;
+    uint64_t slavepassthru_baseaddr4;
+    uint64_t slavepassthru_baseaddr5;
+    uint64_t slavepassthru_baseaddr6;
+
+    int mmap_fd1; /* mmap bar1 file descriptor */
+    int mmap_fd2; /* mmap bar2 file descriptor */
+    int mmap_fd3; /* mmap bar3 file descriptor */
+    int mmap_fd4; /* mmap bar4 file descriptor */
+    int mmap_fd5; /* mmap bar5 file descriptor */
+    int mmap_fd6; /* mmap bar6 file descriptor */
+
+    uint32_t slavepassthru_attr;
+
+    uint32_t slavepassthru_64bit;
+
+    char * sizearg1;
+    char * sizearg2;
+    char * sizearg3;
+    char * sizearg4;
+    char * sizearg5;
+    char * sizearg6;
+    char * baseaddrarg1;
+    char * baseaddrarg2;
+    char * baseaddrarg3;
+    char * baseaddrarg4;
+    char * baseaddrarg5;
+    char * baseaddrarg6;
+
+} SlavepassthruState;
+
+static inline bool is_power_of_two(uint64_t x) {
+    return (x & (x - 1)) == 0;
+}
+
+/* Map one host BAR region through the given /dev/mem fd and register
+ * it with the guest as PCI BAR number 'bar'. */
+static void create_shared_memory_BAR(SlavepassthruState *s, int fd, int bar) {
+
+    void         *ptr = NULL;
+    MemoryRegion *region = NULL;
+    MemoryRegion *subregion = NULL;
+    off_t        baseaddr = 0;
+    size_t       size = 0;
+    char   name[255];
+    
+    if ( bar == 0 ) {
+      s->mmap_fd1 = fd;
+      region      = &s->bar1 ;
+      subregion   = &s->slavepassthru1 ;
+      baseaddr    = s->slavepassthru_baseaddr1;
+      size        = s->slavepassthru_size1 ;
+    } else if ( bar == 1 ) {
+      s->mmap_fd2 = fd;
+      region      = &s->bar2 ;
+      subregion   = &s->slavepassthru2 ;
+      baseaddr    = s->slavepassthru_baseaddr2;
+      size        = s->slavepassthru_size2 ;
+    } else if ( bar == 2 ) {
+      s->mmap_fd3 = fd;
+      region      = &s->bar3 ;
+      subregion   = &s->slavepassthru3 ;
+      baseaddr    = s->slavepassthru_baseaddr3;
+      size        = s->slavepassthru_size3 ;
+    } else if ( bar == 3 ) {
+      s->mmap_fd4 = fd;
+      region      = &s->bar4 ;
+      subregion   = &s->slavepassthru4 ;
+      baseaddr    = s->slavepassthru_baseaddr4;
+      size        = s->slavepassthru_size4 ;
+    } else if ( bar == 4 ) {
+      s->mmap_fd5 = fd;
+      region      = &s->bar5 ;
+      subregion   = &s->slavepassthru5 ;
+      baseaddr    = s->slavepassthru_baseaddr5;
+      size        = s->slavepassthru_size5 ;
+    } else if ( bar == 5 ) {
+      s->mmap_fd6 = fd;
+      region      = &s->bar6 ;
+      subregion   = &s->slavepassthru6 ;
+      baseaddr    = s->slavepassthru_baseaddr6;
+      size        = s->slavepassthru_size6 ;
+    } else {
+      printf("BAD BAR [0-5] CURR: %d\n", bar);
+      exit(1);
+    }
+
+    snprintf(name, 255, "slavepassthru.bar%d", bar);
+    
+    /* MAP_SHARED so that guest writes actually reach the device */
+    ptr = mmap(NULL, size, PROT_READ|PROT_WRITE, MAP_SHARED, fd, baseaddr);
+
+    if (ptr == MAP_FAILED) {
+      printf("DEBUG BAR %d %s\n", bar, name);
+      printf("DEBUG MMAP %p\n", ptr);
+      printf("DEBUG SIZE %ld\n", size);
+      printf("DEBUG ADDR %lx\n", baseaddr);
+      printf("DEBUG FD   %d\n", fd);
+      exit(1);
+    } else {
+      memory_region_init_ram_ptr( subregion, OBJECT(s), name, size, ptr);
+      memory_region_add_subregion( region, 0, subregion);
+      /* region for shared memory */
+      pci_register_bar(PCI_DEVICE(s), bar, s->slavepassthru_attr, region);
+    }
+}
+
+
+static void slavepassthru_reset(DeviceState *d)
+{
+    SlavepassthruState *s = SLAVEPASSTHRU(d);
+
+    (void)s; /* no interrupt/MSI-X support: nothing to do on reset */
+}
+
+static uint64_t slavepassthru_get_size(char *src) {
+
+    uint64_t value;
+    char *ptr;
+
+    value = strtoull(src, &ptr, 10);
+
+    switch (*ptr) {
+        case 0:
+            break;
+        case 'K': case 'k':
+            value <<= 10;
+            break;
+        case 'M': case 'm':
+            value <<= 20;
+            break;
+        case 'G': case 'g':
+            value <<= 30;
+            break;
+        default:
+            error_report("invalid ram size: %s", src);
+            exit(1);
+    }
+
+    /* BARs must be a power of 2 */
+    if (!is_power_of_two(value)) {
+        error_report("grr size must be power of 2");
+	exit(1);
+    }
+
+    return value;
+}
+
+
+static uint64_t slavepassthru_get_hex(char * src)
+{
+
+    uint64_t value;
+    char *ptr;
+    
+    value = strtoull(src, &ptr, 16);
+
+    return value;
+}
+
+static uint64_t slavepassthru_get_baseaddr(char * src)
+{
+  return slavepassthru_get_hex(src);
+}
+
+static void slavepassthru_write_config(PCIDevice *pci_dev, uint32_t address,
+				 uint32_t val, int len)
+{
+    pci_default_write_config(pci_dev, address, val, len);
+}
+
+static int pci_slavepassthru_init(PCIDevice *dev)
+{
+    SlavepassthruState *s = SLAVEPASSTHRU(dev);
+    uint8_t *pci_conf;
+
+    if (s->sizearg1 == NULL ||
+	s->sizearg2 == NULL ||
+	s->sizearg3 == NULL ||
+	s->sizearg4 == NULL ||
+	s->sizearg5 == NULL ||
+	s->sizearg6 == NULL )
+      {
+        error_report("6 sizes mandatory");
+        exit(1);
+      }
+    else
+      {
+	s->slavepassthru_size1 = slavepassthru_get_size(s->sizearg1);
+	s->slavepassthru_size2 = slavepassthru_get_size(s->sizearg2);
+	s->slavepassthru_size3 = slavepassthru_get_size(s->sizearg3);
+	s->slavepassthru_size4 = slavepassthru_get_size(s->sizearg4);
+	s->slavepassthru_size5 = slavepassthru_get_size(s->sizearg5);
+	s->slavepassthru_size6 = slavepassthru_get_size(s->sizearg6);
+      }
+
+    if (s->baseaddrarg1 == NULL ||
+	s->baseaddrarg2 == NULL ||
+	s->baseaddrarg3 == NULL ||
+	s->baseaddrarg4 == NULL ||
+	s->baseaddrarg5 == NULL ||
+	s->baseaddrarg6 == NULL
+	)
+      {
+        error_report("6 baseaddr mandatory");
+        exit(1);
+      }
+    else
+      {
+	s->slavepassthru_baseaddr1 = slavepassthru_get_baseaddr(s->baseaddrarg1);
+	s->slavepassthru_baseaddr2 = slavepassthru_get_baseaddr(s->baseaddrarg2);
+	s->slavepassthru_baseaddr3 = slavepassthru_get_baseaddr(s->baseaddrarg3);
+	s->slavepassthru_baseaddr4 = slavepassthru_get_baseaddr(s->baseaddrarg4);
+	s->slavepassthru_baseaddr5 = slavepassthru_get_baseaddr(s->baseaddrarg5);
+	s->slavepassthru_baseaddr6 = slavepassthru_get_baseaddr(s->baseaddrarg6);
+      }
+
+    pci_conf = dev->config;
+    pci_conf[PCI_COMMAND] = PCI_COMMAND_IO | PCI_COMMAND_MEMORY;
+
+    pci_config_set_interrupt_pin(pci_conf, 1);
+
+    s->mmap_fd1 = 0;
+    s->mmap_fd2 = 0;
+    s->mmap_fd3 = 0;
+    s->mmap_fd4 = 0;
+    s->mmap_fd5 = 0;
+    s->mmap_fd6 = 0;
+
+    memory_region_init(&s->bar1, OBJECT(s), "slavepassthru-bar1-container", s->slavepassthru_size1);
+    memory_region_init(&s->bar2, OBJECT(s), "slavepassthru-bar2-container", s->slavepassthru_size2);
+    memory_region_init(&s->bar3, OBJECT(s), "slavepassthru-bar3-container", s->slavepassthru_size3);
+    memory_region_init(&s->bar4, OBJECT(s), "slavepassthru-bar4-container", s->slavepassthru_size4);
+    memory_region_init(&s->bar5, OBJECT(s), "slavepassthru-bar5-container", s->slavepassthru_size5);
+    memory_region_init(&s->bar6, OBJECT(s), "slavepassthru-bar6-container", s->slavepassthru_size6);
+
+
+    /* PCI card BARs are usually not prefetchable */
+    s->slavepassthru_attr = PCI_BASE_ADDRESS_SPACE_MEMORY ;
+
+    if (s->slavepassthru_64bit) {
+        s->slavepassthru_attr |= PCI_BASE_ADDRESS_MEM_TYPE_64;
+    }
+
+    {
+        /* map /dev/mem immediately; no external server is involved */
+        int fd1=0;
+        int fd2=0;
+        int fd3=0;
+        int fd4=0;
+        int fd5=0;
+        int fd6=0;
+
+        /* open /dev/mem once per BAR; each fd is handed to
+         * create_shared_memory_BAR() below for a single mmap() */
+	if((fd1 = open("/dev/mem", O_RDWR)) < 0) {
+            error_report("could not open /dev/mem");
+            exit(1);
+        }
+
+	if((fd2 = open("/dev/mem", O_RDWR)) < 0) {
+            error_report("could not open /dev/mem");
+            exit(1);
+        }
+
+	if((fd3 = open("/dev/mem", O_RDWR)) < 0) {
+            error_report("could not open /dev/mem");
+            exit(1);
+        }
+
+	if((fd4 = open("/dev/mem", O_RDWR)) < 0) {
+            error_report("could not open /dev/mem");
+            exit(1);
+        }
+
+	if((fd5 = open("/dev/mem", O_RDWR)) < 0) {
+            error_report("could not open /dev/mem");
+            exit(1);
+        }
+
+	if((fd6 = open("/dev/mem", O_RDWR)) < 0) {
+            error_report("could not open /dev/mem");
+            exit(1);
+        }
+
+        create_shared_memory_BAR(s, fd1, 0);
+        create_shared_memory_BAR(s, fd2, 1);
+        create_shared_memory_BAR(s, fd3, 2);
+        create_shared_memory_BAR(s, fd4, 3);
+        create_shared_memory_BAR(s, fd5, 4);
+        create_shared_memory_BAR(s, fd6, 5);
+    }
+
+    dev->config_write = slavepassthru_write_config;
+
+    return 0;
+}
+
+static void pci_slavepassthru_uninit(PCIDevice *dev)
+{
+    SlavepassthruState *s = SLAVEPASSTHRU(dev);
+
+    memory_region_del_subregion(&s->bar1, &s->slavepassthru1);
+    memory_region_del_subregion(&s->bar2, &s->slavepassthru2);
+    memory_region_del_subregion(&s->bar3, &s->slavepassthru3);
+    memory_region_del_subregion(&s->bar4, &s->slavepassthru4);
+    memory_region_del_subregion(&s->bar5, &s->slavepassthru5);
+    memory_region_del_subregion(&s->bar6, &s->slavepassthru6);
+    close(s->mmap_fd1);
+    close(s->mmap_fd2);
+    close(s->mmap_fd3);
+    close(s->mmap_fd4);
+    close(s->mmap_fd5);
+    close(s->mmap_fd6);
+}
+
+static Property slavepassthru_properties[] = {
+    DEFINE_PROP_STRING("size1", SlavepassthruState, sizearg1),
+    DEFINE_PROP_STRING("baseaddr1", SlavepassthruState, baseaddrarg1),
+    DEFINE_PROP_STRING("size2", SlavepassthruState, sizearg2),
+    DEFINE_PROP_STRING("baseaddr2", SlavepassthruState, baseaddrarg2),
+    DEFINE_PROP_STRING("size3", SlavepassthruState, sizearg3),
+    DEFINE_PROP_STRING("baseaddr3", SlavepassthruState, baseaddrarg3),
+    DEFINE_PROP_STRING("size4", SlavepassthruState, sizearg4),
+    DEFINE_PROP_STRING("baseaddr4", SlavepassthruState, baseaddrarg4),
+    DEFINE_PROP_STRING("size5", SlavepassthruState, sizearg5),
+    DEFINE_PROP_STRING("baseaddr5", SlavepassthruState, baseaddrarg5),
+    DEFINE_PROP_STRING("size6", SlavepassthruState, sizearg6),
+    DEFINE_PROP_STRING("baseaddr6", SlavepassthruState, baseaddrarg6),
+
+    DEFINE_PROP_UINT32("use64", SlavepassthruState, slavepassthru_64bit, 0),
+    DEFINE_PROP_END_OF_LIST(),
+};
+
+static void slavepassthru_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
+    char *vid;
+    char *did;
+
+    vid=getenv("SLAVEPASSTHRU_VID");
+    did=getenv("SLAVEPASSTHRU_DID");
+
+    if (vid == NULL || did == NULL) {
+      printf("WARNING: Environment variable SLAVEPASSTHRU_VID e/o SLAVEPASSTHRU_DID are not assigned using DEFAULTS");
+      vid=strdup(STRINGIFY(PCI_VENDOR_ID_SLAVEPASSTHRU_DEFAULT));
+      did=strdup(STRINGIFY(PCI_DEVICE_ID_SLAVEPASSTHRU_DEFAULT));
+    }
+    
+    k->init = pci_slavepassthru_init;
+    k->exit = pci_slavepassthru_uninit;
+    k->vendor_id = slavepassthru_get_hex(vid);
+    k->device_id = slavepassthru_get_hex(did);
+    k->class_id = PCI_CLASS_MEMORY_RAM;
+    dc->reset = slavepassthru_reset;
+    dc->props = slavepassthru_properties;
+    set_bit(DEVICE_CATEGORY_MISC, dc->categories);
+}
+
+static const TypeInfo slavepassthru_info = {
+    .name          = TYPE_SLAVEPASSTHRU,
+    .parent        = TYPE_PCI_DEVICE,
+    .instance_size = sizeof(SlavepassthruState),
+    .class_init    = slavepassthru_class_init,
+};
+
+static void slavepassthru_register_types(void)
+{
+    type_register_static(&slavepassthru_info);
+}
+
+type_init(slavepassthru_register_types)