[RFC,1/5] hw/vfio: Add function for getting reserved_region of device iommu group

Message ID 1510622154-17224-2-git-send-email-zhuyijun@huawei.com
State New
Series
  • [RFC,1/5] hw/vfio: Add function for getting reserved_region of device iommu group

Commit Message

Zhu Yijun Nov. 14, 2017, 1:15 a.m.
From: Zhu Yijun <zhuyijun@huawei.com>

With kernel 4.11, the IOMMU/SMMU will populate the MSI IOVA reserved window and
the PCI reserved window, which have to be excluded from guest IOVA allocations.

However, if those regions fall within the QEMU default virtual memory address
space, they may get allocated for a guest VF DMA IOVA and the DMA will fail.

So this patch gets those reserved regions; the next patches in the series
create corresponding holes in the QEMU RAM address space.

Signed-off-by: Zhu Yijun <zhuyijun@huawei.com>
---
 hw/vfio/common.c              | 67 +++++++++++++++++++++++++++++++++++++++++++
 hw/vfio/pci.c                 |  2 ++
 hw/vfio/platform.c            |  2 ++
 include/exec/memory.h         |  7 +++++
 include/hw/vfio/vfio-common.h |  3 ++
 5 files changed, 81 insertions(+)

Comments

Alex Williamson Nov. 14, 2017, 3:47 p.m. | #1
On Tue, 14 Nov 2017 09:15:50 +0800
<zhuyijun@huawei.com> wrote:

> From: Zhu Yijun <zhuyijun@huawei.com>
> 
> With kernel 4.11, iommu/smmu will populate the MSI IOVA reserved window and
> PCI reserved window which has to be excluded from Guest iova allocations.
> 
> However, If it falls within the Qemu default virtual memory address space,
> then reserved regions may get allocated for a Guest VF DMA iova and it will
> fail.
> 
> So get those reserved regions in this patch and create some holes in the Qemu
> ram address in next patchset.
> 
> Signed-off-by: Zhu Yijun <zhuyijun@huawei.com>
> ---
>  hw/vfio/common.c              | 67 +++++++++++++++++++++++++++++++++++++++++++
>  hw/vfio/pci.c                 |  2 ++
>  hw/vfio/platform.c            |  2 ++
>  include/exec/memory.h         |  7 +++++
>  include/hw/vfio/vfio-common.h |  3 ++
>  5 files changed, 81 insertions(+)

I generally prefer the vfio interface to be more self sufficient, if
there are regions the IOMMU cannot map, we should be describing those
via capabilities on the container through the vfio interface.  If we're
just scraping together things from sysfs, the user can just as easily
do that and provide an explicit memory map for the VM taking the
devices into account.  Of course having a device associated with a
restricted memory map that conflicts with the default memory map for
the VM implies that you can never support hot-add of such devices.
Please cc me on vfio related patches.  Thanks,

Alex
Shameerali Kolothum Thodi Nov. 15, 2017, 9:49 a.m. | #2
Hi Alex,

> -----Original Message-----
> From: Alex Williamson [mailto:alex.williamson@redhat.com]
> Sent: Tuesday, November 14, 2017 3:48 PM
> To: Zhuyijun <zhuyijun@huawei.com>
> Cc: qemu-arm@nongnu.org; qemu-devel@nongnu.org;
> eric.auger@redhat.com; peter.maydell@linaro.org; Shameerali Kolothum
> Thodi <shameerali.kolothum.thodi@huawei.com>; Zhaoshenglong
> <zhaoshenglong@huawei.com>
> Subject: Re: [Qemu-devel] [RFC 1/5] hw/vfio: Add function for getting
> reserved_region of device iommu group
> 
> On Tue, 14 Nov 2017 09:15:50 +0800
> <zhuyijun@huawei.com> wrote:
> 
> > From: Zhu Yijun <zhuyijun@huawei.com>
> >
> > With kernel 4.11, iommu/smmu will populate the MSI IOVA reserved
> > window and PCI reserved window which has to be excluded from Guest iova
> allocations.
> >
> > However, If it falls within the Qemu default virtual memory address
> > space, then reserved regions may get allocated for a Guest VF DMA iova
> > and it will fail.
> >
> > So get those reserved regions in this patch and create some holes in
> > the Qemu ram address in next patchset.
> >
> > Signed-off-by: Zhu Yijun <zhuyijun@huawei.com>
> > ---
> >  hw/vfio/common.c              | 67
> +++++++++++++++++++++++++++++++++++++++++++
> >  hw/vfio/pci.c                 |  2 ++
> >  hw/vfio/platform.c            |  2 ++
> >  include/exec/memory.h         |  7 +++++
> >  include/hw/vfio/vfio-common.h |  3 ++
> >  5 files changed, 81 insertions(+)
> 
> I generally prefer the vfio interface to be more self sufficient, if there are
> regions the IOMMU cannot map, we should be describing those via capabilities
> on the container through the vfio interface.  If we're just scraping together
> things from sysfs, the user can just as easily do that and provide an explicit
> memory map for the VM taking the devices into account. 

Ok. I was under the impression that the purpose of introducing
/sys/kernel/iommu_groups/<group>/reserved_regions was to get the IOVA regions
that are reserved (MSI or non-mappable) for QEMU or other apps to
make use of.  I think this was introduced as part of the "KVM/MSI passthrough
support on ARM" patch series. And if I remember correctly, Eric had
an approach where user space can retrieve all the reserved regions through
the VFIO_IOMMU_GET_INFO ioctl; later this idea was replaced with the
sysfs interface.

Maybe I am missing something here.

> Of course having a
> device associated with a restricted memory map that conflicts with the default
> memory map for the VM implies that you can never support hot-add of such
> devices.

True.  Hot-add and migration will have issues on these platforms.

Thanks,
Shameer
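[For reference, the sysfs file discussed above exposes one region per line as
`<start> <end> <type>`. A quick sketch of computing each window's size from
that format (the addresses below are made-up samples, not read from a real
system; on real hardware you would read
/sys/kernel/iommu_groups/<group>/reserved_regions):]

```shell
# Sample content in the reserved_regions format: <start> <end> <type>.
# These values are illustrative only.
sample='0x0000000008000000 0x00000000080fffff msi
0x00000000e0000000 0x00000000ffffffff reserved'

# Compute each window's size (end - start + 1), as a tool carving holes
# out of the guest memory map would need to.
printf '%s\n' "$sample" | while read -r start end type; do
    printf '%s: base=%s size=0x%x\n' "$type" "$start" $(( end - start + 1 ))
done
```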
Alex Williamson Nov. 15, 2017, 6:25 p.m. | #3
On Wed, 15 Nov 2017 09:49:41 +0000
Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com> wrote:

> Hi Alex,
> 
> > [...]
> 
> Ok. I was under the impression that the purpose of introducing the 
> /sys/kernel/iommu_groups/reserved_regions was to get the IOVA regions 
> that are reserved(MSI or non-mappable) for Qemu or other apps to
> make use of.  I think this was introduced as part of the "KVM/MSI passthrough
> support on ARM" patch series. And if I remember correctly, Eric had 
> an approach where the user space can retrieve all the reserved regions through
> the VFIO_IOMMU_GET_INFO ioctl and later this idea was replaced with the 
> sysfs interface.
> 
> May be I am missing something here.

And sysfs is a good interface if the user wants to use it to configure
the VM in a way that's compatible with a device.  For instance, in your
case, a user could evaluate these reserved regions across all devices in
a system, or even across an entire cluster, and instantiate the VM with
a memory map compatible with hotplugging any of those evaluated
devices (QEMU implementation of allowing the user to do this TBD).
Having the vfio device evaluate these reserved regions only helps in
the cold-plug case.  So the proposed solution is limited in scope and
doesn't address similar needs on other platforms.  There is value to
verifying that a device's IOVA space is compatible with a VM memory map
and modifying the memory map on cold-plug or rejecting the device on
hot-plug, but isn't that why we have an ioctl within vfio to expose
information about the IOMMU?  Why take the path of allowing QEMU to
rummage through sysfs files outside of vfio, implying additional
security and access concerns, rather than filling the gap within the
vfio API?  Thanks,

Alex
Shameerali Kolothum Thodi Nov. 20, 2017, 11:58 a.m. | #4
> -----Original Message-----
> From: Alex Williamson [mailto:alex.williamson@redhat.com]
> Sent: Wednesday, November 15, 2017 6:25 PM
> To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
> Cc: Zhuyijun <zhuyijun@huawei.com>; qemu-arm@nongnu.org; qemu-
> devel@nongnu.org; eric.auger@redhat.com; peter.maydell@linaro.org;
> Zhaoshenglong <zhaoshenglong@huawei.com>
> Subject: Re: [Qemu-devel] [RFC 1/5] hw/vfio: Add function for getting
> reserved_region of device iommu group
> 
> On Wed, 15 Nov 2017 09:49:41 +0000
> Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com> wrote:
> 
> > Hi Alex,
> >
> > > [...]
> >
> > Ok. I was under the impression that the purpose of introducing the
> > /sys/kernel/iommu_groups/reserved_regions was to get the IOVA regions
> > that are reserved(MSI or non-mappable) for Qemu or other apps to
> > make use of.  I think this was introduced as part of the "KVM/MSI passthrough
> > support on ARM" patch series. And if I remember correctly, Eric had
> > an approach where the user space can retrieve all the reserved regions
> through
> > the VFIO_IOMMU_GET_INFO ioctl and later this idea was replaced with the
> > sysfs interface.
> >
> > May be I am missing something here.
> 
> And sysfs is a good interface if the user wants to use it to configure
> the VM in a way that's compatible with a device.  For instance, in your
> case, a user could evaluate these reserved regions across all devices in
> a system, or even across an entire cluster, and instantiate the VM with
> a memory map compatible with hotplugging any of those evaluated
> devices (QEMU implementation of allowing the user to do this TBD).
> Having the vfio device evaluate these reserved regions only helps in
> the cold-plug case.  So the proposed solution is limited in scope and
> doesn't address similar needs on other platforms.  There is value to
> verifying that a device's IOVA space is compatible with a VM memory map
> and modifying the memory map on cold-plug or rejecting the device on
> hot-plug, but isn't that why we have an ioctl within vfio to expose
> information about the IOMMU?  Why take the path of allowing QEMU to
> rummage through sysfs files outside of vfio, implying additional
> security and access concerns, rather than filling the gap within the
> vfio API?

Thanks Alex for the explanation. 

I came across this patch[1] from Eric where he introduced the IOCTL interface to
retrieve the reserved regions. It looks like this can be reworked to accommodate 
the above requirement.

Hi Eric,

Please let us know if you have any plans to respin this patch or else we can take a
look at this and do the rework if it is Ok.

Thanks,
Shameer 

1. https://lists.linuxfoundation.org/pipermail/iommu/2016-November/019002.html
Alex Williamson Nov. 20, 2017, 3:57 p.m. | #5
On Mon, 20 Nov 2017 11:58:43 +0000
Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com> wrote:

> > -----Original Message-----
> > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > Sent: Wednesday, November 15, 2017 6:25 PM
> > To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
> > Cc: Zhuyijun <zhuyijun@huawei.com>; qemu-arm@nongnu.org; qemu-
> > devel@nongnu.org; eric.auger@redhat.com; peter.maydell@linaro.org;
> > Zhaoshenglong <zhaoshenglong@huawei.com>
> > Subject: Re: [Qemu-devel] [RFC 1/5] hw/vfio: Add function for getting
> > reserved_region of device iommu group
> > 
> > On Wed, 15 Nov 2017 09:49:41 +0000
> > Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com> wrote:
> > [...]
> 
> Thanks Alex for the explanation. 
> 
> I came across this patch[1] from Eric where he introduced the IOCTL interface to
> retrieve the reserved regions. It looks like this can be reworked to accommodate 
> the above requirement.

I don't think we need a new ioctl for this, nor do I think that
describing the holes is the correct approach.  The existing
VFIO_IOMMU_GET_INFO ioctl can be extended to support capability chains,
as we've done for VFIO_DEVICE_GET_REGION_INFO.  IMO, we should try to
describe the available fixed IOVA regions which are available for
mapping rather than describing various holes within the address space
which are unavailable.  The latter method always fails to describe the
end of the mappable IOVA space and gets bogged down in trying to
classify the types of holes that might exist.  Thanks,

Alex
Shameerali Kolothum Thodi Nov. 20, 2017, 4:31 p.m. | #6
> -----Original Message-----
> From: Alex Williamson [mailto:alex.williamson@redhat.com]
> Sent: Monday, November 20, 2017 3:58 PM
> To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
> Cc: eric.auger@redhat.com; Zhuyijun <zhuyijun@huawei.com>; qemu-
> arm@nongnu.org; qemu-devel@nongnu.org; peter.maydell@linaro.org;
> Zhaoshenglong <zhaoshenglong@huawei.com>; Linuxarm
> <linuxarm@huawei.com>
> Subject: Re: [Qemu-devel] [RFC 1/5] hw/vfio: Add function for getting
> reserved_region of device iommu group
> 
> On Mon, 20 Nov 2017 11:58:43 +0000
> Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com> wrote:
> 
> > > [...]
> >
> > Thanks Alex for the explanation.
> >
> > I came across this patch[1] from Eric where he introduced the IOCTL
> interface to
> > retrieve the reserved regions. It looks like this can be reworked to
> accommodate
> > the above requirement.
> 
> I don't think we need a new ioctl for this, nor do I think that
> describing the holes is the correct approach.  The existing
> VFIO_IOMMU_GET_INFO ioctl can be extended to support capability chains,
> as we've done for VFIO_DEVICE_GET_REGION_INFO. 

Right, as far as I can see, the above-mentioned patch does exactly that:
it extends the VFIO_IOMMU_GET_INFO ioctl with a capability chain.

> IMO, we should try to
> describe the available fixed IOVA regions which are available for
> mapping rather than describing various holes within the address space
> which are unavailable.  The latter method always fails to describe the
> end of the mappable IOVA space and gets bogged down in trying to
> classify the types of holes that might exist.  Thanks,

Ok. I guess that is to take care of the IOMMU max address width case.

Thanks,
Shameer

Patch

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 7b2924c..01bdbbd 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -40,6 +40,8 @@  struct vfio_group_head vfio_group_list =
     QLIST_HEAD_INITIALIZER(vfio_group_list);
 struct vfio_as_head vfio_address_spaces =
     QLIST_HEAD_INITIALIZER(vfio_address_spaces);
+struct reserved_ram_head reserved_ram_regions =
+    QLIST_HEAD_INITIALIZER(reserved_ram_regions);
 
 #ifdef CONFIG_KVM
 /*
@@ -52,6 +54,71 @@  struct vfio_as_head vfio_address_spaces =
 static int vfio_kvm_device_fd = -1;
 #endif
 
+void update_reserved_regions(hwaddr addr, hwaddr size)
+{
+    RAMRegion *reg, *new;
+
+    new = g_new(RAMRegion, 1);
+    new->base = addr;
+    new->size = size;
+
+    if (QLIST_EMPTY(&reserved_ram_regions)) {
+        QLIST_INSERT_HEAD(&reserved_ram_regions, new, next);
+        return;
+    }
+
+    /* keep reserved_ram_regions sorted by increasing base address */
+    QLIST_FOREACH(reg, &reserved_ram_regions, next) {
+        if (addr > (reg->base + reg->size - 1)) {
+            if (!QLIST_NEXT(reg, next)) {
+                QLIST_INSERT_AFTER(reg, new, next);
+                break;
+            }
+            continue;
+        } else if (addr >= reg->base && addr <= (reg->base + reg->size - 1)) {
+            /* discard the duplicate entry */
+            if (addr == reg->base && size == reg->size) {
+                g_free(new);
+                break;
+            } else {
+                QLIST_INSERT_AFTER(reg, new, next);
+                break;
+            }
+        } else {
+            QLIST_INSERT_BEFORE(reg, new, next);
+            break;
+        }
+    }
+}
+
+void vfio_get_iommu_group_reserved_region(char *group_path)
+{
+    char *filename;
+    FILE *fp;
+    hwaddr start, end;
+    char str[10];
+    struct stat st;
+
+    filename = g_strdup_printf("%s/iommu_group/reserved_regions", group_path);
+    if (stat(filename, &st) < 0) {
+        g_free(filename);
+        return;
+    }
+
+    fp = fopen(filename, "r");
+    if (!fp) {
+        g_free(filename);
+        return;
+    }
+
+    while (fscanf(fp, "%" SCNx64 " %" SCNx64 " %9s", &start, &end, str) == 3) {
+        update_reserved_regions(start, (end - start + 1));
+    }
+
+    g_free(filename);
+    fclose(fp);
+}
+
 /*
  * Common VFIO interrupt disable
  */
diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index c977ee3..9bffb38 100644
--- a/hw/vfio/pci.c
+++ b/hw/vfio/pci.c
@@ -2674,6 +2674,8 @@  static void vfio_realize(PCIDevice *pdev, Error **errp)
     vdev->vbasedev.type = VFIO_DEVICE_TYPE_PCI;
     vdev->vbasedev.dev = &vdev->pdev.qdev;
 
+    vfio_get_iommu_group_reserved_region(vdev->vbasedev.sysfsdev);
+
     tmp = g_strdup_printf("%s/iommu_group", vdev->vbasedev.sysfsdev);
     len = readlink(tmp, group_path, sizeof(group_path));
     g_free(tmp);
diff --git a/hw/vfio/platform.c b/hw/vfio/platform.c
index da84abf..d5bbc3a 100644
--- a/hw/vfio/platform.c
+++ b/hw/vfio/platform.c
@@ -578,6 +578,8 @@  static int vfio_base_device_init(VFIODevice *vbasedev, Error **errp)
         return -errno;
     }
 
+    vfio_get_iommu_group_reserved_region(vbasedev->sysfsdev);
+
     tmp = g_strdup_printf("%s/iommu_group", vbasedev->sysfsdev);
     len = readlink(tmp, group_path, sizeof(group_path));
     g_free(tmp);
diff --git a/include/exec/memory.h b/include/exec/memory.h
index 5ed4042..2bcc83b 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -46,6 +46,13 @@ 
         OBJECT_GET_CLASS(IOMMUMemoryRegionClass, (obj), \
                          TYPE_IOMMU_MEMORY_REGION)
 
+/* Scattered RAM memory region struct */
+typedef struct RAMRegion {
+    hwaddr base;
+    hwaddr size;
+    QLIST_ENTRY(RAMRegion) next;
+} RAMRegion;
+
 typedef struct MemoryRegionOps MemoryRegionOps;
 typedef struct MemoryRegionMmio MemoryRegionMmio;
 
diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
index f3a2ac9..3483ca6 100644
--- a/include/hw/vfio/vfio-common.h
+++ b/include/hw/vfio/vfio-common.h
@@ -161,10 +161,13 @@  VFIOGroup *vfio_get_group(int groupid, AddressSpace *as, Error **errp);
 void vfio_put_group(VFIOGroup *group);
 int vfio_get_device(VFIOGroup *group, const char *name,
                     VFIODevice *vbasedev, Error **errp);
+void update_reserved_regions(hwaddr addr, hwaddr size);
+void vfio_get_iommu_group_reserved_region(char *group_path);
 
 extern const MemoryRegionOps vfio_region_ops;
 extern QLIST_HEAD(vfio_group_head, VFIOGroup) vfio_group_list;
 extern QLIST_HEAD(vfio_as_head, VFIOAddressSpace) vfio_address_spaces;
+extern QLIST_HEAD(reserved_ram_head, RAMRegion) reserved_ram_regions;
 
 #ifdef CONFIG_LINUX
 int vfio_get_region_info(VFIODevice *vbasedev, int index,