Message ID | 20180821043343.7514-12-david@gibson.dropbear.id.au
---|---
State | New
Series | [PULL,01/26] spapr_cpu_core: vmstate_[un]register per-CPU data from (un)realizefn
On Tue, 21 Aug 2018 at 05:33, David Gibson <david@gibson.dropbear.id.au> wrote:
>
> From: Alexey Kardashevskiy <aik@ozlabs.ru>
>
> At the moment the PPC64/pseries guest only supports 4K/64K/16M IOMMU
> pages, and POWER8 supports exactly the same set of page sizes, so
> things have worked fine so far.
>
> However, POWER9 supports a different set of sizes - 4K/64K/2M/1G - and
> the last two - 2M and 1G - are not even allowed in the paravirt interface
> (RTAS DDW), so we always end up using 64K IOMMU pages, although we could
> back the guest's 16MB IOMMU pages with 2MB pages on the host.
>
> This stores the supported host IOMMU page sizes in VFIOContainer and uses
> this later when creating a new DMA window. This uses the system page size
> (64K normally; 2M/16M/1G if hugepages are used) as the upper limit of
> the IOMMU page size.
>
> This changes the type of @pagesize to uint64_t as this is what
> memory_region_iommu_get_min_page_size() returns and clz64() takes.
>
> There should be no behavioral changes on platforms other than pseries.
> The guest will keep using the IOMMU page size selected by the PHB pagesize
> property, as this only changes the underlying hardware TCE table
> granularity.

Hi; Coverity has raised an issue (CID 1421903) on this code and
I'm not sure if it's correct or not.

> @@ -144,9 +145,27 @@ int vfio_spapr_create_window(VFIOContainer *container,
>  {
>      int ret;
>      IOMMUMemoryRegion *iommu_mr = IOMMU_MEMORY_REGION(section->mr);
> -    unsigned pagesize = memory_region_iommu_get_min_page_size(iommu_mr);
> +    uint64_t pagesize = memory_region_iommu_get_min_page_size(iommu_mr);
>      unsigned entries, pages;
>      struct vfio_iommu_spapr_tce_create create = { .argsz = sizeof(create) };
> +    long systempagesize = qemu_getrampagesize();
> +
> +    /*
> +     * The host might not support the guest supported IOMMU page size,
> +     * so we will use smaller physical IOMMU pages to back them.
> +     */
> +    if (pagesize > systempagesize) {
> +        pagesize = systempagesize;
> +    }
> +    pagesize = 1ULL << (63 - clz64(container->pgsizes &
> +                                   (pagesize | (pagesize - 1))));

If the argument to clz64() is zero then it will return 64, and
then we will try to do a shift by -1, which is undefined
behaviour.

Can the expression ever be zero? It's not immediately obvious to me
that it can't be, so my suggestion would be that if it is
impossible then an assert() of that would be helpful, and if it
is possible then the code needs to avoid the illegal shift.

> +    if (!pagesize) {
> +        error_report("Host doesn't support page size 0x%"PRIx64
> +                     ", the supported mask is 0x%lx",
> +                     memory_region_iommu_get_min_page_size(iommu_mr),
> +                     container->pgsizes);
> +        return -EINVAL;
> +    }

thanks
-- PMM
On 23/03/2020 21:55, Peter Maydell wrote:
> On Tue, 21 Aug 2018 at 05:33, David Gibson <david@gibson.dropbear.id.au> wrote:
>>
>> From: Alexey Kardashevskiy <aik@ozlabs.ru>
>>
>> At the moment the PPC64/pseries guest only supports 4K/64K/16M IOMMU
>> pages, and POWER8 supports exactly the same set of page sizes, so
>> things have worked fine so far.
>>
>> However, POWER9 supports a different set of sizes - 4K/64K/2M/1G - and
>> the last two - 2M and 1G - are not even allowed in the paravirt interface
>> (RTAS DDW), so we always end up using 64K IOMMU pages, although we could
>> back the guest's 16MB IOMMU pages with 2MB pages on the host.
>>
>> This stores the supported host IOMMU page sizes in VFIOContainer and uses
>> this later when creating a new DMA window. This uses the system page size
>> (64K normally; 2M/16M/1G if hugepages are used) as the upper limit of
>> the IOMMU page size.
>>
>> This changes the type of @pagesize to uint64_t as this is what
>> memory_region_iommu_get_min_page_size() returns and clz64() takes.
>>
>> There should be no behavioral changes on platforms other than pseries.
>> The guest will keep using the IOMMU page size selected by the PHB pagesize
>> property, as this only changes the underlying hardware TCE table
>> granularity.
>
> Hi; Coverity has raised an issue (CID 1421903) on this code and
> I'm not sure if it's correct or not.
>
>
>> @@ -144,9 +145,27 @@ int vfio_spapr_create_window(VFIOContainer *container,
>>  {
>>      int ret;
>>      IOMMUMemoryRegion *iommu_mr = IOMMU_MEMORY_REGION(section->mr);
>> -    unsigned pagesize = memory_region_iommu_get_min_page_size(iommu_mr);
>> +    uint64_t pagesize = memory_region_iommu_get_min_page_size(iommu_mr);
>>      unsigned entries, pages;
>>      struct vfio_iommu_spapr_tce_create create = { .argsz = sizeof(create) };
>> +    long systempagesize = qemu_getrampagesize();
>> +
>> +    /*
>> +     * The host might not support the guest supported IOMMU page size,
>> +     * so we will use smaller physical IOMMU pages to back them.
>> +     */
>> +    if (pagesize > systempagesize) {
>> +        pagesize = systempagesize;
>> +    }

pagesize cannot be zero here (I checked possible code paths)...

>> +    pagesize = 1ULL << (63 - clz64(container->pgsizes &
>> +                                   (pagesize | (pagesize - 1))));
>
> If the argument to clz64() is zero then it will return 64, and
> then we will try to do a shift by -1, which is undefined
> behaviour.

... but the clz64() argument can be zero if, let's say, container->pgsizes=1<<30
(which comes from VFIO) and pagesize=1<<16 (decided by QEMU or the guest).

I'll send a patch to handle clz64() => 64. Thanks,

> Can the expression ever be zero? It's not immediately obvious to me
> that it can't be, so my suggestion would be that if it is
> impossible then an assert() of that would be helpful, and if it
> is possible then the code needs to avoid the illegal shift.
>
>> +    if (!pagesize) {
>> +        error_report("Host doesn't support page size 0x%"PRIx64
>> +                     ", the supported mask is 0x%lx",
>> +                     memory_region_iommu_get_min_page_size(iommu_mr),
>> +                     container->pgsizes);
>> +        return -EINVAL;
>> +    }
>
> thanks
> -- PMM
>
On Tue, Mar 24, 2020 at 03:08:22PM +1100, Alexey Kardashevskiy wrote:
>
>
> On 23/03/2020 21:55, Peter Maydell wrote:
> > On Tue, 21 Aug 2018 at 05:33, David Gibson <david@gibson.dropbear.id.au> wrote:
> >>
> >> From: Alexey Kardashevskiy <aik@ozlabs.ru>
> >>
> >> At the moment the PPC64/pseries guest only supports 4K/64K/16M IOMMU
> >> pages, and POWER8 supports exactly the same set of page sizes, so
> >> things have worked fine so far.
> >>
> >> However, POWER9 supports a different set of sizes - 4K/64K/2M/1G - and
> >> the last two - 2M and 1G - are not even allowed in the paravirt interface
> >> (RTAS DDW), so we always end up using 64K IOMMU pages, although we could
> >> back the guest's 16MB IOMMU pages with 2MB pages on the host.
> >>
> >> This stores the supported host IOMMU page sizes in VFIOContainer and uses
> >> this later when creating a new DMA window. This uses the system page size
> >> (64K normally; 2M/16M/1G if hugepages are used) as the upper limit of
> >> the IOMMU page size.
> >>
> >> This changes the type of @pagesize to uint64_t as this is what
> >> memory_region_iommu_get_min_page_size() returns and clz64() takes.
> >>
> >> There should be no behavioral changes on platforms other than pseries.
> >> The guest will keep using the IOMMU page size selected by the PHB pagesize
> >> property, as this only changes the underlying hardware TCE table
> >> granularity.
> >
> > Hi; Coverity has raised an issue (CID 1421903) on this code and
> > I'm not sure if it's correct or not.
> >
> >
> >> @@ -144,9 +145,27 @@ int vfio_spapr_create_window(VFIOContainer *container,
> >>  {
> >>      int ret;
> >>      IOMMUMemoryRegion *iommu_mr = IOMMU_MEMORY_REGION(section->mr);
> >> -    unsigned pagesize = memory_region_iommu_get_min_page_size(iommu_mr);
> >> +    uint64_t pagesize = memory_region_iommu_get_min_page_size(iommu_mr);
> >>      unsigned entries, pages;
> >>      struct vfio_iommu_spapr_tce_create create = { .argsz = sizeof(create) };
> >> +    long systempagesize = qemu_getrampagesize();
> >> +
> >> +    /*
> >> +     * The host might not support the guest supported IOMMU page size,
> >> +     * so we will use smaller physical IOMMU pages to back them.
> >> +     */
> >> +    if (pagesize > systempagesize) {
> >> +        pagesize = systempagesize;
> >> +    }
>
> pagesize cannot be zero here (I checked possible code paths)...
>
> >> +    pagesize = 1ULL << (63 - clz64(container->pgsizes &
> >> +                                   (pagesize | (pagesize - 1))));
> >
> > If the argument to clz64() is zero then it will return 64, and
> > then we will try to do a shift by -1, which is undefined
> > behaviour.
>
> ... but the clz64() argument can be zero if, let's say, container->pgsizes=1<<30
> (which comes from VFIO) and pagesize=1<<16 (decided by QEMU or the guest).
>
> I'll send a patch to handle clz64() => 64. Thanks,

Thanks, Alexey. Peter, I don't think this is urgent, however - it's
really unlikely in practice.

> > Can the expression ever be zero? It's not immediately obvious to me
> > that it can't be, so my suggestion would be that if it is
> > impossible then an assert() of that would be helpful, and if it
> > is possible then the code needs to avoid the illegal shift.
> >
> >> +    if (!pagesize) {
> >> +        error_report("Host doesn't support page size 0x%"PRIx64
> >> +                     ", the supported mask is 0x%lx",
> >> +                     memory_region_iommu_get_min_page_size(iommu_mr),
> >> +                     container->pgsizes);
> >> +        return -EINVAL;
> >> +    }
> >
> > thanks
> > -- PMM
> >
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index cd1f4af18a..3f31f80b12 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -1136,6 +1136,7 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as,
             info.iova_pgsizes = 4096;
         }
         vfio_host_win_add(container, 0, (hwaddr)-1, info.iova_pgsizes);
+        container->pgsizes = info.iova_pgsizes;
     } else if (ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_IOMMU) ||
                ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_v2_IOMMU)) {
         struct vfio_iommu_spapr_tce_info info;
@@ -1200,6 +1201,7 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as,
         }

         if (v2) {
+            container->pgsizes = info.ddw.pgsizes;
             /*
              * There is a default window in just created container.
              * To make region_add/del simpler, we better remove this
@@ -1214,6 +1216,7 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as,
             }
         } else {
             /* The default table uses 4K pages */
+            container->pgsizes = 0x1000;
             vfio_host_win_add(container, info.dma32_window_start,
                               info.dma32_window_start +
                               info.dma32_window_size - 1,
diff --git a/hw/vfio/spapr.c b/hw/vfio/spapr.c
index 259397c002..becf71a3fc 100644
--- a/hw/vfio/spapr.c
+++ b/hw/vfio/spapr.c
@@ -15,6 +15,7 @@

 #include "hw/vfio/vfio-common.h"
 #include "hw/hw.h"
+#include "exec/ram_addr.h"
 #include "qemu/error-report.h"
 #include "trace.h"

@@ -144,9 +145,27 @@ int vfio_spapr_create_window(VFIOContainer *container,
 {
     int ret;
     IOMMUMemoryRegion *iommu_mr = IOMMU_MEMORY_REGION(section->mr);
-    unsigned pagesize = memory_region_iommu_get_min_page_size(iommu_mr);
+    uint64_t pagesize = memory_region_iommu_get_min_page_size(iommu_mr);
     unsigned entries, pages;
     struct vfio_iommu_spapr_tce_create create = { .argsz = sizeof(create) };
+    long systempagesize = qemu_getrampagesize();
+
+    /*
+     * The host might not support the guest supported IOMMU page size,
+     * so we will use smaller physical IOMMU pages to back them.
+     */
+    if (pagesize > systempagesize) {
+        pagesize = systempagesize;
+    }
+    pagesize = 1ULL << (63 - clz64(container->pgsizes &
+                                   (pagesize | (pagesize - 1))));
+    if (!pagesize) {
+        error_report("Host doesn't support page size 0x%"PRIx64
+                     ", the supported mask is 0x%lx",
+                     memory_region_iommu_get_min_page_size(iommu_mr),
+                     container->pgsizes);
+        return -EINVAL;
+    }

     /*
      * FIXME: For VFIO iommu types which have KVM acceleration to
diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
index 15ea6c26fd..821def0565 100644
--- a/include/hw/vfio/vfio-common.h
+++ b/include/hw/vfio/vfio-common.h
@@ -73,6 +73,7 @@ typedef struct VFIOContainer {
     unsigned iommu_type;
     int error;
     bool initialized;
+    unsigned long pgsizes;
    /*
     * This assumes the host IOMMU can support only a single
     * contiguous IOVA window. We may need to generalize that in