Message ID: 20210121110540.33704-12-david@redhat.com
State: New
Series: virtio-mem: vfio support
On Thu, 21 Jan 2021 12:05:40 +0100
David Hildenbrand <david@redhat.com> wrote:

> We support coordinated discarding of RAM using the RamDiscardMgr for
> the VFIO_TYPE1 iommus. Let's unlock support for coordinated discards,
> keeping uncoordinated discards (e.g., via virtio-balloon) disabled if
> possible.
>
> This unlocks virtio-mem + vfio on x86-64. Note that vfio used via "nvme://"
> by the block layer has to be implemented/unlocked separately. For now,
> virtio-mem only supports x86-64; we don't restrict RamDiscardMgr to x86-64,
> though: arm64 and s390x are supposed to work as well, and we'll test
> once unlocking virtio-mem support. The spapr IOMMUs will need special
> care, to be tackled later, e.g., once supporting virtio-mem.
>
> Note: The block size of a virtio-mem device has to be set to a sane size,
> depending on the maximum hotplug size, so as not to run out of vfio
> mappings. The default virtio-mem block size is usually in the range of a
> couple of MiB. The maximum number of mappings is 64k, shared with other
> users. Assume you want to hotplug 256 GiB using virtio-mem - the block size
> would have to be set to at least 8 MiB (resulting in 32768 separate
> mappings).
>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Michael S. Tsirkin" <mst@redhat.com>
> Cc: Alex Williamson <alex.williamson@redhat.com>
> Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Cc: Igor Mammedov <imammedo@redhat.com>
> Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
> Cc: Peter Xu <peterx@redhat.com>
> Cc: Auger Eric <eric.auger@redhat.com>
> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
> Cc: teawater <teawaterz@linux.alibaba.com>
> Cc: Marek Kedzierski <mkedzier@redhat.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  hw/vfio/common.c | 63 +++++++++++++++++++++++++++++++++++++++---------
>  1 file changed, 51 insertions(+), 12 deletions(-)

Acked-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>

>
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index 15ecd05a4b..d879b8ab92 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -135,6 +135,27 @@ static const char *index_to_str(VFIODevice *vbasedev, int index)
>      }
>  }
>
> +static int vfio_ram_block_discard_disable(VFIOContainer *container, bool state)
> +{
> +    switch (container->iommu_type) {
> +    case VFIO_TYPE1v2_IOMMU:
> +    case VFIO_TYPE1_IOMMU:
> +        /* We support coordinated discarding of RAM via the RamDiscardMgr. */
> +        return ram_block_uncoordinated_discard_disable(state);
> +    default:
> +        /*
> +         * VFIO_SPAPR_TCE_IOMMU most probably works just fine with
> +         * RamDiscardMgr, however, it is completely untested.
> +         *
> +         * VFIO_SPAPR_TCE_v2_IOMMU with "DMA memory preregistering" does
> +         * completely the opposite of managing mapping/pinning dynamically as
> +         * required by RamDiscardMgr. We would have to special-case sections
> +         * with a RamDiscardMgr.
> +         */
> +        return ram_block_discard_disable(state);
> +    }
> +}
> +
>  int vfio_set_irq_signaling(VFIODevice *vbasedev, int index, int subindex,
>                             int action, int fd, Error **errp)
>  {
> @@ -1979,15 +2000,25 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as,
>       * new memory, it will not yet set ram_block_discard_set_required() and
>       * therefore, neither stops us here or deals with the sudden memory
>       * consumption of inflated memory.
> +     *
> +     * We do support discarding of memory coordinated via the RamDiscardMgr
> +     * with some IOMMU types. vfio_ram_block_discard_disable() handles the
> +     * details once we know which type of IOMMU we are using.
>       */
> -    ret = ram_block_discard_disable(true);
> -    if (ret) {
> -        error_setg_errno(errp, -ret, "Cannot set discarding of RAM broken");
> -        return ret;
> -    }
>
>      QLIST_FOREACH(container, &space->containers, next) {
>          if (!ioctl(group->fd, VFIO_GROUP_SET_CONTAINER, &container->fd)) {
> +            ret = vfio_ram_block_discard_disable(container, true);
> +            if (ret) {
> +                error_setg_errno(errp, -ret,
> +                                 "Cannot set discarding of RAM broken");
> +                if (ioctl(group->fd, VFIO_GROUP_UNSET_CONTAINER,
> +                          &container->fd)) {
> +                    error_report("vfio: error disconnecting group %d from"
> +                                 " container", group->groupid);
> +                }
> +                return ret;
> +            }
>              group->container = container;
>              QLIST_INSERT_HEAD(&container->group_list, group, container_next);
>              vfio_kvm_device_add_group(group);
> @@ -2025,6 +2056,12 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as,
>          goto free_container_exit;
>      }
>
> +    ret = vfio_ram_block_discard_disable(container, true);
> +    if (ret) {
> +        error_setg_errno(errp, -ret, "Cannot set discarding of RAM broken");
> +        goto free_container_exit;
> +    }
> +
>      switch (container->iommu_type) {
>      case VFIO_TYPE1v2_IOMMU:
>      case VFIO_TYPE1_IOMMU:
> @@ -2072,7 +2109,7 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as,
>          if (ret) {
>              error_setg_errno(errp, errno, "failed to enable container");
>              ret = -errno;
> -            goto free_container_exit;
> +            goto enable_discards_exit;
>          }
>      } else {
>          container->prereg_listener = vfio_prereg_listener;
> @@ -2084,7 +2121,7 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as,
>              ret = -1;
>              error_propagate_prepend(errp, container->error,
>                  "RAM memory listener initialization failed: ");
> -            goto free_container_exit;
> +            goto enable_discards_exit;
>          }
>      }
>
> @@ -2097,7 +2134,7 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as,
>              if (v2) {
>                  memory_listener_unregister(&container->prereg_listener);
>              }
> -            goto free_container_exit;
> +            goto enable_discards_exit;
>          }
>
>          if (v2) {
> @@ -2112,7 +2149,7 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as,
>              if (ret) {
>                  error_setg_errno(errp, -ret,
>                                   "failed to remove existing window");
> -                goto free_container_exit;
> +                goto enable_discards_exit;
>              }
>          } else {
>              /* The default table uses 4K pages */
> @@ -2153,6 +2190,9 @@ listener_release_exit:
>      vfio_kvm_device_del_group(group);
>      vfio_listener_release(container);
>
> +enable_discards_exit:
> +    vfio_ram_block_discard_disable(container, false);
> +
>  free_container_exit:
>      g_free(container);
>
> @@ -2160,7 +2200,6 @@ close_fd_exit:
>      close(fd);
>
>  put_space_exit:
> -    ram_block_discard_disable(false);
>      vfio_put_address_space(space);
>
>      return ret;
> @@ -2282,7 +2321,7 @@ void vfio_put_group(VFIOGroup *group)
>      }
>
>      if (!group->ram_block_discard_allowed) {
> -        ram_block_discard_disable(false);
> +        vfio_ram_block_discard_disable(group->container, false);
>      }
>      vfio_kvm_device_del_group(group);
>      vfio_disconnect_container(group);
> @@ -2336,7 +2375,7 @@ int vfio_get_device(VFIOGroup *group, const char *name,
>
>      if (!group->ram_block_discard_allowed) {
>          group->ram_block_discard_allowed = true;
> -        ram_block_discard_disable(false);
> +        vfio_ram_block_discard_disable(group->container, false);
>      }
>  }