[v5,2/2] hw/vfio/common: Fail on VFIO/HW nested paging detection

Message ID 20190829090141.21821-3-eric.auger@redhat.com
State New
Series
  • VFIO/SMMUv3: Fail on VFIO/HW nested paging detection

Commit Message

Auger Eric Aug. 29, 2019, 9:01 a.m. UTC
As of today, VFIO only works along with vIOMMU supporting
caching mode. The SMMUv3 does not support this mode and
requires HW nested paging to work properly with VFIO.

So any attempt to run a VFIO device protected by such IOMMU
would prevent the assigned device from working and at the
moment the guest does not even boot as the default
memory_region_iommu_replay() implementation attempts to
translate the whole address space and completely stalls
the guest.

So let's fail on that case.

Signed-off-by: Eric Auger <eric.auger@redhat.com>

---

v3 -> v4:
- use IOMMU_ATTR_HW_NESTED_PAGING
- do not abort anymore but jump to fail
---
 hw/vfio/common.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

Comments

Alex Williamson Aug. 29, 2019, 6:14 p.m. UTC | #1
On Thu, 29 Aug 2019 11:01:41 +0200
Eric Auger <eric.auger@redhat.com> wrote:

> As of today, VFIO only works along with vIOMMU supporting
> caching mode. The SMMUv3 does not support this mode and
> requires HW nested paging to work properly with VFIO.
> 
> So any attempt to run a VFIO device protected by such IOMMU
> would prevent the assigned device from working and at the
> moment the guest does not even boot as the default
> memory_region_iommu_replay() implementation attempts to
> translate the whole address space and completely stalls
> the guest.

Why doesn't this stall an x86 guest?

I'm a bit confused about what this provides versus the flag_changed
notifier looking for IOMMU_NOTIFIER_MAP, which AIUI is the common
deficiency between VT-d w/o caching-mode and SMMUv3 w/o nested mode.
The iommu notifier is registered prior to calling iommu_replay, so it
seems we already have an opportunity to do something there.  Help me
understand why this is needed.  Thanks,

Alex

> 
> So let's fail on that case.
> 
> Signed-off-by: Eric Auger <eric.auger@redhat.com>
> 
> ---
> 
> v3 -> v4:
> - use IOMMU_ATTR_HW_NESTED_PAGING
> - do not abort anymore but jump to fail
> ---
>  hw/vfio/common.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index 3e03c495d8..e8c009d019 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -606,9 +606,19 @@ static void vfio_listener_region_add(MemoryListener *listener,
>      if (memory_region_is_iommu(section->mr)) {
>          VFIOGuestIOMMU *giommu;
>          IOMMUMemoryRegion *iommu_mr = IOMMU_MEMORY_REGION(section->mr);
> +        bool nested;
>          int iommu_idx;
>  
>          trace_vfio_listener_region_add_iommu(iova, end);
> +
> +        if (!memory_region_iommu_get_attr(iommu_mr,
> +                                          IOMMU_ATTR_NEED_HW_NESTED_PAGING,
> +                                          (void *)&nested) && nested) {
> +            error_report("VFIO/vIOMMU integration based on HW nested paging "
> +                         "is not yet supported");
> +            ret = -EINVAL;
> +            goto fail;
> +        }
>          /*
>           * FIXME: For VFIO iommu types which have KVM acceleration to
>           * avoid bouncing all map/unmaps through qemu this way, this
Auger Eric Aug. 30, 2019, 8:06 a.m. UTC | #2
Hi Alex,

On 8/29/19 8:14 PM, Alex Williamson wrote:
> On Thu, 29 Aug 2019 11:01:41 +0200
> Eric Auger <eric.auger@redhat.com> wrote:
> 
>> As of today, VFIO only works along with vIOMMU supporting
>> caching mode. The SMMUv3 does not support this mode and
>> requires HW nested paging to work properly with VFIO.
>>
>> So any attempt to run a VFIO device protected by such IOMMU
>> would prevent the assigned device from working and at the
>> moment the guest does not even boot as the default
>> memory_region_iommu_replay() implementation attempts to
>> translate the whole address space and completely stalls
>> the guest.
> 
> Why doesn't this stall an x86 guest?
It does not stall on x86 because intel_iommu implements a custom replay
(see vtd_iommu_replay), so the dummy default one is never executed.
vtd_iommu_replay performs a full page table walk, scanning all the valid
entries and calling the MAP notifier on each of them. Although this
operation is tedious, it is nothing compared to the dummy default replay
function, which calls translate() over the whole address range on a
per-page basis.
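For reference, the generic path looks roughly like this (my
approximation of memory_region_iommu_replay() in memory.c; exact helper
names may differ from the tree this series is based on):

/* Approximation of the generic replay in memory.c -- not the exact code */
void memory_region_iommu_replay(IOMMUMemoryRegion *iommu_mr, IOMMUNotifier *n)
{
    MemoryRegion *mr = MEMORY_REGION(iommu_mr);
    IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_GET_CLASS(iommu_mr);
    hwaddr addr, granularity;
    IOMMUTLBEntry iotlb;

    /* vIOMMUs like intel_iommu install their own replay (vtd_iommu_replay)
     * and never reach the loop below */
    if (imrc->replay) {
        imrc->replay(iommu_mr, n);
        return;
    }

    granularity = memory_region_iommu_get_min_page_size(iommu_mr);

    /* the dummy default: one translate() call per page over the whole
     * IOVA range; with the address space exposed by the SMMUv3 this
     * effectively never completes, hence the stalled guest */
    for (addr = 0; addr < memory_region_size(mr); addr += granularity) {
        iotlb = imrc->translate(iommu_mr, addr, IOMMU_NONE, n->iommu_idx);
        if (iotlb.perm != IOMMU_NONE) {
            n->notify(n, &iotlb);
        }
    }
}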

> 
> I'm a bit confused about what this provides versus the flag_changed
> notifier looking for IOMMU_NOTIFIER_MAP, which AIUI is the common
> deficiency between VT-d w/o caching-mode and SMMUv3 w/o nested mode.
> The iommu notifier is registered prior to calling iommu_replay, so it
> seems we already have an opportunity to do something there.  Help me
> understand why this is needed.  Thanks,

At the moment the smmuv3 notify_flag_changed callback implementation
(smmuv3_notify_flag_changed) emits a warning when it detects a MAP
notifier gets registered:

warn_report("SMMUv3 does not support notification on MAP: "
            "device %s will not function properly", pcidev->name);

and then the replay gets executed, looping forever.

I could exit instead of emitting a warning, but the drawback is that on
VFIO hotplug it would also exit, whereas we would rather simply reject
the hotplug.

I think the solution based on the IOMMU MR attribute handles both the
cold-plug and hotplug cases. Also, looking further ahead, I will need
this IOMMU MR attribute for the 2-stage SMMU integration (see [RFC v5
14/29] vfio: Force nested if iommu requires it). I know that series has
been pending for a while and it is still hypothetical, but setting up
2-stage will require specific handling in the vfio common.c code:
opting in to the 2-stage mode and registering specific IOMMU MR
notifiers. Using the IOMMU MR attribute lets me detect which kind of
VFIO/IOMMU integration I need to set up.
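To make that concrete, the SMMU side only has to report the attribute
from its get_attr callback, something along these lines (just a sketch,
not part of this patch; the callback name and the class_init wiring are
assumptions, only IOMMU_ATTR_NEED_HW_NESTED_PAGING comes from the
actual patch):

/* Hypothetical get_attr callback for the SMMU IOMMU memory region */
static int smmu_iommu_memory_region_get_attr(IOMMUMemoryRegion *iommu_mr,
                                             enum IOMMUMemoryRegionAttr attr,
                                             void *data)
{
    if (attr == IOMMU_ATTR_NEED_HW_NESTED_PAGING) {
        /* no caching mode: VFIO can only work with HW nested paging */
        *(bool *)data = true;
        return 0;
    }
    return -EINVAL;
}

static void smmu_iommu_memory_region_class_init(ObjectClass *klass, void *data)
{
    IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_CLASS(klass);

    /* ... existing translate / notify_flag_changed hooks ... */
    imrc->get_attr = smmu_iommu_memory_region_get_attr;
}

The vfio_listener_region_add() hunk below then simply queries this
attribute through memory_region_iommu_get_attr() and bails out.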


Thanks

Eric


> 
> Alex
> 
>>
>> So let's fail on that case.
>>
>> Signed-off-by: Eric Auger <eric.auger@redhat.com>
>>
>> ---
>>
>> v3 -> v4:
>> - use IOMMU_ATTR_HW_NESTED_PAGING
>> - do not abort anymore but jump to fail
>> ---
>>  hw/vfio/common.c | 10 ++++++++++
>>  1 file changed, 10 insertions(+)
>>
>> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
>> index 3e03c495d8..e8c009d019 100644
>> --- a/hw/vfio/common.c
>> +++ b/hw/vfio/common.c
>> @@ -606,9 +606,19 @@ static void vfio_listener_region_add(MemoryListener *listener,
>>      if (memory_region_is_iommu(section->mr)) {
>>          VFIOGuestIOMMU *giommu;
>>          IOMMUMemoryRegion *iommu_mr = IOMMU_MEMORY_REGION(section->mr);
>> +        bool nested;
>>          int iommu_idx;
>>  
>>          trace_vfio_listener_region_add_iommu(iova, end);
>> +
>> +        if (!memory_region_iommu_get_attr(iommu_mr,
>> +                                          IOMMU_ATTR_NEED_HW_NESTED_PAGING,
>> +                                          (void *)&nested) && nested) {
>> +            error_report("VFIO/vIOMMU integration based on HW nested paging "
>> +                         "is not yet supported");
>> +            ret = -EINVAL;
>> +            goto fail;
>> +        }
>>          /*
>>           * FIXME: For VFIO iommu types which have KVM acceleration to
>>           * avoid bouncing all map/unmaps through qemu this way, this
> 
>
Alex Williamson Aug. 30, 2019, 5:22 p.m. UTC | #3
On Fri, 30 Aug 2019 10:06:56 +0200
Auger Eric <eric.auger@redhat.com> wrote:

> Hi Alex,
> 
> On 8/29/19 8:14 PM, Alex Williamson wrote:
> > On Thu, 29 Aug 2019 11:01:41 +0200
> > Eric Auger <eric.auger@redhat.com> wrote:
> >   
> >> As of today, VFIO only works along with vIOMMU supporting
> >> caching mode. The SMMUv3 does not support this mode and
> >> requires HW nested paging to work properly with VFIO.
> >>
> >> So any attempt to run a VFIO device protected by such IOMMU
> >> would prevent the assigned device from working and at the
> >> moment the guest does not even boot as the default
> >> memory_region_iommu_replay() implementation attempts to
> >> translate the whole address space and completely stalls
> >> the guest.  
> > 
> > Why doesn't this stall an x86 guest?  
> It does not stall on x86 because intel_iommu implements a custom replay
> (see vtd_iommu_replay), so the dummy default one is never executed.
> vtd_iommu_replay performs a full page table walk, scanning all the
> valid entries and calling the MAP notifier on each of them. Although
> this operation is tedious, it is nothing compared to the dummy default
> replay function, which calls translate() over the whole address range
> on a per-page basis.

Ah right.  OTOH, what are the arguments against smmuv3 providing a
replay function?

> > I'm a bit confused about what this provides versus the flag_changed
> > notifier looking for IOMMU_NOTIFIER_MAP, which AIUI is the common
> > deficiency between VT-d w/o caching-mode and SMMUv3 w/o nested mode.
> > The iommu notifier is registered prior to calling iommu_replay, so it
> > seems we already have an opportunity to do something there.  Help me
> > understand why this is needed.  Thanks,  
> 
> At the moment the smmuv3 notify_flag_changed callback implementation
> (smmuv3_notify_flag_changed) emits a warning when it detects a MAP
> notifier gets registered:
> 
> warn_report("SMMUv3 does not support notification on MAP: "
>             "device %s will not function properly", pcidev->name);
> 
> and then the replay gets executed, looping forever.
> 
> I could exit instead of emitting a warning, but the drawback is that
> on VFIO hotplug it would also exit, whereas we would rather simply
> reject the hotplug.

There are solutions to the above that modify the existing framework
rather than create a parallel one, though. For instance,
memory_region_register_iommu_notifier() could reject the notifier if
the flag change is incompatible, allowing the failure to propagate back
to vfio and take an exit path similar to the one provided here.
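Roughly what I have in mind, purely as a sketch (it assumes both the
registration function and the notify_flag_changed callback gain an int
return value and an Error **, which is not how they look today):

/* Hypothetical: registration may fail and hand an Error back to the
 * caller instead of only warning inside the IOMMU implementation. */
int memory_region_register_iommu_notifier(MemoryRegion *mr,
                                          IOMMUNotifier *n, Error **errp)
{
    IOMMUMemoryRegion *iommu_mr = IOMMU_MEMORY_REGION(mr);
    IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_GET_CLASS(iommu_mr);
    int ret = 0;

    if (imrc->notify_flag_changed) {
        /* e.g. SMMUv3 would return an error for IOMMU_NOTIFIER_MAP
         * here instead of calling warn_report() */
        ret = imrc->notify_flag_changed(iommu_mr, IOMMU_NOTIFIER_NONE,
                                        n->notifier_flags, errp);
    }
    if (!ret) {
        /* ... existing notifier list insertion and flag update ... */
    }
    return ret;
}

/* and in vfio_listener_region_add() the failure would take the
 * existing error path: */
Error *err = NULL;

if (memory_region_register_iommu_notifier(section->mr, &giommu->n, &err)) {
    error_report_err(err);
    ret = -EINVAL;
    goto fail;
}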
 
> I think the solution based on the IOMMU MR attribute handles both the
> cold-plug and hotplug cases. Also, looking further ahead, I will need
> this IOMMU MR attribute for the 2-stage SMMU integration (see [RFC v5
> 14/29] vfio: Force nested if iommu requires it). I know that series
> has been pending for a while and it is still hypothetical, but setting
> up 2-stage will require specific handling in the vfio common.c code:
> opting in to the 2-stage mode and registering specific IOMMU MR
> notifiers. Using the IOMMU MR attribute lets me detect which kind of
> VFIO/IOMMU integration I need to set up.

Hmm, I'm certainly more on board with that use case.  I guess the
question is whether the problem statement presented here justifies what
seems to be a parallel solution to what we have today, or could have
with some enhancements.  Thanks,

Alex

> >>
> >> So let's fail on that case.
> >>
> >> Signed-off-by: Eric Auger <eric.auger@redhat.com>
> >>
> >> ---
> >>
> >> v3 -> v4:
> >> - use IOMMU_ATTR_HW_NESTED_PAGING
> >> - do not abort anymore but jump to fail
> >> ---
> >>  hw/vfio/common.c | 10 ++++++++++
> >>  1 file changed, 10 insertions(+)
> >>
> >> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> >> index 3e03c495d8..e8c009d019 100644
> >> --- a/hw/vfio/common.c
> >> +++ b/hw/vfio/common.c
> >> @@ -606,9 +606,19 @@ static void vfio_listener_region_add(MemoryListener *listener,
> >>      if (memory_region_is_iommu(section->mr)) {
> >>          VFIOGuestIOMMU *giommu;
> >>          IOMMUMemoryRegion *iommu_mr = IOMMU_MEMORY_REGION(section->mr);
> >> +        bool nested;
> >>          int iommu_idx;
> >>  
> >>          trace_vfio_listener_region_add_iommu(iova, end);
> >> +
> >> +        if (!memory_region_iommu_get_attr(iommu_mr,
> >> +                                          IOMMU_ATTR_NEED_HW_NESTED_PAGING,
> >> +                                          (void *)&nested) && nested) {
> >> +            error_report("VFIO/vIOMMU integration based on HW nested paging "
> >> +                         "is not yet supported");
> >> +            ret = -EINVAL;
> >> +            goto fail;
> >> +        }
> >>          /*
> >>           * FIXME: For VFIO iommu types which have KVM acceleration to
> >>           * avoid bouncing all map/unmaps through qemu this way, this
> > 
> >

Patch

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 3e03c495d8..e8c009d019 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -606,9 +606,19 @@  static void vfio_listener_region_add(MemoryListener *listener,
     if (memory_region_is_iommu(section->mr)) {
         VFIOGuestIOMMU *giommu;
         IOMMUMemoryRegion *iommu_mr = IOMMU_MEMORY_REGION(section->mr);
+        bool nested;
         int iommu_idx;
 
         trace_vfio_listener_region_add_iommu(iova, end);
+
+        if (!memory_region_iommu_get_attr(iommu_mr,
+                                          IOMMU_ATTR_NEED_HW_NESTED_PAGING,
+                                          (void *)&nested) && nested) {
+            error_report("VFIO/vIOMMU integration based on HW nested paging "
+                         "is not yet supported");
+            ret = -EINVAL;
+            goto fail;
+        }
         /*
          * FIXME: For VFIO iommu types which have KVM acceleration to
          * avoid bouncing all map/unmaps through qemu this way, this