
[v3,5/5] intel_iommu: Optimize out some unnecessary UNMAP calls

Message ID 20230608095231.225450-6-zhenzhong.duan@intel.com
State New
Series Optimize UNMAP call and bug fix

Commit Message

Duan, Zhenzhong June 8, 2023, 9:52 a.m. UTC
Commit 63b88968f1 ("intel-iommu: rework the page walk logic") added logic
to record mapped IOVA ranges so that we only send MAP or UNMAP notifications
when necessary. But there is still a corner case of unnecessary UNMAPs.

During invalidation, either domain or device selective, we only need to
unmap when there are recorded mapped IOVA ranges, presuming most OSes
allocate IOVA ranges contiguously, e.g. on x86, Linux sets up mappings
from 0xffffffff downwards.

Strace shows each UNMAP ioctl taking 14us (0.000014s), and one invalidation
issues 28 such ioctl()s, as the two notifiers on x86 are split into
power-of-2 pieces.

ioctl(48, VFIO_IOMMU_UNMAP_DMA, 0x7ffffd5c42f0) = 0 <0.000014>
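
(For illustration only, not part of this patch: a standalone sketch of how
one notifier range is carved into aligned power-of-2 chunks, each of which
becomes one UNMAP ioctl. pow2_mask() is a simplified stand-in for QEMU's
dma_aligned_pow2_mask(), and the [0, 0xfee00000) range is an assumption
based on the x86 interrupt window; the first two chunks it prints match the
error log quoted below.)

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for dma_aligned_pow2_mask(): the largest aligned
 * power-of-2 mask usable at 'start' without crossing 'end'. */
static uint64_t pow2_mask(uint64_t start, uint64_t end)
{
    uint64_t mask = start ? (start & -start) - 1 : UINT64_MAX;

    while (mask && start + mask > end) {
        mask >>= 1;
    }
    return mask;
}

int main(void)
{
    /* Assumed low notifier range on x86: [0, 0xfee00000) */
    uint64_t start = 0, end = 0xfee00000ULL - 1;

    while (start <= end) {
        uint64_t mask = pow2_mask(start, end);

        printf("UNMAP iova=0x%" PRIx64 " size=0x%" PRIx64 "\n",
               start, mask + 1);
        start += mask + 1;
    }
    return 0;
}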

The other purpose of this patch is to eliminate a noisy error log when we
work with IOMMUFD. It looks like the duplicate UNMAP call fails with IOMMUFD
while it always succeeds with the legacy container. This behavior difference
leads to the error log below with IOMMUFD:

IOMMU_IOAS_UNMAP failed: No such file or directory
vfio_container_dma_unmap(0x562012d6b6d0, 0x0, 0x80000000) = -2 (No such file or directory)
IOMMU_IOAS_UNMAP failed: No such file or directory
vfio_container_dma_unmap(0x562012d6b6d0, 0x80000000, 0x40000000) = -2 (No such file or directory)
...

Suggested-by: Yi Liu <yi.l.liu@intel.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
---
 hw/i386/intel_iommu.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

Comments

Peter Xu June 8, 2023, 2:05 p.m. UTC | #1
On Thu, Jun 08, 2023 at 05:52:31PM +0800, Zhenzhong Duan wrote:
> Commit 63b88968f1 ("intel-iommu: rework the page walk logic") added logic
> to record mapped IOVA ranges so that we only send MAP or UNMAP notifications
> when necessary. But there is still a corner case of unnecessary UNMAPs.
> 
> During invalidation, either domain or device selective, we only need to
> unmap when there are recorded mapped IOVA ranges, presuming most OSes
> allocate IOVA ranges contiguously, e.g. on x86, Linux sets up mappings
> from 0xffffffff downwards.
> 
> Strace shows each UNMAP ioctl taking 14us (0.000014s), and one invalidation
> issues 28 such ioctl()s, as the two notifiers on x86 are split into
> power-of-2 pieces.
> 
> ioctl(48, VFIO_IOMMU_UNMAP_DMA, 0x7ffffd5c42f0) = 0 <0.000014>

Thanks for the numbers, but for a fair comparison IMHO it needs to be a
before/after comparison of the whole time used to unmap the AS.  It'll be
great to have finer-grained measurements like each ioctl, but the total
time used should be more important (especially to capture "after").  Side
note: I don't think every UNMAP ioctl will take the same time; it should
depend on whether a mapping exists.

Actually it's hard to tell because this also depends on what's in the iova
tree.. but still at least we know how it works in some cases.

> 
> The other purpose of this patch is to eliminate a noisy error log when we
> work with IOMMUFD. It looks like the duplicate UNMAP call fails with IOMMUFD
> while it always succeeds with the legacy container. This behavior difference
> leads to the error log below with IOMMUFD:
> 
> IOMMU_IOAS_UNMAP failed: No such file or directory
> vfio_container_dma_unmap(0x562012d6b6d0, 0x0, 0x80000000) = -2 (No such file or directory)
> IOMMU_IOAS_UNMAP failed: No such file or directory
> vfio_container_dma_unmap(0x562012d6b6d0, 0x80000000, 0x40000000) = -2 (No such file or directory)
> ...

My gut feeling is that the major motivation is actually this (not the perf);
tens of ~14us ioctls are really nothing on a rare event..

Jason Wang raised a question in the previous version and I think JasonG's
reply is here:

https://lore.kernel.org/r/ZHTaQXd3ZybmhCLb@nvidia.com

JasonG: sorry I know zero on iommufd api yet, but you said:

        The VFIO emulation functions should do whatever VFIO does; is there
        a mistake there?

IIUC what VFIO does here is return success if unmapping over nothing, rather
than failing like iommufd.  Curious (like JasonW) why that retval?  I'd
assume that for returning "how much was unmapped" we can at least still
return 0 for nothing.

Are you suggesting that we can handle that on the QEMU side, special-casing
-ENOENT for iommufd only (a question to Yi?).

If that's already a kernel ABI, I'm not sure whether it's even up for
discussion, but just raising it.
Jason Gunthorpe June 8, 2023, 2:11 p.m. UTC | #2
On Thu, Jun 08, 2023 at 10:05:08AM -0400, Peter Xu wrote:

> IIUC what VFIO does here is return success if unmapping over nothing, rather
> than failing like iommufd.  Curious (like JasonW) why that retval?  I'd
> assume that for returning "how much was unmapped" we can at least still
> return 0 for nothing.

In iommufd maps are objects; you can only map or unmap entire
objects. The ability to batch-unmap objects by specifying a range
that spans many is something that was easy to do and that VFIO had,
but I'm not sure it is actually useful..

So asking to unmap an object that is already known not to be mapped is
actually possibly racy, especially if you consider iommufd's support
for kernel-side IOVA allocation. It should not be done, or if it is
done, it should be protected by userspace locking.

For VFIO, long long ago, it could unmap an IOVA page at a time - ie it
wasn't object-based. In that world it made some sense that the unmap would
'succeed' as the end result was unmapped.

> Are you suggesting that we can handle that on the QEMU side, special-casing
> -ENOENT for iommufd only (a question to Yi?).

Yes, this can be done, ENOENT is reliably returned and qemu doesn't
use the kernel-side IOVA allocator.

But if there are proper locks to prevent a map/unmap race, then
there should also be proper locks to check that there is no mapping in
the first place and avoid the kernel call..

Jason
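
(A sketch of what that QEMU-side handling could look like, assuming the
iommufd backend funnels unmaps through one helper; the helper name is made
up, while the struct and ioctl are the iommufd uAPI from <linux/iommufd.h>:)

#include <errno.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/iommufd.h>

/* Sketch only: tolerate unmap-over-nothing, matching VFIO type1 semantics. */
static int ioas_unmap(int iommufd, uint32_t ioas_id,
                      uint64_t iova, uint64_t length)
{
    struct iommu_ioas_unmap unmap = {
        .size = sizeof(unmap),
        .ioas_id = ioas_id,
        .iova = iova,
        .length = length,
    };

    if (ioctl(iommufd, IOMMU_IOAS_UNMAP, &unmap)) {
        if (errno == ENOENT) {
            return 0; /* nothing mapped there: treat as success, like VFIO */
        }
        return -errno;
    }
    return 0;
}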
Peter Xu June 8, 2023, 3:40 p.m. UTC | #3
On Thu, Jun 08, 2023 at 11:11:15AM -0300, Jason Gunthorpe wrote:
> On Thu, Jun 08, 2023 at 10:05:08AM -0400, Peter Xu wrote:
> 
> > IIUC what VFIO does here is return success if unmapping over nothing, rather
> > than failing like iommufd.  Curious (like JasonW) why that retval?  I'd
> > assume that for returning "how much was unmapped" we can at least still
> > return 0 for nothing.
> 
> In iommufd maps are objects; you can only map or unmap entire
> objects. The ability to batch-unmap objects by specifying a range
> that spans many is something that was easy to do and that VFIO had,
> but I'm not sure it is actually useful..
> 
> So asking to unmap an object that is already known not to be mapped is
> actually possibly racy, especially if you consider iommufd's support
> for kernel-side IOVA allocation. It should not be done, or if it is
> done, it should be protected by userspace locking.
> 
> For VFIO, long long ago, it could unmap an IOVA page at a time - ie it
> wasn't object-based. In that world it made some sense that the unmap would
> 'succeed' as the end result was unmapped.
> 
> > Are you suggesting that we can handle that on the QEMU side, special-casing
> > -ENOENT for iommufd only (a question to Yi?).
> 
> Yes, this can be done, ENOENT is reliably returned and qemu doesn't
> use the kernel-side IOVA allocator.
> 
> But if there are proper locks to prevent a map/unmap race, then
> there should also be proper locks to check that there is no mapping in
> the first place and avoid the kernel call..

The problem is that IIRC the guest iommu driver can do smart things like
batching invalidations, which means that by the time QEMU gets one from the
guest OS it may no longer match a single mapped object.

We can definitely look up every single object and explicitly unmap it, but
that loses part of the point of the batching that the guest OS does.
Logically QEMU can redirect that batched invalidation into one ioctl() to
the host, rather than a lot of smaller ones.
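
(A sketch of that redirection, with made-up names and the call shape of the
vfio_container_dma_unmap() trace above; it is only correct if the backend
accepts unmapping a range that contains holes, which is exactly the open
question:)

/* Fold a guest batch of invalidations into one host unmap covering the
 * whole span, instead of one call per entry. */
uint64_t lo = UINT64_MAX, hi = 0;

for (unsigned i = 0; i < n_inv; i++) {
    lo = MIN(lo, inv[i].iova);
    hi = MAX(hi, inv[i].iova + inv[i].len);
}
vfio_container_dma_unmap(container, lo, hi - lo);  /* one ioctl */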

While for this specific patch - Zhenzhong/Yi, do you agree that we should
just handle -ENOENT in the iommufd series (I assume it's still under work),
so that this specific patch is only about the performance difference?

Thanks,
Jason Gunthorpe June 8, 2023, 4:27 p.m. UTC | #4
On Thu, Jun 08, 2023 at 11:40:55AM -0400, Peter Xu wrote:

> > But if there are proper locks to prevent a map/unmap race, then
> > there should also be proper locks to check that there is no mapping in
> > the first place and avoid the kernel call..
> 
> The problem is that IIRC the guest iommu driver can do smart things like
> batching invalidations, which means that by the time QEMU gets one from the
> guest OS it may no longer match a single mapped object.

qemu has to fix it. The kernel API is object-based, not page-based.
You cannot unmap portions of a prior mapping.

I assume for this kind of emulation it is doing 4k objects because
it has no idea what size of mapping the client will use?

> We can definitely look up every single object and explicitly unmap it, but
> that loses part of the point of the batching that the guest OS does.

You don't need every single object, but it would be faster to check
where things are mapped and then call the kernel correctly instead of
trying to iterate over the unmapped results.

Jason
Peter Xu June 8, 2023, 7:53 p.m. UTC | #5
On Thu, Jun 08, 2023 at 01:27:50PM -0300, Jason Gunthorpe wrote:
> On Thu, Jun 08, 2023 at 11:40:55AM -0400, Peter Xu wrote:
> 
> > > But if there are proper locks to prevent a map/unmap race, then
> > > there should also be proper locks to check that there is no mapping in
> > > the first place and avoid the kernel call..
> > 
> > The problem is that IIRC the guest iommu driver can do smart things like
> > batching invalidations, which means that by the time QEMU gets one from the
> > guest OS it may no longer match a single mapped object.
> 
> qemu has to fix it. The kernel API is object-based, not page-based.
> You cannot unmap portions of a prior mapping.
> 
> I assume for this kind of emulation it is doing 4k objects because
> it has no idea what size of mapping the client will use?

MAP is fine: before notify()ing VFIO or anything else, qemu scans the
pgtable and handles it at page size or huge page size, so it can be >4K but
is always guest iommu pgsize aligned.

I think we rely on the guest behaving right, so it should also always
operate at that minimum size when mapped huge.  It shouldn't violate the
"per-object" protocol of iommufd.

IIUC the same applies to vfio type1v2 in that respect.

It's more about UNMAP batching, but I assume iommufd is fine if it
tolerates holes inside the range for that case.  The only difference, the
"not exist" -ENOENT, seems to be just the same as before as long as QEMU
treats it as 0 like before.

Though that does look slightly special, because the whole empty UNMAP
region can be seen as a hole too; not sure when that -ENOENT will be useful
if qemu should always bypass it anyway.  Indeed not a problem for qemu.

> 
> > We can definitely look up every single object and explicitly unmap it, but
> > that loses part of the point of the batching that the guest OS does.
> 
> You don't need every single object, but it would be faster to check
> where things are mapped and then call the kernel correctly instead of
> trying to iterate over the unmapped results.

Maybe yes.  If so, it'll be great if Zhenzhong could attach some proof
of that, and meanwhile drop the "iommufd UNMAP warnings" section from the
commit message.

Thanks,
Peter Xu June 8, 2023, 8:34 p.m. UTC | #6
On Thu, Jun 08, 2023 at 05:52:31PM +0800, Zhenzhong Duan wrote:
>      while (remain >= VTD_PAGE_SIZE) {
> -        IOMMUTLBEvent event;
>          uint64_t mask = dma_aligned_pow2_mask(start, end, s->aw_bits);
>          uint64_t size = mask + 1;
>  
>          assert(size);
>  
> -        event.type = IOMMU_NOTIFIER_UNMAP;
> -        event.entry.iova = start;
> -        event.entry.addr_mask = mask;
> -        event.entry.target_as = &address_space_memory;
> -        event.entry.perm = IOMMU_NONE;
> -        /* This field is meaningless for unmap */
> -        event.entry.translated_addr = 0;
> -
> -        memory_region_notify_iommu_one(n, &event);
> +        map.iova = start;
> +        map.size = mask;
> +        if (iova_tree_find(as->iova_tree, &map)) {
> +            event.entry.iova = start;
> +            event.entry.addr_mask = mask;
> +            memory_region_notify_iommu_one(n, &event);
> +        }

Ah, one more thing: I think this path can also be triggered by notifiers
without the MAP event registered, whose iova tree will always be empty.  So
we may only do this for MAP notifiers, and then I'm not sure whether it'll
be worthwhile..
Jason Gunthorpe June 9, 2023, 1 a.m. UTC | #7
On Thu, Jun 08, 2023 at 03:53:23PM -0400, Peter Xu wrote:
> Though that does look slightly special, because the whole empty UNMAP
> region can be seen as a hole too; not sure when that -ENOENT will be useful
> if qemu should always bypass it anyway.  Indeed not a problem for qemu.

It sounds like it might be good to have a flag to unmap the whole
range regardless of contiguity.

Jason
Duan, Zhenzhong June 9, 2023, 3:41 a.m. UTC | #8
>On Thu, Jun 08, 2023 at 05:52:31PM +0800, Zhenzhong Duan wrote:
>> Commit 63b88968f1 ("intel-iommu: rework the page walk logic") added
>> logic to record mapped IOVA ranges so that we only send MAP or UNMAP
>> notifications when necessary. But there is still a corner case of
>> unnecessary UNMAPs.
>>
>> During invalidation, either domain or device selective, we only need
>> to unmap when there are recorded mapped IOVA ranges, presuming most
>> OSes allocate IOVA ranges contiguously, e.g. on x86, Linux sets up
>> mappings from 0xffffffff downwards.
>>
>> Strace shows each UNMAP ioctl taking 14us (0.000014s), and one
>> invalidation issues 28 such ioctl()s, as the two notifiers on x86 are
>> split into power-of-2 pieces.
>>
>> ioctl(48, VFIO_IOMMU_UNMAP_DMA, 0x7ffffd5c42f0) = 0 <0.000014>
>
>Thanks for the numbers, but for a fair comparison IMHO it needs to be a
>before/after comparison of the whole time used to unmap the AS.  It'll be
>great to have finer-grained measurements like each ioctl, but the total
>time used should be more important (especially to capture "after").  Side
>note: I don't think every UNMAP ioctl will take the same time; it should
>depend on whether a mapping exists.

Yes, but what we want to optimize out is the case of unmapping a
non-existent range.  Will show the time diff spent unmapping the AS.

>
>Actually it's hard to tell because this also depends on what's in the iova tree..
>but still at least we know how it works in some cases.
>
>>
>> The other purpose of this patch is to eliminate a noisy error log when
>> we work with IOMMUFD. It looks like the duplicate UNMAP call fails with
>> IOMMUFD while it always succeeds with the legacy container. This
>> behavior difference leads to the error log below with IOMMUFD:
>>
>> IOMMU_IOAS_UNMAP failed: No such file or directory
>> vfio_container_dma_unmap(0x562012d6b6d0, 0x0, 0x80000000) = -2 (No such file or directory)
>> IOMMU_IOAS_UNMAP failed: No such file or directory
>> vfio_container_dma_unmap(0x562012d6b6d0, 0x80000000, 0x40000000) = -2 (No such file or directory)
>> ...
>
>My gut feeling is that the major motivation is actually this (not the
>perf); tens of ~14us ioctls are really nothing on a rare event..

To be honest, yes.

Thanks
Zhenzhong

Duan, Zhenzhong June 9, 2023, 4:01 a.m. UTC | #9
>On Thu, Jun 08, 2023 at 05:52:31PM +0800, Zhenzhong Duan wrote:
>>      while (remain >= VTD_PAGE_SIZE) {
>> -        IOMMUTLBEvent event;
>>          uint64_t mask = dma_aligned_pow2_mask(start, end, s->aw_bits);
>>          uint64_t size = mask + 1;
>>
>>          assert(size);
>>
>> -        event.type = IOMMU_NOTIFIER_UNMAP;
>> -        event.entry.iova = start;
>> -        event.entry.addr_mask = mask;
>> -        event.entry.target_as = &address_space_memory;
>> -        event.entry.perm = IOMMU_NONE;
>> -        /* This field is meaningless for unmap */
>> -        event.entry.translated_addr = 0;
>> -
>> -        memory_region_notify_iommu_one(n, &event);
>> +        map.iova = start;
>> +        map.size = mask;
>> +        if (iova_tree_find(as->iova_tree, &map)) {
>> +            event.entry.iova = start;
>> +            event.entry.addr_mask = mask;
>> +            memory_region_notify_iommu_one(n, &event);
>> +        }
>
>Ah, one more thing: I think this path can also be triggered by notifiers
>without the MAP event registered, whose iova tree will always be empty.  So
>we may only do this for MAP notifiers, and then I'm not sure whether it'll
>be worthwhile..

Hmm, yes, my change would lead to vhost missing some invalidation requests
in the device-tlb disabled case, as its iova tree is empty.  Thanks for
pointing that out.

Let me collect the time diff spent unmapping the AS, for you to decide
whether it's still worthwhile or not.

Thanks
Zhenzhong
Duan, Zhenzhong June 9, 2023, 4:03 a.m. UTC | #10
>On Thu, Jun 08, 2023 at 11:11:15AM -0300, Jason Gunthorpe wrote:
>> On Thu, Jun 08, 2023 at 10:05:08AM -0400, Peter Xu wrote:
>>
>> > IIUC what VFIO does here is return success if unmapping over nothing,
>> > rather than failing like iommufd.  Curious (like JasonW) why that
>> > retval?  I'd assume that for returning "how much was unmapped" we can
>> > at least still return 0 for nothing.
>>
>> In iommufd maps are objects; you can only map or unmap entire
>> objects. The ability to batch-unmap objects by specifying a range
>> that spans many is something that was easy to do and that VFIO had,
>> but I'm not sure it is actually useful..
>>
>> So asking to unmap an object that is already known not to be mapped is
>> actually possibly racy, especially if you consider iommufd's support
>> for kernel-side IOVA allocation. It should not be done, or if it is
>> done, it should be protected by userspace locking.
>>
>> For VFIO, long long ago, it could unmap an IOVA page at a time - ie it
>> wasn't object-based. In that world it made some sense that the unmap
>> would 'succeed' as the end result was unmapped.
>>
>> > Are you suggesting that we can handle that on the QEMU side,
>> > special-casing -ENOENT for iommufd only (a question to Yi?).
>>
>> Yes, this can be done, ENOENT is reliably returned and qemu doesn't
>> use the kernel-side IOVA allocator.
>>
>> But if there are proper locks to prevent a map/unmap race, then
>> there should also be proper locks to check that there is no mapping in
>> the first place and avoid the kernel call..
>
>The problem is that IIRC the guest iommu driver can do smart things like
>batching invalidations, which means that by the time QEMU gets one from the
>guest OS it may no longer match a single mapped object.
>
>We can definitely look up every single object and explicitly unmap it, but
>that loses part of the point of the batching that the guest OS does.
>Logically QEMU can redirect that batched invalidation into one ioctl() to
>the host, rather than a lot of smaller ones.
>
>While for this specific patch - Zhenzhong/Yi, do you agree that we should
>just handle -ENOENT in the iommufd series (I assume it's still under work),
>so that this specific patch is only about the performance difference?

Yes, that makes sense.

Thanks
Zhenzhong
Duan, Zhenzhong June 9, 2023, 5:49 a.m. UTC | #11
>On Thu, Jun 08, 2023 at 01:27:50PM -0300, Jason Gunthorpe wrote:
>> On Thu, Jun 08, 2023 at 11:40:55AM -0400, Peter Xu wrote:
>>
>> > > But if there are proper locks to prevent a map/unmap race, then
>> > > there should also be proper locks to check that there is no mapping in
>> > > the first place and avoid the kernel call..
>> >
>> > The problem is that IIRC the guest iommu driver can do smart things
>> > like batching invalidations, which means that by the time QEMU gets
>> > one from the guest OS it may no longer match a single mapped object.
>>
>> qemu has to fix it. The kernel API is object-based, not page-based.
>> You cannot unmap portions of a prior mapping.
>>
>> I assume for this kind of emulation it is doing 4k objects because
>> it has no idea what size of mapping the client will use?
>
>MAP is fine: before notify()ing VFIO or anything else, qemu scans the
>pgtable and handles it at page size or huge page size, so it can be >4K but
>is always guest iommu pgsize aligned.
>
>I think we rely on the guest behaving right, so it should also always
>operate at that minimum size when mapped huge.  It shouldn't violate the
>"per-object" protocol of iommufd.
>
>IIUC the same applies to vfio type1v2 in that respect.
>
>It's more about UNMAP batching, but I assume iommufd is fine if it
>tolerates holes inside the range for that case.  The only difference, the
>"not exist" -ENOENT, seems to be just the same as before as long as QEMU
>treats it as 0 like before.
>
>Though that does look slightly special, because the whole empty UNMAP
>region can be seen as a hole too; not sure when that -ENOENT will be useful
>if qemu should always bypass it anyway.  Indeed not a problem for qemu.
>
>>
>> > We can definitely look up every single object and explicitly unmap it,
>> > but that loses part of the point of the batching that the guest OS does.
>>
>> You don't need every single object, but it would be faster to check
>> where things are mapped and then call the kernel correctly instead of
>> trying to iterate over the unmapped results.
>
>Maybe yes.  If so, it'll be great if Zhenzhong could attach some proof
>of that, and meanwhile drop the "iommufd UNMAP warnings" section from the
>commit message.

Seems vtd_page_walk_one() already works in the above way, checking mapping
changes and calling the kernel only for changed entries?

Thanks
Zhenzhong
Peter Xu June 9, 2023, 9:26 p.m. UTC | #12
On Fri, Jun 09, 2023 at 05:49:06AM +0000, Duan, Zhenzhong wrote:
> Seems vtd_page_walk_one() already works in the above way, checking mapping
> changes and calling the kernel only for changed entries?

Agreed in most cases, but the path this patch modifies is not?  E.g. it
happens in rare cases where we simply want to unmap everything (e.g. on a
system reset, or an invalid context entry)?

That's also why I'm curious whether the perf of this path matters at all
(and assuming we all agree that's the only goal now..), because afaiu it
doesn't really trigger in common paths.
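
(For context, the rare paths in question, as a rough call-chain sketch;
function names recalled from hw/i386/intel_iommu.c and approximate:)

/*
 * Approximate call chain for the "unmap everything" case:
 *
 *   vtd_reset()                                  -- system reset handler
 *     -> vtd_address_space_refresh_all(s)
 *          -> vtd_address_space_unmap_all(s)
 *               -> vtd_address_space_unmap(as, n)   -- the loop patched here
 *          -> vtd_switch_address_space_all(s)
 */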
Duan, Zhenzhong June 13, 2023, 2:37 a.m. UTC | #13
>On Fri, Jun 09, 2023 at 05:49:06AM +0000, Duan, Zhenzhong wrote:
>> Seems vtd_page_walk_one() already works in the above way, checking mapping
>> changes and calling the kernel only for changed entries?
>
>Agreed in most cases, but the path this patch modifies is not?  E.g. it
>happens in rare cases where we simply want to unmap everything (e.g. on a
>system reset, or an invalid context entry)?
Clear.

>
>That's also why I'm curious whether the perf of this path matters at all
>(and assuming we all agree that's the only goal now..), because afaiu it
>doesn't really trigger in common paths.
I'll collect performance data and reply back.

Thanks
Zhenzhong
Duan, Zhenzhong June 14, 2023, 9:38 a.m. UTC | #14
>>On Thu, Jun 08, 2023 at 05:52:31PM +0800, Zhenzhong Duan wrote:
>>>      while (remain >= VTD_PAGE_SIZE) {
>>> -        IOMMUTLBEvent event;
>>>          uint64_t mask = dma_aligned_pow2_mask(start, end, s->aw_bits);
>>>          uint64_t size = mask + 1;
>>>
>>>          assert(size);
>>>
>>> -        event.type = IOMMU_NOTIFIER_UNMAP;
>>> -        event.entry.iova = start;
>>> -        event.entry.addr_mask = mask;
>>> -        event.entry.target_as = &address_space_memory;
>>> -        event.entry.perm = IOMMU_NONE;
>>> -        /* This field is meaningless for unmap */
>>> -        event.entry.translated_addr = 0;
>>> -
>>> -        memory_region_notify_iommu_one(n, &event);
>>> +        map.iova = start;
>>> +        map.size = mask;
>>> +        if (iova_tree_find(as->iova_tree, &map)) {
>>> +            event.entry.iova = start;
>>> +            event.entry.addr_mask = mask;
>>> +            memory_region_notify_iommu_one(n, &event);
>>> +        }
>>
>>Ah, one more thing: I think this path can also be triggered by notifiers
>>without the MAP event registered, whose iova tree will always be empty.  So
>>we may only do this for MAP notifiers, and then I'm not sure whether it'll
>>be worthwhile..
>
>Hmm, yes, my change would lead to vhost missing some invalidation requests
>in the device-tlb disabled case, as its iova tree is empty.  Thanks for
>pointing that out.
>
>Let me collect the time diff spent unmapping the AS, for you to decide
>whether it's still worthwhile or not.
I used the changes below to collect the time spent:

@@ -3739,12 +3739,14 @@ VTDAddressSpace *vtd_find_add_as(IntelIOMMUState *s, PCIBus *bus,
 /* Unmap the whole range in the notifier's scope. */
 static void vtd_address_space_unmap(VTDAddressSpace *as, IOMMUNotifier *n)
 {
+    int64_t start_tv, delta_tv;
     hwaddr size, remain;
     hwaddr start = n->start;
     hwaddr end = n->end;
     IntelIOMMUState *s = as->iommu_state;
     DMAMap map;

+    start_tv = qemu_clock_get_us(QEMU_CLOCK_REALTIME);
     /*
      * Note: all the codes in this function has a assumption that IOVA
      * bits are no more than VTD_MGAW bits (which is restricted by
@@ -3793,6 +3795,9 @@ static void vtd_address_space_unmap(VTDAddressSpace *as, IOMMUNotifier *n)
     map.iova = n->start;
     map.size = size;
     iova_tree_remove(as->iova_tree, map);
+
+    delta_tv = qemu_clock_get_us(QEMU_CLOCK_REALTIME) - start_tv;
+    printf("************ delta_tv %" PRId64 " us **************\n", delta_tv);
 }

With legacy container (vfio-pci,host=81:11.1,id=vfio1,bus=root1):
Hotplug:
************ delta_tv 12 us **************
************ delta_tv 8 us **************
Unplug:
************ delta_tv 12 us **************
************ delta_tv 3 us **************

With this patch applied:
Hotplug: (no output)
Unplug:
************ delta_tv 2 us **************
************ delta_tv 1 us **************

With iommufd container (vfio-pci,host=81:11.1,id=vfio1,bus=root1,iommufd=iommufd0):
Hotplug:
************ delta_tv 25 us **************
************ delta_tv 23 us ************** 
Unplug:
************ delta_tv 15 us **************
************ delta_tv 5 us **************

With this patch applied:
Hotplug: (no output)
Unplug:
************ delta_tv 2 us **************
************ delta_tv 1 us **************

It looks like the benefit of this patch is negligible for both the legacy
container and iommufd.  I'd like to drop this patch as it makes no
difference. What's your opinion?

Thanks
Zhenzhong
Duan, Zhenzhong June 14, 2023, 9:47 a.m. UTC | #15
>On Fri, Jun 09, 2023 at 05:49:06AM +0000, Duan, Zhenzhong wrote:
>> Seems vtd_page_walk_one() already works in the above way, checking mapping
>> changes and calling the kernel only for changed entries?
>
>Agreed in most cases, but the path this patch modifies is not?  E.g. it
>happens in rare cases where we simply want to unmap everything (e.g. on a
>system reset, or an invalid context entry)?
>
>That's also why I'm curious whether the perf of this path matters at all
>(and assuming we all agree that's the only goal now..), because afaiu it
>doesn't really trigger in common paths.
I used the changes below to collect the time spent with the iommufd backend
during system reset.  Enable the TEST_UNMAP macro to unmap the iova tree
entries one by one; disable it to unmap the whole range in one go.

--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -3736,16 +3736,44 @@ VTDAddressSpace *vtd_find_add_as(IntelIOMMUState *s, PCIBus *bus,
     return vtd_dev_as;
 }

+static gboolean iova_tree_iterator1(DMAMap *map)
+{
+    static int cnt;
+    printf("**********dump iova tree %d: iova %lx, size %lx\n", ++cnt, map->iova, map->size);
+    return false;
+}
+
+//#define TEST_UNMAP
+#ifdef TEST_UNMAP
+static gboolean vtd_unmap_single(DMAMap *map, gpointer *private)
+{
+    IOMMUTLBEvent event;
+
+    event.type = IOMMU_NOTIFIER_UNMAP;
+    event.entry.iova = map->iova;
+    event.entry.addr_mask = map->size;
+    event.entry.target_as = &address_space_memory;
+    event.entry.perm = IOMMU_NONE;
+    event.entry.translated_addr = 0;
+
+    memory_region_notify_iommu_one((IOMMUNotifier *)private, &event);
+    return false;
+}
+#endif
+
 /* Unmap the whole range in the notifier's scope. */
 static void vtd_address_space_unmap(VTDAddressSpace *as, IOMMUNotifier *n)
 {
+    int64_t start_tv, delta_tv;
     hwaddr size, remain;
     hwaddr start = n->start;
     hwaddr end = n->end;
     IntelIOMMUState *s = as->iommu_state;
-    IOMMUTLBEvent event;
     DMAMap map;

+    iova_tree_foreach(as->iova_tree, iova_tree_iterator1);
+
+    start_tv = qemu_clock_get_us(QEMU_CLOCK_REALTIME);
     /*
      * Note: all the codes in this function has a assumption that IOVA
      * bits are no more than VTD_MGAW bits (which is restricted by
@@ -3763,6 +3791,13 @@ static void vtd_address_space_unmap(VTDAddressSpace *as, IOMMUNotifier *n)
     assert(start <= end);
     size = remain = end - start + 1;

+#ifdef TEST_UNMAP
+    map.iova = n->start;
+    map.size = size - 1;
+    iova_tree_foreach_range_data(as->iova_tree, &map, vtd_unmap_single,
+                                 (gpointer *)n);
+#else
+    IOMMUTLBEvent event;
     event.type = IOMMU_NOTIFIER_UNMAP;
     event.entry.target_as = &address_space_memory;
     event.entry.perm = IOMMU_NONE;
@@ -3788,6 +3823,7 @@ static void vtd_address_space_unmap(VTDAddressSpace *as, IOMMUNotifier *n)
     }

     assert(!remain);
+#endif

     trace_vtd_as_unmap_whole(pci_bus_num(as->bus),
                              VTD_PCI_SLOT(as->devfn),
@@ -3797,6 +3833,9 @@ static void vtd_address_space_unmap(VTDAddressSpace *as, IOMMUNotifier *n)
     map.iova = n->start;
     map.size = size - 1; /* Inclusive */
     iova_tree_remove(as->iova_tree, map);
+
+    delta_tv = qemu_clock_get_us(QEMU_CLOCK_REALTIME) - start_tv;
+    printf("************ delta_tv %" PRId64 " us **************\n", delta_tv);
 }


RESULT:
[   14.825015] reboot: Power down
**********dump iova tree 1: iova fffbe000, size fff
...
**********dump iova tree 66: iova fffff000, size fff
...
With TEST_UNMAP:
************ delta_tv 393 us **************
************ delta_tv 0 us **************

Without TEST_UNMAP:
************ delta_tv 364 us **************
************ delta_tv 2 us **************

It looks like there is no significant difference; unmapping the whole range
at once is a little better.  I also tried the legacy container; the result
is similar to the above:

With TEST_UNMAP:
************ delta_tv 325 us **************
************ delta_tv 0 us **************

Without TEST_UNMAP:
************ delta_tv 317 us **************
************ delta_tv 1 us **************

Thanks
Zhenzhong
Peter Xu June 14, 2023, 12:51 p.m. UTC | #16
On Wed, Jun 14, 2023 at 09:38:41AM +0000, Duan, Zhenzhong wrote:
> It looks like the benefit of this patch is negligible for both the legacy
> container and iommufd.  I'd like to drop this patch as it makes no
> difference. What's your opinion?

Thanks for the test results.  Sounds good here.

Patch

diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index dcc334060cd6..9e5ba81c89e2 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -3743,6 +3743,7 @@  static void vtd_address_space_unmap(VTDAddressSpace *as, IOMMUNotifier *n)
     hwaddr start = n->start;
     hwaddr end = n->end;
     IntelIOMMUState *s = as->iommu_state;
+    IOMMUTLBEvent event;
     DMAMap map;
 
     /*
@@ -3762,22 +3763,25 @@  static void vtd_address_space_unmap(VTDAddressSpace *as, IOMMUNotifier *n)
     assert(start <= end);
     size = remain = end - start + 1;
 
+    event.type = IOMMU_NOTIFIER_UNMAP;
+    event.entry.target_as = &address_space_memory;
+    event.entry.perm = IOMMU_NONE;
+    /* This field is meaningless for unmap */
+    event.entry.translated_addr = 0;
+
     while (remain >= VTD_PAGE_SIZE) {
-        IOMMUTLBEvent event;
         uint64_t mask = dma_aligned_pow2_mask(start, end, s->aw_bits);
         uint64_t size = mask + 1;
 
         assert(size);
 
-        event.type = IOMMU_NOTIFIER_UNMAP;
-        event.entry.iova = start;
-        event.entry.addr_mask = mask;
-        event.entry.target_as = &address_space_memory;
-        event.entry.perm = IOMMU_NONE;
-        /* This field is meaningless for unmap */
-        event.entry.translated_addr = 0;
-
-        memory_region_notify_iommu_one(n, &event);
+        map.iova = start;
+        map.size = mask;
+        if (iova_tree_find(as->iova_tree, &map)) {
+            event.entry.iova = start;
+            event.entry.addr_mask = mask;
+            memory_region_notify_iommu_one(n, &event);
+        }
 
         start += size;
         remain -= size;