
[V2,31/33] iommu/vt-d: Enable PCI/IMS

Message ID 20221121091328.184455059@linutronix.de
State New
Series genirq, PCI/MSI: Support for per device MSI and PCI/IMS - Part 3 implementation

Commit Message

Thomas Gleixner Nov. 21, 2022, 2:38 p.m. UTC
PCI/IMS works like PCI/MSI-X in the remapping. Just add the feature flag.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 drivers/iommu/intel/irq_remapping.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

Comments

Tian, Kevin Nov. 24, 2022, 3:17 a.m. UTC | #1
> From: Thomas Gleixner <tglx@linutronix.de>
> Sent: Monday, November 21, 2022 10:38 PM
> 
> PCI/IMS works like PCI/MSI-X in the remapping. Just add the feature flag.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  drivers/iommu/intel/irq_remapping.c |    4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> --- a/drivers/iommu/intel/irq_remapping.c
> +++ b/drivers/iommu/intel/irq_remapping.c
> @@ -1429,7 +1429,9 @@ static const struct irq_domain_ops intel
>  };
> 
>  static const struct msi_parent_ops dmar_msi_parent_ops = {
> -	.supported_flags	= X86_VECTOR_MSI_FLAGS_SUPPORTED | MSI_FLAG_MULTI_PCI_MSI,
> +	.supported_flags	= X86_VECTOR_MSI_FLAGS_SUPPORTED |
> +				  MSI_FLAG_MULTI_PCI_MSI |
> +				  MSI_FLAG_PCI_IMS,
>  	.prefix			= "IR-",
>  	.init_dev_msi_info	= msi_parent_init_dev_msi_info,
>  };

vIR is already available on vIOMMU today [1].

Fortunately both the Intel and AMD IOMMUs have a way to detect whether they are a vIOMMU.

For intel it's cap_caching_mode().

For AMD it's amd_iommu_np_cache.

Then MSI_FLAG_PCI_IMS should be set only on a physical IOMMU.

In the future, once we have a hypercall, it can be set on a vIOMMU too.

[1] https://lore.kernel.org/all/BL1PR11MB5271326D39DAB692F07587768C739@BL1PR11MB5271.namprd11.prod.outlook.com/
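
A minimal sketch of what this gating could look like on the Intel side, for illustration only: the two msi_parent_ops variants and the cap_caching_mode() check follow the suggestion above, while the helper name and the exact hook point (wherever ir_domain is set up) are assumptions, not code from this series. The AMD side could be keyed off amd_iommu_np_cache in the same way.

/*
 * Sketch only (assumed hook point and helper name): advertise PCI/IMS
 * only on a physical IOMMU. Caching mode is only reported by emulated
 * (virtual) IOMMUs, so cap_caching_mode() distinguishes the two cases.
 */
static const struct msi_parent_ops dmar_msi_parent_ops = {
	.supported_flags	= X86_VECTOR_MSI_FLAGS_SUPPORTED |
				  MSI_FLAG_MULTI_PCI_MSI |
				  MSI_FLAG_PCI_IMS,
	.prefix			= "IR-",
	.init_dev_msi_info	= msi_parent_init_dev_msi_info,
};

/* Same as above minus MSI_FLAG_PCI_IMS, for virtualized IOMMUs */
static const struct msi_parent_ops virt_dmar_msi_parent_ops = {
	.supported_flags	= X86_VECTOR_MSI_FLAGS_SUPPORTED |
				  MSI_FLAG_MULTI_PCI_MSI,
	.prefix			= "vIR-",
	.init_dev_msi_info	= msi_parent_init_dev_msi_info,
};

/* Hypothetical helper; the selection would happen where ir_domain is created */
static void intel_ir_select_msi_parent(struct intel_iommu *iommu)
{
	if (cap_caching_mode(iommu->cap))
		iommu->ir_domain->msi_parent_ops = &virt_dmar_msi_parent_ops;
	else
		iommu->ir_domain->msi_parent_ops = &dmar_msi_parent_ops;
}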
Thomas Gleixner Nov. 24, 2022, 9:37 a.m. UTC | #2
On Thu, Nov 24 2022 at 03:17, Kevin Tian wrote:
>>  static const struct msi_parent_ops dmar_msi_parent_ops = {
>> -	.supported_flags	= X86_VECTOR_MSI_FLAGS_SUPPORTED | MSI_FLAG_MULTI_PCI_MSI,
>> +	.supported_flags	= X86_VECTOR_MSI_FLAGS_SUPPORTED |
>> +				  MSI_FLAG_MULTI_PCI_MSI |
>> +				  MSI_FLAG_PCI_IMS,
>>  	.prefix			= "IR-",
>>  	.init_dev_msi_info	= msi_parent_init_dev_msi_info,
>>  };
>
> vIR is already available on vIOMMU today [1].
>
> Fortunately both the Intel and AMD IOMMUs have a way to detect whether they are a vIOMMU.
>
> For intel it's cap_caching_mode().
>
> For AMD it's amd_iommu_np_cache.
>
> Then MSI_FLAG_PCI_IMS should be set only on a physical IOMMU.

Ok. Let me fix that then.

But that made me read back some more.

Jason said that the envisioned Mellanox use case does not depend on the
IOMMU because the card itself has one which takes care of the
protections.

How are we going to resolve that dilemma?

Thanks,

        tglx
Jason Gunthorpe Nov. 24, 2022, 1:14 p.m. UTC | #3
On Thu, Nov 24, 2022 at 10:37:53AM +0100, Thomas Gleixner wrote:
> On Thu, Nov 24 2022 at 03:17, Kevin Tian wrote:
> >>  static const struct msi_parent_ops dmar_msi_parent_ops = {
> >> -	.supported_flags	= X86_VECTOR_MSI_FLAGS_SUPPORTED | MSI_FLAG_MULTI_PCI_MSI,
> >> +	.supported_flags	= X86_VECTOR_MSI_FLAGS_SUPPORTED |
> >> +				  MSI_FLAG_MULTI_PCI_MSI |
> >> +				  MSI_FLAG_PCI_IMS,
> >>  	.prefix			= "IR-",
> >>  	.init_dev_msi_info	= msi_parent_init_dev_msi_info,
> >>  };
> >
> > vIR is already available on vIOMMU today [1].
> >
> > Fortunately both the Intel and AMD IOMMUs have a way to detect whether they are a vIOMMU.
> >
> > For intel it's cap_caching_mode().
> >
> > For AMD it's amd_iommu_np_cache.
> >
> > Then MSI_FLAG_PCI_IMS should be set only on a physical IOMMU.
> 
> Ok. Let me fix that then.
> 
> But that made me read back some more.
> 
> Jason said that the envisioned Mellanox use case does not depend on the
> IOMMU because the card itself has one which takes care of the
> protections.

Right, but that doesn't mean we need the physical iommu turned
off. Setting the mlx pci device to identity mode is usually enough to
get back to full performance.

> How are we going to resolve that dilemma?

The outcome is that we don't have a strategy right now to make IMS work in
VMs. This series is all about making it work on physical machines, which
has to be a good first step.

I'm hoping the OCP work stream on SIOV will tackle how to fix the
interrupt problems. Some of the ideas I've seen could be formed into
something that would work in a VM.

Jason
Thomas Gleixner Nov. 24, 2022, 1:21 p.m. UTC | #4
On Thu, Nov 24 2022 at 09:14, Jason Gunthorpe wrote:
> On Thu, Nov 24, 2022 at 10:37:53AM +0100, Thomas Gleixner wrote:
>> Jason said that the envisioned Mellanox use case does not depend on the
>> IOMMU because the card itself has one which takes care of the
>> protections.
>
> Right, but that doesn't mean we need the physical iommu turned
> off. Setting the mlx pci device to identity mode is usually enough to
> get back to full performance.

Ok.

>> How are we going to resolve that dilemma?
>
> The outcome is that we don't have a strategy right now to make IMS work in
> VMs. This series is all about making it work on physical machines, which
> has to be a good first step.
>
> I'm hoping the OCP work stream on SIOV will tackle how to fix the
> interrupt problems. Some of the ideas I've seen could be formed into
> something that would work in a VM.

Fair enough.

Let me put the limitation into effect then.

Thanks,

        tglx
Tian, Kevin Nov. 28, 2022, 1:54 a.m. UTC | #5
> From: Thomas Gleixner <tglx@linutronix.de>
> Sent: Thursday, November 24, 2022 9:21 PM
> 
> On Thu, Nov 24 2022 at 09:14, Jason Gunthorpe wrote:
> > On Thu, Nov 24, 2022 at 10:37:53AM +0100, Thomas Gleixner wrote:
> >> Jason said that the envisioned Mellanox use case does not depend on the
> >> IOMMU because the card itself has one which takes care of the
> >> protections.
> >
> > Right, but that doesn't mean we need the physical iommu turned
> > off. Setting the mlx pci device to identity mode is usually enough to
> > get back to full performance.
> 
> Ok.

Yes. IR can be enabled orthogonally to the DMA mapping mode in the IOMMU.

> 
> >> How are we going to resolve that dilemma?
> >
> > The outcome is that we don't have a strategy right now to make IMS work in
> > VMs. This series is all about making it work on physical machines, which
> > has to be a good first step.

Yes, that is the point. As long as IMS is disabled in the guest, it's already
a good first step for the moment.

> >
> > I'm hoping the OCP work stream on SIOV will tackle how to fix the
> > interrupt problems. Some of the ideas I've seen could be formed into
> > something that would work in a VM.
> 
> Fair enough.
> 
> Let me put the limitation into effect then.
> 
> Thanks,
> 
>         tglx

Patch

--- a/drivers/iommu/intel/irq_remapping.c
+++ b/drivers/iommu/intel/irq_remapping.c
@@ -1429,7 +1429,9 @@  static const struct irq_domain_ops intel
 };
 
 static const struct msi_parent_ops dmar_msi_parent_ops = {
-	.supported_flags	= X86_VECTOR_MSI_FLAGS_SUPPORTED | MSI_FLAG_MULTI_PCI_MSI,
+	.supported_flags	= X86_VECTOR_MSI_FLAGS_SUPPORTED |
+				  MSI_FLAG_MULTI_PCI_MSI |
+				  MSI_FLAG_PCI_IMS,
 	.prefix			= "IR-",
 	.init_dev_msi_info	= msi_parent_init_dev_msi_info,
 };