Message ID: 20200528030240.16024-3-jonathan.derrick@intel.com
State: New
Series: VMD endpoint passthrough support
On Wed, May 27, 2020 at 11:02:39PM -0400, Jon Derrick wrote:
> Versions of VMD with the Host Physical Address shadow register use this
> register to calculate the bus address offset needed to do guest
> passthrough of the domain. This register shadows the Host Physical
> Address registers including the resource type bits. After calculating
> the offset, the extra resource type bits lead to the VMD resources being
> over-provisioned at the front and under-provisioned at the back.
>
> Example:
> pci 10000:80:02.0: reg 0x10: [mem 0xf801fffc-0xf803fffb 64bit]
>
> Expected:
> pci 10000:80:02.0: reg 0x10: [mem 0xf8020000-0xf803ffff 64bit]
>
> If other devices are mapped in the over-provisioned front, it could lead
> to resource conflict issues with VMD or those devices.
>
> Fixes: a1a30170138c9 ("PCI: vmd: Fix shadow offsets to reflect spec changes")
> Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
> ---
>  drivers/pci/controller/vmd.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)

Hi Jon,

it looks like I can take this patch for v5.8 whereas patch 2 depends
on the QEMU changes acceptance and should probably wait.

Please let me know your thoughts asap and I will try to at least
squeeze this patch in.

Lorenzo

> diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
> index dac91d6..e386d4e 100644
> --- a/drivers/pci/controller/vmd.c
> +++ b/drivers/pci/controller/vmd.c
> @@ -445,9 +445,11 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
>  		if (!membar2)
>  			return -ENOMEM;
>  		offset[0] = vmd->dev->resource[VMD_MEMBAR1].start -
> -					readq(membar2 + MB2_SHADOW_OFFSET);
> +					(readq(membar2 + MB2_SHADOW_OFFSET) &
> +					 PCI_BASE_ADDRESS_MEM_MASK);
>  		offset[1] = vmd->dev->resource[VMD_MEMBAR2].start -
> -					readq(membar2 + MB2_SHADOW_OFFSET + 8);
> +					(readq(membar2 + MB2_SHADOW_OFFSET + 8) &
> +					 PCI_BASE_ADDRESS_MEM_MASK);
>  		pci_iounmap(vmd->dev, membar2);
>  	}
>  }
> --
> 1.8.3.1
On Fri, 2020-05-29 at 11:33 +0100, Lorenzo Pieralisi wrote:
> On Wed, May 27, 2020 at 11:02:39PM -0400, Jon Derrick wrote:
> [...]
>
> Hi Jon,
>
> it looks like I can take this patch for v5.8 whereas patch 2 depends
> on the QEMU changes acceptance and should probably wait.
>
> Please let me know your thoughts asap and I will try to at least
> squeeze this patch in.
>
> Lorenzo

Hi Lorenzo,

This is fine. Please take Patch 1.
Patch 2 is harmless without the QEMU changes, but may always need a
different approach.

Best,
jon
On Fri, May 29, 2020 at 03:53:37PM +0000, Derrick, Jonathan wrote:
> On Fri, 2020-05-29 at 11:33 +0100, Lorenzo Pieralisi wrote:
> [...]
>
> Hi Lorenzo,
>
> This is fine. Please take Patch 1.
> Patch 2 is harmless without the QEMU changes, but may always need a
> different approach.

Pulled patch 1 into pci/vmd, thanks.

Lorenzo
On Fri, 2020-05-29 at 17:18 +0100, Lorenzo Pieralisi wrote:
> On Fri, May 29, 2020 at 03:53:37PM +0000, Derrick, Jonathan wrote:
> [...]
>
> Pulled patch 1 into pci/vmd, thanks.
>
> Lorenzo

Hi Lorenzo,

Alex has pr-ed the QEMU patch [1]
Is it too late to pull patch 2/2 for v5.8?

[1] https://github.com/awilliam/qemu-vfio/releases/tag/vfio-update-20200611.0
On Thu, Jun 11, 2020 at 09:16:48PM +0000, Derrick, Jonathan wrote:

[...]

> Hi Lorenzo,
>
> Alex has pr-ed the QEMU patch [1]
> Is it too late to pull patch 2/2 for v5.8?

I think it is - I don't know if Bjorn planned a second PR for this
merge window, if not it is v5.9 material I am afraid.

Thanks,
Lorenzo

> [1]
> https://github.com/awilliam/qemu-vfio/releases/tag/vfio-update-20200611.0
On Fri, 2020-06-12 at 14:54 +0100, Lorenzo Pieralisi wrote:
> On Thu, Jun 11, 2020 at 09:16:48PM +0000, Derrick, Jonathan wrote:
> [...]
>
> I think it is - I don't know if Bjorn planned a second PR for this
> merge window, if not it is v5.9 material I am afraid.
>
> Thanks,
> Lorenzo

No problem

Jon
diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index dac91d6..e386d4e 100644
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -445,9 +445,11 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
 		if (!membar2)
 			return -ENOMEM;
 		offset[0] = vmd->dev->resource[VMD_MEMBAR1].start -
-					readq(membar2 + MB2_SHADOW_OFFSET);
+					(readq(membar2 + MB2_SHADOW_OFFSET) &
+					 PCI_BASE_ADDRESS_MEM_MASK);
 		offset[1] = vmd->dev->resource[VMD_MEMBAR2].start -
-					readq(membar2 + MB2_SHADOW_OFFSET + 8);
+					(readq(membar2 + MB2_SHADOW_OFFSET + 8) &
+					 PCI_BASE_ADDRESS_MEM_MASK);
 		pci_iounmap(vmd->dev, membar2);
 	}
 }
Versions of VMD with the Host Physical Address shadow register use this
register to calculate the bus address offset needed to do guest
passthrough of the domain. This register shadows the Host Physical
Address registers including the resource type bits. After calculating
the offset, the extra resource type bits lead to the VMD resources being
over-provisioned at the front and under-provisioned at the back.

Example:
pci 10000:80:02.0: reg 0x10: [mem 0xf801fffc-0xf803fffb 64bit]

Expected:
pci 10000:80:02.0: reg 0x10: [mem 0xf8020000-0xf803ffff 64bit]

If other devices are mapped in the over-provisioned front, it could lead
to resource conflict issues with VMD or those devices.

Fixes: a1a30170138c9 ("PCI: vmd: Fix shadow offsets to reflect spec changes")
Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
---
 drivers/pci/controller/vmd.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)