
PCI/P2PDMA: Add Intel SkyLake-E to the whitelist

Message ID 1575669165-31697-1-git-send-email-abaloyan@gigaio.com
State New

Commit Message

Armen Baloyan Dec. 6, 2019, 9:52 p.m. UTC
Intel SkyLake-E was successfully tested for p2pdma
transactions spanning over a host bridge and PCI
bridge with IOMMU on.

Signed-off-by: Armen Baloyan <abaloyan@gigaio.com>
---
 drivers/pci/p2pdma.c | 3 +++
 1 file changed, 3 insertions(+)

Comments

Logan Gunthorpe Dec. 6, 2019, 9:53 p.m. UTC | #1
On 2019-12-06 2:52 p.m., Armen Baloyan wrote:
> Intel SkyLake-E was successfully tested for p2pdma
> transactions spanning over a host bridge and PCI
> bridge with IOMMU on.
> 
> Signed-off-by: Armen Baloyan <abaloyan@gigaio.com>

Reviewed-by: Logan Gunthorpe <logang@deltatee.com>

Thanks!

> ---
>  drivers/pci/p2pdma.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
> index 79fcb8d8f1b1..9f8e9df8f4ca 100644
> --- a/drivers/pci/p2pdma.c
> +++ b/drivers/pci/p2pdma.c
> @@ -324,6 +324,9 @@ static const struct pci_p2pdma_whitelist_entry {
>  	/* Intel Xeon E7 v3/Xeon E5 v3/Core i7 */
>  	{PCI_VENDOR_ID_INTEL,	0x2f00, REQ_SAME_HOST_BRIDGE},
>  	{PCI_VENDOR_ID_INTEL,	0x2f01, REQ_SAME_HOST_BRIDGE},
> +	/* Intel SkyLake-E. */
> +	{PCI_VENDOR_ID_INTEL,	0x2030, 0},
> +	{PCI_VENDOR_ID_INTEL,	0x2020, 0},
>  	{}
>  };
>  
>
Bjorn Helgaas Dec. 10, 2019, 9:22 p.m. UTC | #2
On Fri, Dec 06, 2019 at 01:52:45PM -0800, Armen Baloyan wrote:
> Intel SkyLake-E was successfully tested for p2pdma
> transactions spanning over a host bridge and PCI
> bridge with IOMMU on.
> 
> Signed-off-by: Armen Baloyan <abaloyan@gigaio.com>

Applied with Logan's reviewed-by to pci/p2pdma for v5.6, thanks!

Logan, the commit log for 494d63b0d5d0 ("PCI/P2PDMA: Whitelist some
Intel host bridges") says:

    Intel devices do not have good support for P2P requests that span different
    host bridges as the transactions will cross the QPI/UPI bus and this does
    not perform well.

    Therefore, enable support for these devices only if the host bridges match.

    Add Intel devices that have been tested and are known to work. There are
    likely many others out there that will need to be tested and added.

That suggests it's possible that P2P DMA may actually work but with
poor performance, and that you only want to add host bridges that work
with good performance to the whitelist.

Armen found that it *works*, but I have no idea what the performance
was.  Do you care about the performance, or is this a simple "works /
doesn't work" test?
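
To make the flag semantics concrete, here is a minimal sketch of how a
flag-aware whitelist lookup can behave.  The helper and array names below
are illustrative, not necessarily the exact ones in drivers/pci/p2pdma.c;
only REQ_SAME_HOST_BRIDGE and the entry layout come from the patch itself:

/*
 * Sketch only: decide whether the root device below a host bridge is
 * whitelisted for P2PDMA.  Helper/array names are illustrative.
 */
static bool p2pdma_root_whitelisted(struct pci_dev *root,
				    bool same_host_bridge)
{
	const struct pci_p2pdma_whitelist_entry *entry;

	for (entry = pci_p2pdma_whitelist; entry->vendor; entry++) {
		if (root->vendor != entry->vendor ||
		    root->device != entry->device)
			continue;

		/*
		 * REQ_SAME_HOST_BRIDGE entries only allow P2PDMA when
		 * both endpoints sit under the same host bridge; entries
		 * with a zero flags word, such as the SkyLake-E ones
		 * added here, are accepted in the cross-bridge case too.
		 */
		if ((entry->flags & REQ_SAME_HOST_BRIDGE) && !same_host_bridge)
			return false;

		return true;
	}

	return false;
}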

> ---
>  drivers/pci/p2pdma.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
> index 79fcb8d8f1b1..9f8e9df8f4ca 100644
> --- a/drivers/pci/p2pdma.c
> +++ b/drivers/pci/p2pdma.c
> @@ -324,6 +324,9 @@ static const struct pci_p2pdma_whitelist_entry {
>  	/* Intel Xeon E7 v3/Xeon E5 v3/Core i7 */
>  	{PCI_VENDOR_ID_INTEL,	0x2f00, REQ_SAME_HOST_BRIDGE},
>  	{PCI_VENDOR_ID_INTEL,	0x2f01, REQ_SAME_HOST_BRIDGE},
> +	/* Intel SkyLake-E. */
> +	{PCI_VENDOR_ID_INTEL,	0x2030, 0},
> +	{PCI_VENDOR_ID_INTEL,	0x2020, 0},
>  	{}
>  };
>  
> -- 
> 2.16.5
>
Logan Gunthorpe Dec. 10, 2019, 9:49 p.m. UTC | #3
On 2019-12-10 2:22 p.m., Bjorn Helgaas wrote:
> On Fri, Dec 06, 2019 at 01:52:45PM -0800, Armen Baloyan wrote:
>> Intel SkyLake-E was successfully tested for p2pdma
>> transactions spanning over a host bridge and PCI
>> bridge with IOMMU on.
>>
>> Signed-off-by: Armen Baloyan <abaloyan@gigaio.com>
> 
> Applied with Logan's reviewed-by to pci/p2pdma for v5.6, thanks!
> 
> Logan, the commit log for 494d63b0d5d0 ("PCI/P2PDMA: Whitelist some
> Intel host bridges") says:
> 
>     Intel devices do not have good support for P2P requests that span different
>     host bridges as the transactions will cross the QPI/UPI bus and this does
>     not perform well.
> 
>     Therefore, enable support for these devices only if the host bridges match.
> 
>     Add Intel devices that have been tested and are known to work. There are
>     likely many others out there that will need to be tested and added.
> 
> That suggests it's possible that P2P DMA may actually work but with
> poor performance, and that you only want to add host bridges that work
> with good performance to the whitelist.
> 
> Armen found that it *works*, but I have no idea what the performance
> was.  Do you care about the performance, or is this a simple "works /
> doesn't work" test?

Armen and I discussed this off list and things are a bit more
complicated than I'd like, but I think this is still the right way to go
until we have more evidence:

With older SandyBridge devices, I know that if a P2P transaction crosses
the QPI bus between sockets, the performance will be unacceptable, which
is why I added the current check so that P2P transactions will not cross
between host bridges.

The SkyLake device Armen tested is not a multi-socket configuration,
but the device internally has multiple host bridges. Armen tested the
performance across two of these host bridges and is relying on that
working, so the REQ_SAME_HOST_BRIDGE flag cannot be set for these
entries.

What we don't know is whether P2P transactions across a multi-socket
SkyLake platform will perform well. And if they don't, I don't at this
time know how we can differentiate between the host bridges on the same
socket and those on other sockets.

So IMO, until someone comes forward saying that a particular SkyLake
platform doesn't work well and tells us how to differentiate them,
Armen's patch is the best we can do.
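
For reference, a minimal sketch of the same-host-bridge property the
current check relies on (the helper names here are made up for
illustration; the in-tree code establishes this as part of its P2P
distance calculation):

/* Sketch only: two endpoints share a host bridge iff they share a root bus. */
static struct pci_bus *root_bus_of(struct pci_dev *dev)
{
	struct pci_bus *bus = dev->bus;

	while (bus->parent)
		bus = bus->parent;

	return bus;
}

static bool devices_share_host_bridge(struct pci_dev *a, struct pci_dev *b)
{
	return root_bus_of(a) == root_bus_of(b);
}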

Logan

Patch

diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index 79fcb8d8f1b1..9f8e9df8f4ca 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -324,6 +324,9 @@ static const struct pci_p2pdma_whitelist_entry {
 	/* Intel Xeon E7 v3/Xeon E5 v3/Core i7 */
 	{PCI_VENDOR_ID_INTEL,	0x2f00, REQ_SAME_HOST_BRIDGE},
 	{PCI_VENDOR_ID_INTEL,	0x2f01, REQ_SAME_HOST_BRIDGE},
+	/* Intel SkyLake-E. */
+	{PCI_VENDOR_ID_INTEL,	0x2030, 0},
+	{PCI_VENDOR_ID_INTEL,	0x2020, 0},
 	{}
 };