
[kernel,v3,9/9] powerpc/powernv/npu: Enable NVLink pass through

Message ID 1460450270-42354-10-git-send-email-aik@ozlabs.ru (mailing list archive)
State Superseded

Commit Message

Alexey Kardashevskiy April 12, 2016, 8:37 a.m. UTC
IBM POWER8 NVLink systems come with Tesla K40-ish GPUs, each of which
also has a couple of high-speed links (NVLinks). The interface to these
links is exposed as an emulated PCI bridge which is included in the same
IOMMU group as the corresponding GPU.

In the kernel, NPUs get a separate PHB of the PNV_PHB_NPU type and a PE.

In order to make these links work when the GPU is passed through to
a guest, these bridges need to be passed through as well; otherwise
performance will degrade.

This implements and exports an API to manage NPU state with regard to
VFIO; it replicates iommu_table_group_ops.

This defines a new pnv_pci_ioda2_npu_ops which is assigned to
the IODA2 bridge if there are NPUs for a GPU on the bridge.
The new callbacks call the default IODA2 callbacks plus the new NPU API.
This adds a gpe_table_group_to_npe() helper to find the NPU PE for an
IODA2 table_group; it is not expected to fail as the helper is only
called from pnv_pci_ioda2_npu_ops.

This adds a pnv_pci_npu_setup_iommu() helper which adds NPUs to
the GPU's IOMMU group if any are found. It uses device-tree helpers
to look up the "ibm,gpu" property, which holds a phandle of the
corresponding GPU.

This adds an additional loop over PEs in pnv_ioda_setup_dma() as the
main loop skips NPU PEs because they do not have 32-bit DMA segments.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
v3:
* moved NPU-to-GPU IOMMU grouping later after all PHBs are discovered
* removed hack to make iommu_add_device() work, iommu_group_add_device()
is used instead
* cleanup in gpe_table_group_to_npe_cb()

v2:
* reimplemented to support NPU + GPU in the same group
* merged "powerpc/powernv/npu: Add NPU devices to IOMMU group" and
"powerpc/powernv/npu: Enable passing through via VFIO" into this patch
---
 arch/powerpc/platforms/powernv/npu-dma.c  | 126 ++++++++++++++++++++++++++++++
 arch/powerpc/platforms/powernv/pci-ioda.c | 105 +++++++++++++++++++++++++
 arch/powerpc/platforms/powernv/pci.h      |   6 ++
 3 files changed, 237 insertions(+)
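The callback-wrapping scheme in this patch (each NPU-aware callback runs the default IODA2 callback first, then its NPU counterpart, and rolls back on failure) can be sketched in plain C. This is a hypothetical user-space model with stubbed-out backends for illustration only; none of these names are kernel API:

```c
#include <assert.h>

/* Fake "hardware" state and a fault-injection switch for the sketch. */
static int ioda2_window_set, npu_window_set;
static int npu_should_fail;

static long ioda2_set_window(void)   { ioda2_window_set = 1; return 0; }
static long ioda2_unset_window(void) { ioda2_window_set = 0; return 0; }

static long npu_set_window(void)
{
	if (npu_should_fail)
		return -1;
	npu_window_set = 1;
	return 0;
}

/*
 * Models the shape of pnv_pci_ioda2_npu_set_window(): program the GPU
 * (IODA2) window first, then mirror it on the NPU; if the NPU half
 * fails, undo the IODA2 half so both PEs stay consistent.
 */
static long ioda2_npu_set_window(void)
{
	long ret = ioda2_set_window();

	if (ret)
		return ret;

	ret = npu_set_window();
	if (ret)
		ioda2_unset_window();

	return ret;
}
```

The same pattern applies to unset_window and the ownership callbacks; only the order and rollback direction differ.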

Comments

kernel test robot April 12, 2016, 9:49 a.m. UTC | #1
Hi Alexey,

[auto build test ERROR on powerpc/next]
[also build test ERROR on v4.6-rc3 next-20160412]
[if your patch is applied to the wrong git tree, please drop us a note to help improving the system]

url:    https://github.com/0day-ci/linux/commits/Alexey-Kardashevskiy/powerpc-powernv-npu-Enable-PCI-pass-through-for-NVLink/20160412-165337
base:   https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
config: powerpc-defconfig (attached as .config)
reproduce:
        wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=powerpc 

All errors (new ones prefixed by >>):

   arch/powerpc/platforms/powernv/pci-ioda.c: In function 'pnv_pci_ioda_setup_DMA':
>> arch/powerpc/platforms/powernv/pci-ioda.c:3125:29: error: 'pnv_pci_ioda2_npu_ops' undeclared (first use in this function)
        gpe->table_group.ops = &pnv_pci_ioda2_npu_ops;
                                ^
   arch/powerpc/platforms/powernv/pci-ioda.c:3125:29: note: each undeclared identifier is reported only once for each function it appears in

vim +/pnv_pci_ioda2_npu_ops +3125 arch/powerpc/platforms/powernv/pci-ioda.c

  3119			if (phb->type != PNV_PHB_NPU)
  3120				continue;
  3121	
  3122			list_for_each_entry(pe, &phb->ioda.pe_dma_list, dma_link) {
  3123				gpe = pnv_pci_npu_setup_iommu(pe);
  3124				if (gpe)
> 3125					gpe->table_group.ops = &pnv_pci_ioda2_npu_ops;
  3126			}
  3127		}
  3128	}

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation
David Gibson April 15, 2016, 4:40 a.m. UTC | #2
On Tue, Apr 12, 2016 at 06:37:50PM +1000, Alexey Kardashevskiy wrote:
> [... commit message and earlier hunks snipped ...]
> +/* Switch ownership from platform code to external user (e.g. VFIO) */
> +void pnv_npu_take_ownership(struct pnv_ioda_pe *npe)
> +{
> +	struct pnv_phb *phb = npe->phb;
> +	int64_t ret;
> +
> +	if (npe->table_group.tables[0]) {
> +		/* Disable 32bit window */
> +		pnv_pci_unlink_table_and_group(npe->table_group.tables[0],
> +				&npe->table_group);
> +		npe->table_group.tables[0] = NULL;
> +		ret = opal_pci_map_pe_dma_window(phb->opal_id, npe->pe_number,
> +				npe->pe_number,
> +				0/* levels */, 0/* table address */,
> +				0/* table size */, 0/* page size */);
> +	} else {
> +		/* Disable bypass */
> +		ret = opal_pci_map_pe_dma_window_real(phb->opal_id,
> +				npe->pe_number, npe->pe_number,
> +				0 /* bypass base */, 0);
> +	}

It's not immediately obvious to me why this is an if/else.  I'm
assuming that the way the kernel uses the NPU IOMMU it always either
has a 32-bit DMA window, or it's in bypass mode.  Is that inherent to
the way the hardware works, or just the way Linux uses it?

I'm just worried that this could open an exploitable hole if the host
kernel ever changes so that it could have bypass windows and TCE
windows simultaneously active.  Is there any way to unconditionally
disable bypass *and* disable any existing DMA window?

Apart from that nit,

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Alistair Popple April 18, 2016, 1:52 a.m. UTC | #3
Hi David,

On Fri, 15 Apr 2016 14:40:20 David Gibson wrote:
> On Tue, Apr 12, 2016 at 06:37:50PM +1000, Alexey Kardashevskiy wrote:
> > [... commit message and earlier hunks snipped ...]
> > +/* Switch ownership from platform code to external user (e.g. VFIO) */
> > +void pnv_npu_take_ownership(struct pnv_ioda_pe *npe)
> > +{
> > +	struct pnv_phb *phb = npe->phb;
> > +	int64_t ret;
> > +
> > +	if (npe->table_group.tables[0]) {
> > +		/* Disable 32bit window */
> > +		pnv_pci_unlink_table_and_group(npe->table_group.tables[0],
> > +				&npe->table_group);
> > +		npe->table_group.tables[0] = NULL;
> > +		ret = opal_pci_map_pe_dma_window(phb->opal_id, npe->pe_number,
> > +				npe->pe_number,
> > +				0/* levels */, 0/* table address */,
> > +				0/* table size */, 0/* page size */);
> > +	} else {
> > +		/* Disable bypass */
> > +		ret = opal_pci_map_pe_dma_window_real(phb->opal_id,
> > +				npe->pe_number, npe->pe_number,
> > +				0 /* bypass base */, 0);
> > +	}
> 
> It's not immediately obvious to me why this is an if/else.  I'm
> assuming that the way the kernel uses the NPU IOMMU it always either
> has a 32-bit DMA window, or it's in bypass mode.  Is that inherent to
> the way the hardware works, or just the way Linux uses it?

Unlike the IODA2, the NPU has only a single TVE/window, meaning the NPU IOMMU
is either disabled, in bypass, or has a 32-bit DMA window. Alexey can comment
on the if/else, but I agree that it isn't strictly necessary: calling either
map_pe_dma_window() or map_pe_dma_window_real() with the arguments in both the
if and else cases will zero the TVE, effectively disabling DMA.

> I'm just worried that this could open an exploitable hole if the host
> kernel ever changes so that it could have bypass windows and TCE
> windows simultaneously active.  Is there any way to unconditionally
> disable bypass *and* disable any existing DMA window?

I'm not a VFIO expert, but from the perspective of the NPU HW this situation
couldn't exist, for the reasons described above. Calling
opal_pci_map_pe_dma_window/_real() with either a zero table size or a zero
PCI memory size will unconditionally disable all DMA windows for that PE# on
the NPU.

- Alistair
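The single-TVE behaviour described above can be modelled as a toy state machine. This is an illustrative user-space sketch, not OPAL code, and the names are made up: with one TVE per PE, bypass and a TCE window can never be active at the same time, and a zero-sized mapping always lands in the disabled state.

```c
#include <assert.h>

/* One TVE per NPU PE: disabled, a TCE window, or bypass - never two at once. */
enum tve_state { TVE_DISABLED, TVE_WINDOW, TVE_BYPASS };

static enum tve_state tve = TVE_DISABLED;

/* Models opal_pci_map_pe_dma_window(): a zero table size disables DMA. */
static void map_pe_dma_window(unsigned long table_size)
{
	tve = table_size ? TVE_WINDOW : TVE_DISABLED;
}

/* Models opal_pci_map_pe_dma_window_real(): a zero bypass size disables DMA. */
static void map_pe_dma_window_real(unsigned long bypass_size)
{
	tve = bypass_size ? TVE_BYPASS : TVE_DISABLED;
}
```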
 
> Apart from that nit,
> 
> Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
> 
>
Alexey Kardashevskiy April 19, 2016, 1:31 a.m. UTC | #4
On 04/18/2016 11:52 AM, Alistair Popple wrote:
> Hi David,
>
> On Fri, 15 Apr 2016 14:40:20 David Gibson wrote:
>> On Tue, Apr 12, 2016 at 06:37:50PM +1000, Alexey Kardashevskiy wrote:
>>> [... commit message and earlier hunks snipped ...]
>>> +/* Switch ownership from platform code to external user (e.g. VFIO) */
>>> +void pnv_npu_take_ownership(struct pnv_ioda_pe *npe)
>>> +{
>>> +	struct pnv_phb *phb = npe->phb;
>>> +	int64_t ret;
>>> +
>>> +	if (npe->table_group.tables[0]) {
>>> +		/* Disable 32bit window */
>>> +		pnv_pci_unlink_table_and_group(npe->table_group.tables[0],
>>> +				&npe->table_group);
>>> +		npe->table_group.tables[0] = NULL;
>>> +		ret = opal_pci_map_pe_dma_window(phb->opal_id, npe->pe_number,
>>> +				npe->pe_number,
>>> +				0/* levels */, 0/* table address */,
>>> +				0/* table size */, 0/* page size */);
>>> +	} else {
>>> +		/* Disable bypass */
>>> +		ret = opal_pci_map_pe_dma_window_real(phb->opal_id,
>>> +				npe->pe_number, npe->pe_number,
>>> +				0 /* bypass base */, 0);
>>> +	}
>>
>> It's not immediately obvious to me why this is an if/else.  I'm
>> assuming that the way the kernel uses the NPU IOMMU it always either
>> has a 32-bit DMA window, or it's in bypass mode.  Is that inherent to
>> the way the hardware works, or just the way Linux uses it?
>
> Unlike the IODA2, the NPU has only a single TVE/window, meaning the NPU IOMMU
> is either disabled, in bypass, or has a 32-bit DMA window.


... or has a 64-bit DMA window.


> Alexey can comment on
> the if/else, but I agree that it isn't strictly necessary: calling either
> map_pe_dma_window() or map_pe_dma_window_real() with the arguments in both the
> if and else cases will zero the TVE, effectively disabling DMA.

Right, OPAL does essentially the same thing when I disable DMA (window or 
bypass).


>> I'm just worried that this could open an exploitable hole if the host
>> kernel ever changes so that it could have bypass windows and TCE
>> windows simultaneously active.  Is there any way to unconditionally
>> disable bypass *and* disable any existing DMA window?
>
> I'm not a VFIO expert, but from the perspective of the NPU HW this situation
> couldn't exist, for the reasons described above. Calling
> opal_pci_map_pe_dma_window/_real() with either a zero table size or a zero
> PCI memory size will unconditionally disable all DMA windows for that PE# on
> the NPU.
>
> - Alistair
>
>> Apart from that nit,
>>
>> Reviewed-by: David Gibson <david@gibson.dropbear.id.au>


Thanks!

However there are still issues with the patchset which I am planning to fix,
and at least one bug with DMA continuing after the GPU has been reset; when I
figure out what is going on, I'll respin.

Patch

diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
index 8e70221..7cb9f6a 100644
--- a/arch/powerpc/platforms/powernv/npu-dma.c
+++ b/arch/powerpc/platforms/powernv/npu-dma.c
@@ -12,6 +12,7 @@ 
 #include <linux/export.h>
 #include <linux/pci.h>
 #include <linux/memblock.h>
+#include <linux/iommu.h>
 
 #include <asm/iommu.h>
 #include <asm/pnv-pci.h>
@@ -262,3 +263,128 @@  void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass)
 		}
 	}
 }
+
+long pnv_npu_set_window(struct pnv_ioda_pe *npe, int num,
+		struct iommu_table *tbl)
+{
+	struct pnv_phb *phb = npe->phb;
+	int64_t rc;
+	const unsigned long size = tbl->it_indirect_levels ?
+		tbl->it_level_size : tbl->it_size;
+	const __u64 start_addr = tbl->it_offset << tbl->it_page_shift;
+	const __u64 win_size = tbl->it_size << tbl->it_page_shift;
+
+	pe_info(npe, "Setting up window#%d %llx..%llx pg=%lx\n", num,
+			start_addr, start_addr + win_size - 1,
+			IOMMU_PAGE_SIZE(tbl));
+
+	/* Ignore @num as there is just one window per NPU */
+	rc = opal_pci_map_pe_dma_window(phb->opal_id,
+			npe->pe_number,
+			npe->pe_number,
+			tbl->it_indirect_levels + 1,
+			__pa(tbl->it_base),
+			size << 3,
+			IOMMU_PAGE_SIZE(tbl));
+	if (rc) {
+		pe_err(npe, "Failed to configure TCE table, err %lld\n", rc);
+		return rc;
+	}
+
+	pnv_pci_link_table_and_group(phb->hose->node, num,
+			tbl, &npe->table_group);
+	pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
+
+	return rc;
+}
+
+long pnv_npu_unset_window(struct pnv_ioda_pe *npe, int num)
+{
+	struct pnv_phb *phb = npe->phb;
+	long ret;
+
+	pe_info(npe, "Removing DMA window #%d\n", num);
+
+	/* Ignore @num as there is just one window per NPU */
+	ret = opal_pci_map_pe_dma_window(phb->opal_id, npe->pe_number,
+			npe->pe_number,
+			0/* levels */, 0/* table address */,
+			0/* table size */, 0/* page size */);
+	if (ret)
+		pe_warn(npe, "Unmapping failed, ret = %ld\n", ret);
+	else
+		pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
+
+	pnv_pci_unlink_table_and_group(npe->table_group.tables[num],
+			&npe->table_group);
+
+	return ret;
+}
+
+/* Switch ownership from platform code to external user (e.g. VFIO) */
+void pnv_npu_take_ownership(struct pnv_ioda_pe *npe)
+{
+	struct pnv_phb *phb = npe->phb;
+	int64_t ret;
+
+	if (npe->table_group.tables[0]) {
+		/* Disable 32bit window */
+		pnv_pci_unlink_table_and_group(npe->table_group.tables[0],
+				&npe->table_group);
+		npe->table_group.tables[0] = NULL;
+		ret = opal_pci_map_pe_dma_window(phb->opal_id, npe->pe_number,
+				npe->pe_number,
+				0/* levels */, 0/* table address */,
+				0/* table size */, 0/* page size */);
+	} else {
+		/* Disable bypass */
+		ret = opal_pci_map_pe_dma_window_real(phb->opal_id,
+				npe->pe_number, npe->pe_number,
+				0 /* bypass base */, 0);
+	}
+
+	if (ret != OPAL_SUCCESS)
+		pe_err(npe, "Failed to remove DMA window");
+	else
+		pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
+}
+
+/* Switch ownership from external user (e.g. VFIO) back to core */
+void pnv_npu_release_ownership(struct pnv_ioda_pe *npe)
+{
+	struct pnv_phb *phb = npe->phb;
+	int64_t ret;
+
+	/* Disable a window (not bypass) as external users do not use bypass */
+	ret = opal_pci_map_pe_dma_window(phb->opal_id, npe->pe_number,
+			npe->pe_number,
+			0/* levels */, 0/* table address */,
+			0/* table size */, 0/* page size */);
+	if (ret != OPAL_SUCCESS)
+		pe_err(npe, "Failed to remove DMA window");
+	else
+		pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
+}
+
+struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe)
+{
+	struct pnv_phb *phb = npe->phb;
+	struct pci_bus *pbus = phb->hose->bus;
+	struct pci_dev *npdev, *gpdev = NULL, *gptmp;
+	struct pnv_ioda_pe *gpe = get_gpu_pci_dev_and_pe(npe, &gpdev);
+
+	if (!gpe || !gpdev)
+		return NULL;
+
+	list_for_each_entry(npdev, &pbus->devices, bus_list) {
+		gptmp = pnv_pci_get_gpu_dev(npdev);
+
+		if (gptmp != gpdev)
+			continue;
+
+		pe_info(gpe, "Attached NPU %s\n", dev_name(&npdev->dev));
+		iommu_group_add_device(gpe->table_group.group, &npdev->dev);
+	}
+
+	return gpe;
+}
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index 922c74c..95f9e19 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -2273,6 +2273,93 @@  static struct iommu_table_group_ops pnv_pci_ioda2_ops = {
 	.take_ownership = pnv_ioda2_take_ownership,
 	.release_ownership = pnv_ioda2_release_ownership,
 };
+
+static int gpe_table_group_to_npe_cb(struct device *dev, void *opaque)
+{
+	struct pci_controller *hose;
+	struct pnv_phb *phb;
+	struct pnv_ioda_pe **ptmppe = opaque;
+	struct pci_dev *pdev = container_of(dev, struct pci_dev, dev);
+	struct pci_dn *pdn = pci_get_pdn(pdev);
+
+	if (!pdn || pdn->pe_number == IODA_INVALID_PE)
+		return 0;
+
+	hose = pci_bus_to_host(pdev->bus);
+	phb = hose->private_data;
+	if (phb->type != PNV_PHB_NPU)
+		return 0;
+
+	*ptmppe = &phb->ioda.pe_array[pdn->pe_number];
+
+	return 1;
+}
+
+/*
+ * This returns PE of associated NPU.
+ * This assumes that NPU is in the same IOMMU group with GPU and there is
+ * no other PEs.
+ */
+static struct pnv_ioda_pe *gpe_table_group_to_npe(
+		struct iommu_table_group *table_group)
+{
+	struct pnv_ioda_pe *npe = NULL;
+	int ret = iommu_group_for_each_dev(table_group->group, &npe,
+			gpe_table_group_to_npe_cb);
+
+	BUG_ON(!ret || !npe);
+
+	return npe;
+}
+
+static long pnv_pci_ioda2_npu_set_window(struct iommu_table_group *table_group,
+		int num, struct iommu_table *tbl)
+{
+	long ret = pnv_pci_ioda2_set_window(table_group, num, tbl);
+
+	if (ret)
+		return ret;
+
+	ret = pnv_npu_set_window(gpe_table_group_to_npe(table_group), num, tbl);
+	if (ret)
+		pnv_pci_ioda2_unset_window(table_group, num);
+
+	return ret;
+}
+
+static long pnv_pci_ioda2_npu_unset_window(
+		struct iommu_table_group *table_group,
+		int num)
+{
+	long ret = pnv_pci_ioda2_unset_window(table_group, num);
+
+	if (ret)
+		return ret;
+
+	return pnv_npu_unset_window(gpe_table_group_to_npe(table_group), num);
+}
+
+static void pnv_ioda2_npu_take_ownership(struct iommu_table_group *table_group)
+{
+	pnv_ioda2_take_ownership(table_group);
+	pnv_npu_take_ownership(gpe_table_group_to_npe(table_group));
+}
+
+static void pnv_ioda2_npu_release_ownership(
+		struct iommu_table_group *table_group)
+{
+	pnv_npu_release_ownership(gpe_table_group_to_npe(table_group));
+	pnv_ioda2_release_ownership(table_group);
+}
+
+static struct iommu_table_group_ops pnv_pci_ioda2_npu_ops = {
+	.get_table_size = pnv_pci_ioda2_get_table_size,
+	.create_table = pnv_pci_ioda2_create_table,
+	.set_window = pnv_pci_ioda2_npu_set_window,
+	.unset_window = pnv_pci_ioda2_npu_unset_window,
+	.take_ownership = pnv_ioda2_npu_take_ownership,
+	.release_ownership = pnv_ioda2_npu_release_ownership,
+};
 #endif
 
 static void pnv_pci_ioda_setup_opal_tce_kill(struct pnv_phb *phb)
@@ -3012,6 +3099,7 @@  static void pnv_pci_ioda_setup_DMA(void)
 {
 	struct pci_controller *hose, *tmp;
 	struct pnv_phb *phb;
+	struct pnv_ioda_pe *pe, *gpe;
 
 	list_for_each_entry_safe(hose, tmp, &hose_list, list_node) {
 		pnv_ioda_setup_dma(hose->private_data);
@@ -3020,6 +3108,23 @@  static void pnv_pci_ioda_setup_DMA(void)
 		phb = hose->private_data;
 		phb->initialized = 1;
 	}
+
+	/*
+	 * Now we have all PHBs discovered, time to add NPU devices to
+	 * the corresponding IOMMU groups.
+	 */
+	list_for_each_entry_safe(hose, tmp, &hose_list, list_node) {
+		phb = hose->private_data;
+
+		if (phb->type != PNV_PHB_NPU)
+			continue;
+
+		list_for_each_entry(pe, &phb->ioda.pe_dma_list, dma_link) {
+			gpe = pnv_pci_npu_setup_iommu(pe);
+			if (gpe)
+				gpe->table_group.ops = &pnv_pci_ioda2_npu_ops;
+		}
+	}
 }
 
 static void pnv_pci_ioda_create_dbgfs(void)
diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
index e117bd8..ebc6ee4 100644
--- a/arch/powerpc/platforms/powernv/pci.h
+++ b/arch/powerpc/platforms/powernv/pci.h
@@ -244,5 +244,11 @@  extern void pe_level_printk(const struct pnv_ioda_pe *pe, const char *level,
 /* Nvlink functions */
 extern void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass);
 extern void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_phb *phb, bool rm);
+extern struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe);
+extern long pnv_npu_set_window(struct pnv_ioda_pe *npe, int num,
+		struct iommu_table *tbl);
+extern long pnv_npu_unset_window(struct pnv_ioda_pe *npe, int num);
+extern void pnv_npu_take_ownership(struct pnv_ioda_pe *npe);
+extern void pnv_npu_release_ownership(struct pnv_ioda_pe *npe);
 
 #endif /* __POWERNV_PCI_H */
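For reference, the window geometry arithmetic in pnv_npu_set_window() above can be pulled out into a standalone sketch. The struct below is a pared-down stand-in for the kernel's iommu_table (the field names match; everything else is illustrative): it_offset and it_size are counted in IOMMU pages, and each TCE entry is 8 bytes, which is where the `size << 3` passed to OPAL comes from.

```c
#include <assert.h>
#include <stdint.h>

/* Pared-down stand-in for the iommu_table fields used by the patch. */
struct tbl_geom {
	uint64_t it_offset;              /* window start, in IOMMU pages */
	uint64_t it_size;                /* window length, in IOMMU pages */
	unsigned int it_page_shift;      /* log2 of the IOMMU page size */
	unsigned int it_indirect_levels; /* 0 = single-level TCE table */
	uint64_t it_level_size;          /* entries per level if multi-level */
};

/* First bus address covered by the window. */
static uint64_t win_start(const struct tbl_geom *t)
{
	return t->it_offset << t->it_page_shift;
}

/* Size of the DMA window in bytes. */
static uint64_t win_bytes(const struct tbl_geom *t)
{
	return t->it_size << t->it_page_shift;
}

/* Byte size of one level of the TCE table: 8 bytes per TCE entry. */
static uint64_t tce_table_bytes(const struct tbl_geom *t)
{
	uint64_t entries = t->it_indirect_levels ? t->it_level_size
						 : t->it_size;
	return entries << 3;
}
```

For example, a 2GB window starting at bus address 0 with 4K IOMMU pages has it_size = 0x80000, so the single-level TCE table itself occupies 4MB.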